Editor's note: This is the fifth part in a series of articles on Unix-to-Linux migrations on SearchEnterpriseLinux.com.
After some initial testing, what are some of the major gotchas you should be looking at now? What kinds of errors or problems commonly arise during Unix-to-Linux migrations? Where should you turn for support? You can sidestep some pitfalls with a little vigilance and some good planning.
Identify potential hardware challenges
The biggest gotcha when considering Unix-to-Linux migrations is platform-dependent code, especially when you're moving from RISC to x86 platforms. This is where the concept of endianness comes in.
Endianness refers to the byte ordering used for data representation: it defines the order in which the bytes of a data element are stored in memory. The problem you may encounter is that x86 processors are little endian, while most RISC systems are big endian. If you're moving from RISC to x86, or the reverse, any code that depends on byte order must be modified during the port. These issues should be uncovered during the assessment stage, when you are looking for platform-dependent constraints.
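You can see the host's byte order directly from the command line. The sketch below, which works in any POSIX shell, writes two raw bytes and asks od to reinterpret them as one 16-bit integer in host byte order:

```shell
#!/bin/sh
# Write the bytes 0x12 then 0x34 (octal \022 \064), then reinterpret
# them as a single 16-bit integer in HOST byte order with od.
# A little-endian x86 host reads them as 0x3412; a big-endian RISC
# host reads them as 0x1234.
word=$(printf '\022\064' | od -An -tx2 | tr -d ' \n')

case $word in
    3412) echo "little endian" ;;
    1234) echo "big endian" ;;
esac
```

Code that assumes one of these two answers, for example by casting byte buffers to integers, is exactly the kind of code the assessment stage needs to flag.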
Some vendors have developed innovative solutions to get around this very issue. For example, IBM developed PowerVM Lx86, which is part of its midrange virtualization engine. It uses special software to automatically translate the instruction set to Power instructions so that they do not have to be compiled natively.
Although Linux could run on an IBM Power platform before, applications had to be recompiled natively for the platform. This is no longer a problem. The translator, which is part of PowerVM Lx86, transforms x86 Linux calls into Power Linux calls through a three-step process of decoding, optimization and code generation. This lends itself well to Web applications that repeat similar instructions, because frequently used code is cached in memory and does not need to be retranslated.
Another area to consider is applications that require kernel extensions and device drivers. These are not easy candidates to port, partly because most kernel APIs do not follow any stringent standards. API calls, the number of arguments they take and the process of loading kernel extensions will all work differently on the new platform.
Gauge application suitability and availability for Linux
Most commercial and Web applications are suitable to run on Linux. The availability piece is another story.
Although nearly all vendors today have moved their Unix applications to Linux, it's critical to make sure that your off-the-shelf application has this support before considering a migration. If it doesn't, you do not want to be in the position of having to port that yourself. For applications developed in-house, you'll need a strong development team to help you migrate applications.
Ask colleagues who have done this before how well your applications are likely to move to Linux. Find out how they are running now. And don't be afraid to go straight to your vendor for help. Both Red Hat and SUSE offer programs to help with migration efforts.
Deployment errors or problems
What type of errors or problems might you see when doing ports? This is where proper testing is essential. Anything can go wrong during porting, so establish a test environment and a lab that tries to break your systems before they are deployed into production.
A few years ago, after my group performed a major migration that appeared to go well, we started getting phone calls reporting that the payroll system could not process checks. This was alarming because we had gone through extensive unit testing as well as user acceptance testing (UAT). As it turned out, the problem was not caused by anything we did on the migration side. It stemmed from incompatibilities with some PC-based clients that were running an old version of the Oracle client.
This was an important part of our "lessons learned" document. In future migrations, I always made it a point to check the kind of clients that were accessing the server to ensure that this would not happen again.
Another problem area is Unix shell scripts. One would think that a shell script written on Unix would work the same way on Linux. This is not a correct assumption.
Any script that interfaces with your application must be tested carefully. The default shell on Linux is bash, which is based on the original Unix Bourne shell. In our case, the Unix systems used the Korn shell, and certain functions did not come across correctly because of this mismatch. Assume that your shell scripts will not work, and test every one of them.
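As a hypothetical illustration of the kind of mismatch involved: ksh provides a print builtin (print -u2 writes to standard error), while bash has no print builtin at all, so a Korn shell script that relies on it fails after the move. The log_error function below is an invented example showing the portable fix, a printf-based replacement that behaves identically in ksh and bash:

```shell
#!/bin/bash
# Hypothetical example of a ksh-to-bash fix. In ksh the original line
# might have been:   print -u2 "ERROR: $1"
# bash has no 'print' builtin, so that line fails on Linux. printf is
# POSIX and works the same way in both shells.
log_error() {
    printf 'ERROR: %s\n' "$1" >&2   # write the message to stderr
}

log_error "payroll batch job failed"
```

Sweeping your scripts for shell-specific builtins like this, and replacing them with POSIX equivalents, is far cheaper than discovering the failures in production.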
Getting support for problems
Support is dependent on the size of your IT department and the experience of your staff. Do you have several experienced Linux administrators who can pretty much do anything? Or are they mostly Unix administrators who have been trying to pick up Linux?
The level of support required correlates to the type of environment you have. Are your systems running CRM or payroll applications that can cost the company tens of thousands of dollars for every minute of downtime?
It's key to have vendor-specific support. Both Red Hat and Novell offer 24/7 support programs for their distributions. Some Unix hardware vendors, IBM for example, also offer their own support for Linux.
Get support from your hardware vendor, if possible. It doesn't hurt to have OS support from both the vendor that supports your Linux distribution, such as Novell for SUSE, and your hardware vendor. Again, weigh the financial impact of downtime on your organization as well as your staff's experience in dealing with it.
The next and final part of the series shows you how to train your Unix staff to manage Linux environments.
ABOUT THE AUTHOR: Ken Milberg is the President and Managing consultant of PowerTCO (formerly known as Unix-Linux Solutions), a NY-based IBM Business Partner. He is a technology writer and site expert for techtarget.com and provides Linux technical information and support at SearchEnterpriseLinux.com.
This was first published in June 2010