White Box is a fantastic example of where open source can work.
That said, if I were looking at RHEL, I wouldn't look at White Box.
Someone giving serious consideration to RHEL isn't buying it for Linux. They are buying it for the guarantee that Red Hat will jump on the problem and begin an effort to solve it.
White Box is a serious candidate for someone who wants the benefits of RHEL without buying the guarantee that someone will jump on a problem. Buying that guarantee isn't necessary if your server isn't mission critical.

From an administrator's viewpoint, what are the advantages and disadvantages of working with the enterprise editions of SuSE and Red Hat?
It depends on what you're looking for. For someone just getting started with systems administration, I would recommend a desktop-friendly OS like Mandrake, so that things 'just work' out of the box; you can break one part of the system at a time while you learn. The great thing about Linux is that you can turn a desktop system into a small server quite easily, which makes it a great environment for learning.
On the commercial side, support contract requirements are more likely going to dominate the decision here. What are your expectations out of the vendor and can the vendor meet them?
If your intent is to use the shipping versions of the systems the way the vendor meant you to (you apply only vendor-approved patches and keep the system up to date), then the vendor that most closely supports your technical requirements is going to make the most sense.
Keep in mind that once you drop down to the CLI (command-line interface), most of these systems feel very much alike; /etc/passwd just isn't very different from vendor to vendor. If your intent is to tweak or customize the distribution, and vendor support isn't your primary concern (or you feel you have enough in-house expertise and intend to keep it), then it really won't matter what you start with. They'll all feel alike once you're done.

IT pros tell us that configuration and configuration management are big headaches. What configuration tasks are most often botched?
The most frequent sources of 'uh-oh' problems that I see revolve around changes that don't produce an immediate 'Yes, this worked' kind of feedback. This includes the likes of editing boot-time scripts/configurations and backups. Both require a follow-up test after the change to confirm that it worked, and that follow-up test is not trivial.

Could you offer an example of how a botched configuration can foul the works?
An example is when various 'try-this' steps leave the system in a state different from the one it boots into. When the administrator finally enters the magic command that works, he adds only that magic command to the appropriate rc [run commands] file without considering the other steps required -- or outright forgets to update the rc file. The next time the system boots, the resource isn't configured and things are broken.

How can these mistakes be avoided?
The key to avoiding these kinds of mistakes is to double check such changes and, if appropriate, schedule a reboot to test them. Ideally, changes should first be made and tested on a test machine that isn't in production.
Invest in automation -- plain and simple. I have seen or personally been involved in situations where a small number of people have been able to administer a large number of systems through thoughtful scripts.
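As a minimal sketch of that kind of automation (the host list, the `run_on_all` helper, and the ssh-based transport are my illustrations, not something from the interview):

```shell
#!/bin/sh
# Sketch: run one maintenance command on every server in a list.
# HOSTS and run_on_all are illustrative names; replace them with
# your own inventory and remote-execution mechanism.
HOSTS="web1 web2 db1"

run_on_all() {
    # ${REMOTE_SH:-ssh} defaults to ssh but lets another transport
    # (or a test) stand in for it.
    for h in $HOSTS; do
        echo "=== $h ==="
        ${REMOTE_SH:-ssh} "$h" "$@"
    done
}
```

Called as `run_on_all uptime`, it runs the same command on every host in turn, which is where a small team gets its leverage.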
Another key to success is keeping servers as close to identical as possible. That, of course, makes scripting that much more feasible, since the same script can run on a large number of systems.

What tips can you offer to IT shops that are migrating or moving up to a new version of Linux?
First and foremost, turn off things you don't need. This means going through and thinning out what you're starting at boot time.
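One way to sketch that thinning for inetd-style services (the function name, the sed approach, and writing to stdout instead of editing in place are my assumptions; on Red Hat-style systems `chkconfig <service> off` does the equivalent for boot-time rc services):

```shell
#!/bin/sh
# Sketch: emit a copy of an inetd-style config with every active
# service commented out, so nothing is offered.
disable_all_inetd_services() {
    conf=$1
    # Prefix each non-blank, non-comment line with '#'.
    sed -e 's/^[^#]/#&/' "$conf"
}
# After installing the edited file, 'kill -HUP' inetd (or restart
# xinetd) so it rereads its configuration.
```

Review the output, install it over the original (after backing it up), and signal the daemon to reload.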
If a machine is a dedicated Web server, you shouldn't need much more than Apache, SSH and a few base system tools (init, anacron, getty). Although the RAM and CPU benefits are obvious, they are not significant. What is significant is the reduced risk of something unrelated to your server's primary function going wrong and giving you a bad day. Don't forget that inetd/xinetd is a superserver supporting other services. If you don't need anything that inetd normally offers up, turn it off.

Do you have any other favorite tricks?
One of my older but still favorite tricks is using named pipes. A named pipe is a special type of file that allows one process to write to it and another to read from it. When the writer closes the file, the reader sees end-of-file and stops.
From the point of view of the processes, they are reading and writing normal files. This can be handy when an application won't send its output to stdout or stderr, but you want another process to read that output and immediately start working on it. A simple example:
[sshah@fp:~]$ mknod mypipe p
[sshah@fp:~]$ cat mypipe &
[sshah@fp:~]$ echo "Hello World" > mypipe
Hello World
[1]+  Done    cat mypipe
I used this trick, circa Red Hat 5, to restore an ext2 dump file from a tape on an Irix machine. Using mt on the Irix side, I positioned the tape at the right file, then cat'ed the non-rewinding tape device into rsh. The rsh connected to the Linux system and issued a "cat > namedpipefile." Waiting on the Linux machine was a restore command reading from namedpipefile.
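The same pattern can be tried locally without a tape drive or rsh. In this sketch (the file names, the `demo_pipe_restore` wrapper, and using `mkfifo` in place of `mknod ... p` are my choices), a background writer stands in for the remote `cat > namedpipefile` and a reader stands in for `restore`:

```shell
#!/bin/sh
# Local stand-in for the remote-restore trick: one process streams
# data into a named pipe while another consumes it as it arrives.
demo_pipe_restore() {
    dir=$(mktemp -d)
    mkfifo "$dir/namedpipefile"            # same idea as 'mknod namedpipefile p'
    # Writer: stands in for 'rsh linuxbox cat > namedpipefile'.
    echo "dump data" > "$dir/namedpipefile" &
    # Reader: stands in for 'restore -rf namedpipefile'; it reads
    # as the data arrives and stops at end-of-file.
    cat "$dir/namedpipefile"
    wait
    rm -rf "$dir"
}
```

The open on the pipe blocks until both a writer and a reader are attached, so neither side has to buffer the whole stream on disk.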