SUSE Linux administration: How to set up a cluster file system

Have you ever tried to let multiple nodes write to the same shared file system concurrently? That won't work on normal file systems. The cluster file system OCFS2 allows you to do just that. Using OCFS2, you will benefit from greater flexibility in setting up a cluster environment. In this article, you'll learn how to set it up in a SUSE Linux Enterprise Server (SLES) 10 environment.

First of all, you don't have to use OCFS2 for all cluster resources. It all depends on what you are doing with the resources. If you configure the resources in an active/passive configuration, where only one instance of the resource is active at any given time, you don't need OCFS2. In that case, it's better to use ext3 instead.

If, however, you are using resources in an active/active configuration -- which happens often in a database environment -- all instances of the resource need to be capable of writing to the same file system at the same time. That's exactly what OCFS2 is created to do.

Setting up a shared storage device

A cluster file system like OCFS2 is installed on a shared storage device. I assume that you already have your SAN installed and configured. I'll also assume that all necessary software packages are in place already. If the latter is not the case, you can use YaST2 to install the OCFS2 packages on all nodes.
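If the packages still need to be installed, a non-interactive YaST call is one option. The package names used here (ocfs2-tools and ocfs2console) are the usual ones on SLES 10, but verify them against your installation source:

    yast -i ocfs2-tools ocfs2console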

Make sure that the shared storage device is visible on all nodes that you want to configure for OCFS2; you can check this with the fdisk -l command. Then use a partitioning utility to create a partition on the shared device. For example, to create a 10 GB partition on the storage device /dev/sdb, use the following steps (a sample session follows the list):

  • As root, use fdisk /dev/sdb
  • Choose n to create a new partition. Next press p for a primary partition.
  • Now enter a 1 to specify the partition number.
  • Press Enter to accept the default starting cylinder of the partition.
  • Enter +10G to make it a 10 GB partition.
  • From the fdisk main menu, press w to write the new partition table to disk. This will close the fdisk interface.
  • Now use the partprobe command on all nodes that are involved in the OCFS2-setup. This will update the partition information on all nodes and make sure that the OCFS2 file system is accessible. You now have to make sure that all required services are started to set up OCFS2.
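For reference, the fdisk dialogue looks roughly like this; prompts and cylinder ranges vary with your fdisk version and disk geometry, so treat this as a sketch rather than exact output:

    # fdisk /dev/sdb
    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-1305, default 1): <Enter>
    Last cylinder or +size or +sizeM or +sizeK: +10G
    Command (m for help): w

    # partprobe /dev/sdb    (run this on every node)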

Configuring OCFS2

To get started with setting up OCFS2, use the command rco2cb enable on all nodes involved in the cluster setup. Now, on any node involved in the setup, start the OCFS2 Console utility by running the ocfs2console command.
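Before continuing, it is worth confirming that the O2CB cluster stack actually came up on every node. Assuming the standard SLES init-script links are in place, something like the following should tell you:

    rco2cb status
    chkconfig o2cb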

In the OCFS2 Console Utility, select Tasks > Format. Next, select the /dev/sdb1 partition you've just created. Then, after accepting all default values, click OK to continue. Formatting will now start. It will take some time before it completes.
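If you prefer the command line to the graphical console, the same formatting step can be performed with mkfs.ocfs2. The label used here (ocfs2data) is just an example; the defaults for block and cluster size are normally fine:

    mkfs.ocfs2 -L ocfs2data /dev/sdb1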

Once the partition is formatted, you have to set up the OCFS2 cluster configuration. To do this, from the OCFS2 Console, click Cluster > Configure Nodes. Use the Add button to add all nodes on which you want to set up the OCFS2 file system. Click Apply followed by Close to close the Node Configuration screen.
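Behind the scenes, this node configuration is written to /etc/ocfs2/cluster.conf. As a rough sketch, a two-node setup looks something like the following; the node names, IP addresses, and cluster name shown are made-up examples:

    cluster:
            node_count = 2
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.101
            number = 0
            name = node1
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.102
            number = 1
            name = node2
            cluster = ocfs2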

Now, select the device that you have formatted for OCFS2 and click Mount. Enter the directory where you want to mount it; /mnt is fine if it's just for a trial. This mounts the OCFS2 file system on the node where you are running OCFS2 Console. Next, click Cluster > Propagate to push the configuration to the other nodes involved. This uses SSH, so answer the prompts the SSH program gives you so that the configuration can be copied to each node.

Mount the OCFS2 file system on all other nodes as well. To do this, use the command mount -t ocfs2 /dev/sdb1 /mnt. After successfully mounting the file system, you'll be able to write files to it from every node. Use the commands insserv o2cb and insserv ocfs2 on all nodes involved in the setup to make sure that the OCFS2 file system can be mounted after restarting your servers.
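A quick way to confirm that all nodes really share the same file system is to create a file on one node and list it on another; the file name here is arbitrary:

    node1:~ # touch /mnt/written-on-node1
    node2:~ # ls /mnt
    written-on-node1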

Once you have set up the OCFS2 file system successfully on all nodes, you have to make sure that it is mounted automatically the next time you restart your servers. To do this, first create the directory where you want to mount the file system after a reboot; I strongly advise using the same directory on all nodes involved. Next, edit /etc/fstab so the file system is mounted automatically at boot. When doing this, make sure that the file system has the option _netdev added in the options column. This option ensures that the file system is mounted only after the network has been enabled.
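For example, assuming /data/ocfs2 is the mount point you created on every node (any directory will do, as long as it is the same everywhere), the /etc/fstab entry could look like this:

    /dev/sdb1    /data/ocfs2    ocfs2    _netdev    0 0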

In this article you've learned how to set up OCFS2 as a shared file system in a cluster environment. This is a very useful solution in an active/active cluster environment where multiple nodes have to write to the same file system at the same time.
 

This was first published in March 2007
