First of all, you don't have to use OCFS2 for all cluster resources. It all depends on what you are doing with the resources. If you configure the resources in an active/passive configuration, where only one instance of the resource is active at any given time, you don't need OCFS2. In that case, it's better to use ext3 instead.
If, however, you are using resources in an active/active configuration -- which happens often in a database environment -- all instances of the resource need to be capable of writing to the same file system at the same time. That's exactly what OCFS2 was created to do.
Setting up a shared storage device
A cluster file system like OCFS2 is installed on a shared storage device. I assume that you already have your SAN installed and configured. I'll also assume that all necessary software packages are in place already. If the latter is not the case, you can use YaST2 to install the OCFS2 packages on all nodes.
Make sure that the shared storage device is visible on all nodes that you want to configure for OCFS2. You can do this with the fdisk -l command. Next, create a partition on the shared storage device:
- As root, use fdisk /dev/sdb
- Choose n to create a new partition. Next press p for a primary partition.
- Now enter a 1 to specify the partition number.
- Press Enter to accept the default starting cylinder of the partition.
- Enter +10G to make it a 10 Gigabytes partition.
- From the fdisk main menu, press w to write the new partition table to disk. This will also close the fdisk interface.
- Now use the partprobe command on all nodes that are involved in the OCFS2 setup. This updates the partition information on all nodes and makes sure that the new partition is accessible everywhere. Next, you have to make sure that all required services are started to set up OCFS2.
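The partitioning steps above can be sketched as a short script. This is only a sketch: it assumes the shared LUN shows up as /dev/sdb on every node, so adjust the device name for your own SAN before running it.

```shell
#!/bin/sh
# Sketch of the fdisk dialogue from the steps above.
# Assumption: the shared storage device is /dev/sdb on this node.
DEVICE=${DEVICE:-/dev/sdb}

if [ -b "$DEVICE" ]; then
    # n = new partition, p = primary, 1 = partition number,
    # blank line = default starting cylinder, +10G = size, w = write and quit
    printf 'n\np\n1\n\n+10G\nw\n' | fdisk "$DEVICE"
    # Re-read the partition table; repeat partprobe on every other node too.
    partprobe
    PART_STATUS=done
else
    echo "$DEVICE is not a block device on this node; check your SAN setup"
    PART_STATUS=skipped
fi
```

Because fdisk reads its commands from standard input, piping the keystrokes in like this reproduces the interactive session non-interactively.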
To get started with setting up OCFS2, use the command rco2cb enable on all nodes involved in the cluster setup. Then, on any node involved in the setup, start the OCFS2 Console utility by running the ocfs2console command.
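Enabling the cluster stack can also be scripted. This hedged sketch assumes the SLES rco2cb shortcut for the o2cb init script is present (it comes with the ocfs2-tools packages) and simply reports if it is not:

```shell
#!/bin/sh
# Run on every node involved in the cluster setup.
# Assumption: the SLES rco2cb shortcut for /etc/init.d/o2cb exists.
if command -v rco2cb >/dev/null 2>&1; then
    rco2cb enable          # load and enable the O2CB cluster stack
    O2CB=enabled
else
    echo "rco2cb not found; are the OCFS2 packages installed on this node?"
    O2CB=missing
fi
```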
In the OCFS2 Console Utility, select Tasks > Format. Next, select the /dev/sdb1 partition you've just created. Then, after accepting all default values, click OK to continue. Formatting will now start. It will take some time before it completes.
Once the partition is formatted, you have to set up the OCFS2 cluster configuration. To do this, from the OCFS2 Console, click Cluster > Configure Nodes. Use the Add button to add all nodes on which you want to set up the OCFS2 file system. Click Apply followed by Close to close the Node Configuration screen.
Now, select the device that you have formatted for use with OCFS2 and click Mount. Enter the directory where you want to mount it; /mnt is fine if it's just for a trial. This mounts the OCFS2 file system on the node where you are using OCFS2 Console. From the OCFS2 Console, click Cluster > Propagate to push the configuration to the other nodes that are involved. SSH is used for this, so answer the prompts the SSH program gives you to let the configuration be copied to each node.
Mount the OCFS2 file system on all other nodes as well. To do this, use the command mount -t ocfs2 /dev/sdb1 /mnt. After successfully mounting the file system, you'll be able to write files to it. Use the commands insserv o2cb and insserv ocfs2 on all nodes involved in the setup to make sure that the OCFS2 file system can be mounted after restarting your servers.
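The per-node commands above can be collected into one short script. Again a sketch, assuming the formatted partition is /dev/sdb1 and using the /mnt trial mount point from the text:

```shell
#!/bin/sh
# Run on each remaining node in the cluster.
# Assumptions: formatted partition is /dev/sdb1, trial mount point is /mnt.
DEVICE=/dev/sdb1
MOUNTPOINT=/mnt

if [ -b "$DEVICE" ]; then
    mount -t ocfs2 "$DEVICE" "$MOUNTPOINT"
    # Register the cluster services so they start at boot.
    insserv o2cb
    insserv ocfs2
    MOUNT_STATUS=mounted
else
    echo "$DEVICE is not visible here; run partprobe and check the SAN"
    MOUNT_STATUS=skipped
fi
```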
Once you have set up the OCFS2 file system successfully on all nodes, you have to make sure that it is mounted automatically the next time you restart your server. To do this, first make a directory where you want to mount the file system after rebooting the server. I strongly advise using the same directory on all nodes involved. Next, edit /etc/fstab to mount the file system automatically when rebooting. When doing this, make sure that the file system has the option _netdev added in the options column. This option ensures that the file system is mounted only after the network has been enabled.
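Such an fstab entry could look like the fragment below. The /shared/ocfs2 mount point is only a placeholder example; use whatever directory you created on your nodes.

```
# /etc/fstab -- example entry; /shared/ocfs2 is a placeholder mount point
/dev/sdb1   /shared/ocfs2   ocfs2   _netdev   0 0
```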
In this article you've learned how to set up OCFS2 as a shared file system in a cluster environment. This is a very useful solution in an active/active cluster environment where multiple nodes have to write to the same file system at the same time.
This was first published in March 2007