
Setting up DRBD in an open source SAN: Open source SANs, part 2

As we established in part one of this series on open storage area networks (SANs), building an open source SAN provides a cost-effective alternative for companies with a tight budget. Now that we've covered the merits and some of the important considerations in creating open source SANs, we'll explain how to set up the Distributed Replicated Block Device (DRBD) service, which provides replicated storage in a SAN.

Check out part 1 of this four-part tip on SANs plus related content below, and watch for parts 3 and 4 soon:

Build your own iSCSI SAN appliances, save money: Open source SANs, part 1

SAN consolidation reduces costs, boosts performance

Network architect: Building an affordable SAN on Linux

Configuring DRBD
Once your servers are up and running based on the installation guidelines we discussed in part one, it's time to take the first true step in the SAN configuration and set up the DRBD device. First, ensure that you have a storage device available on each of the servers involved in setting up your SAN. In this article, I'm working with the /dev/system/DRBD device; the name may be different in your particular configuration. It is also a good idea to use Ron Terry's add_drbd_resource.sh script, which you can download from the Internet. This script creates the configuration files for you automatically, which helps prevent typing errors. The procedure below assumes that you have two servers, named san1 and san2. When configuring your own, you can of course replace these names with the names of your servers.
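If you still need to create that storage device, the path /dev/system/DRBD suggests an LVM logical volume named DRBD in a volume group named system. A minimal sketch of creating it on each server could look like the following; the physical disk (/dev/sdb) and the size are assumptions here, so substitute values that match your own hardware:

# Run on both san1 and san2; adjust the disk and size to your setup
pvcreate /dev/sdb
vgcreate system /dev/sdb
lvcreate -n DRBD -L 100G system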

  • Download the add_drbd_resource.sh script from the Shell Scripts page. Put it in user root's home directory and make it executable using chmod +x add_drbd_resource.sh.
  • Run the add_drbd_resource.sh script from san1 and provide the following values:
  • Resource name: r0
  • Number of the next DRBD device: 0
  • Name of the block device to be mirrored: /dev/system/DRBD
  • Name of the first server: san1
  • IP address of the first server: (your server's IP address)
  • Name of the second server: san2
  • IP address of the second server: (your server's IP address)
  • TCP Port number: 7788
  • Syncer rate: This refers to the speed used when synchronizing changes between the DRBD devices. It can be as fast as the LAN connection between your servers, so use 100 Mb on a 100 Mb LAN, or 1,000 Mb on a Gb LAN. If you want to make sure that DRBD doesn't consume all available bandwidth, set it somewhat lower than these amounts.
  • Synchronization protocol: Use protocol C at all times, because it gives you the optimal combination of speed and reliability.

After entering all required parameters, write the configuration to the /etc/drbd.conf file. This results in a file with contents much like those shown in Listing 1 below:

Listing 1: Example of the drbd.conf file.
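The original text of Listing 1 isn't reproduced here, but based on the values entered above (and the IP addresses that appear in the drbdadm output later in this article), the generated /etc/drbd.conf for DRBD 0.7 would look roughly like the following sketch; treat it as illustrative, and expect the details to match whatever you entered in the script:

resource r0 {
  protocol C;

  disk {
    on-io-error detach;
  }

  syncer {
    rate       100M;
    group      1;
    al-extents 257;
  }

  on san1 {
    device    /dev/drbd0;
    disk      /dev/system/DRBD;
    address   192.168.1.230:7788;
    meta-disk internal;
  }

  on san2 {
    device    /dev/drbd0;
    disk      /dev/system/DRBD;
    address   192.168.1.240:7788;
    meta-disk internal;
  }
}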

Now that the configuration is in place on san1, you can load the DRBD kernel module using modprobe drbd. If this gives you a "Module not found" error, make sure that you have installed the appropriate module for the kernel that you use. (To find out which kernel that is, use uname -r.) Once the kernel module is loaded, it's time to run a first test. To do so, use the drbdadm command with the -d (dry run) option and the adjust argument, followed by the name of the resource you defined earlier (r0 in this example). This shows whether the DRBD device matches the configuration file that you've just created. Run the following command:

drbdadm -d adjust r0

This command prints the drbdsetup commands it would execute. If the output looks like the example below, you are ready to copy the configuration to the second server.

drbdsetup /dev/drbd0 disk /dev/system/DRBD internal -1 --on-io-error=detach

drbdsetup /dev/drbd0 syncer --rate=100m --group=1 --al-extents=257

drbdsetup /dev/drbd0 net 192.168.1.230:7788 192.168.1.240:7788 C

Now that the configuration checks out on san1, you can copy it to the other server:

scp /etc/drbd.conf san2:/etc/drbd.conf

Next, you need to load the DRBD module and test the DRBD device on the second node as well. Enter the following two commands from san2:

modprobe drbd

drbdadm -d adjust r0

If this all looks good, you can now start the DRBD device. First, add the service to your current runlevels using insserv drbd on both servers. Next, use rcdrbd start on both servers to start the service. This enables the DRBD device, but doesn't yet put it in a synchronized state. Finally, enter the following command on one of the servers to tell that server to act as the primary (the active) server:

drbdsetup /dev/drbd0 primary --do-what-I-say
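To recap the activation steps in command form: the first two commands run on both san1 and san2, and the last one runs only on the server that should become primary.

insserv drbd
rcdrbd start
drbdsetup /dev/drbd0 primary --do-what-I-say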

The DRBD service is now up and running, and the primary-secondary relation should be established as well. Use rcdrbd status to verify that everything is working properly.

Listing 2: Using rcdrbd status to check that the DRBD device is working properly.

san1:/etc # rcdrbd status

drbd driver loaded OK; device status:
version: 0.7.22 (api:79/proto:74)
SVN Revision: 2572 build by lmb@dale, 2006-10-25 18:17:21
0: cs:SyncSource st:Primary/Secondary ld:Consistent
ns:2342668 nr:0 dw:0 dr:2350748 al:0 bm:12926 lo:0 pe:41 ua:2020 ap:0
[>...................] sync'ed: 2.3% (99984/102272)M
finish: 2:24:36 speed: 11,772 (11,368) K/sec
1: cs:Unconfigured

As you can see from the listing above, the DRBD device is set up but still synchronizing, which takes some time after you've created the device.
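If you want to follow the synchronization progress without rerunning the status command by hand, the same counters are exposed in /proc/drbd, so you can keep an eye on them with something like the following (assuming the watch utility is installed):

watch -n 10 cat /proc/drbd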

In this procedure, you have learned how to set up a DRBD device on a SAN network. That's a nice start if you want to create a mission-critical open source SAN, but it's not good enough on its own. As your servers are configured right now, if san1 goes down, you need to issue the drbdsetup primary command manually on the second server, which is not something you want to do by hand in a mission-critical environment. That is where a Heartbeat cluster comes in, and in the next part of this series, we'll discuss how to set up this cluster.

You now have your DRBD up and running. In part three of this series on open source SANs, you'll learn how to set up a Heartbeat high availability cluster and put the DRBD resources in the cluster, which ensures that, if the current primary server fails, the primary DRBD server will automatically fail over.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux. Van Vugt is also a technical consultant for high-availability clustering and performance optimization and an expert on SUSE Linux Enterprise Desktop 10 (SLED 10) administration.

This was first published in July 2008

Disclaimer: Our Tips Exchange is a forum for you to share technical advice and expertise with your peers and to learn from other enterprise IT professionals. TechTarget provides the infrastructure to facilitate this sharing of information. However, we cannot guarantee the accuracy or validity of the material submitted. You agree that your use of the Ask The Expert services and your reliance on any questions, answers, information or other materials received through this Web site is at your own risk.