As we established in part one of this series on open storage area networks (SANs), building an open source SAN provides a cost-effective alternative for companies with a tight budget. Now that we've covered the merits and some of the important considerations in creating open source SANs, we'll explain how to set up the Distributed Replicated Block Device (DRBD) service, which provides replicated storage in a SAN.
Once your server is up and running based on the installation guidelines we discussed in part one, it's time to take the first true step in the SAN configuration and set up the DRBD device. First, ensure that you have a storage device available on each of the servers involved in setting up your SAN. In this article, I'm working with the /dev/system/DRBD device; the name may be different in your particular configuration. It is also a good idea to use Ron Terry's add_drbd_resource.sh script, which automatically creates the configuration files for you and so prevents typing errors. The procedure below assumes that you have two servers, named san1 and san2. In configuring your own, you can of course replace these names with the names of your servers.
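Before generating any configuration, it is worth confirming that the backing store really exists as a block device on each server. A minimal sketch, where check_backing_device is a hypothetical helper and /dev/system/DRBD is the example name used in this article:

```shell
# Hypothetical helper: confirm that the DRBD backing store is a
# block device before you configure DRBD on top of it.
check_backing_device() {
    if [ -b "$1" ]; then
        echo "OK: $1 is a block device"
    else
        echo "MISSING: $1 is not a block device"
        return 1
    fi
}

# Run on each server, for example:
# check_backing_device /dev/system/DRBD
```

Run this on both san1 and san2 before you start; an LVM logical volume such as /dev/system/DRBD will show up as a block device once the volume group is active.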
- Download the add_drbd_resource.sh script from the Shell Scripts page. Put it in user root's home directory and make it executable using chmod +x add_drbd_resource.sh.
- Run the add_drbd_resource.sh script from san1 and provide the following values:
- Resource name: r0
- Number of the next DRBD device: 0
- Name of the block device to be mirrored: /dev/system/DRBD
- Name of the first server: san1
- IP address of the first server: (your server's IP address)
- Name of the second server: san2
- IP address of the second server: (your server's IP address)
- TCP Port number: 7788
- Syncer rate: This refers to the speed used when synchronizing changes between the DRBD devices. It can be up to as fast as the LAN connection between your servers: use 100 Mb on a 100 Mb LAN, or 1,000 Mb on a Gigabit LAN. If you want to make sure that DRBD doesn't consume all available bandwidth, set it somewhat lower than these amounts.
- Synchronization protocol: Use protocol C at all times, because it gives you the best combination of speed and reliability.
After entering all required parameters, write the configuration to the /etc/drbd.conf file. This results in a file with contents much like that of the file from Listing 1 below:
Listing 1: Example of the drbd.conf file.
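Based on the parameters entered above and the dry-run output shown further below, the generated configuration looks roughly like the following sketch. This uses DRBD 0.7-style syntax; the IP addresses are the example values from this article, and the exact file the script writes for you may differ in detail:

```
resource r0 {
  protocol C;

  disk {
    on-io-error detach;
  }

  syncer {
    rate       100M;
    group      1;
    al-extents 257;
  }

  on san1 {
    device    /dev/drbd0;
    disk      /dev/system/DRBD;
    address   192.168.1.230:7788;
    meta-disk internal;
  }

  on san2 {
    device    /dev/drbd0;
    disk      /dev/system/DRBD;
    address   192.168.1.240:7788;
    meta-disk internal;
  }
}
```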
Now that the configuration is in place on san1, you can load the DRBD kernel module using modprobe drbd. If this gives you a "Module not found" error, make sure that you have installed the appropriate module for the kernel that you use (to find out which kernel that is, use uname -r). Now that the kernel module is loaded, it is time to run a first test. To do so, you use the drbdadm command with the -d (dry run) option and the adjust argument. This verifies that the DRBD device matches the configuration file that you've just created. Run the following command, using the resource name r0 that you entered earlier:
drbdadm -d adjust r0
This command yields some results. If they look like the output below, you are ready to copy the configuration to the second server.
drbdsetup /dev/drbd0 disk /dev/system/DRBD internal -1 --on-io-error=detach
drbdsetup /dev/drbd0 syncer --rate=100m --group=1 --al-extents=257
drbdsetup /dev/drbd0 net 192.168.1.230:7788 192.168.1.240:7788 C
Now that the DRBD device is up and running on san1, you can copy the configuration to the other server:
scp /etc/drbd.conf san2:/etc/drbd.conf
Next, you need to load the DRBD module and test the DRBD device on the second node as well. Enter the following two commands from san2:
modprobe drbd
drbdadm -d adjust r0
If this all looks good, you can now go on and start the DRBD device. First, add the service to your current runlevels, using insserv drbd on both servers. Next, use rcdrbd start on both servers to start the service. This enables the DRBD device, but doesn't yet put it in a synchronized state. Finally, enter the following command on one of the servers to tell it that it should act as the primary (that is, the active) server:
drbdsetup /dev/drbd0 primary --do-what-I-say
The DRBD service is now up and running, and the primary-secondary relation should be established as well. Use rcdrbd status to verify that it is all actually working.
Listing 2: Use rcdrbd status to test if the DRBD device is working well.
san1:/etc # rcdrbd status
drbd driver loaded OK; device status:
version: 0.7.22 (api:79/proto:74)
SVN Revision: 2572 build by lmb@dale, 2006-10-25 18:17:21
0: cs:SyncSource st:Primary/Secondary ld:Consistent ns:2342668 nr:0 dw:0 dr:2350748 al:0 bm:12926 lo:0 pe:41 ua:2020 ap:0
[>...................] sync'ed: 2.3% (99984/102272)M
finish: 2:24:36 speed: 11,772 (11,368) K/sec
As you can see from the listing above, the DRBD device is set up but still synchronizing, which takes some time after you've created the device.
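If you want to track the synchronization from a script, the percentage can be extracted from the status line. A minimal sketch, fed the sample line from the listing above; on a live system you would read /proc/drbd (or the rcdrbd status output) instead:

```shell
# Extract the "sync'ed" percentage from a DRBD status line.
# The sample input below is taken from the listing above.
status_line="[>...................] sync'ed:  2.3% (99984/102272)M"
percent=$(printf '%s\n' "$status_line" | sed -n "s/.*sync'ed:[[:space:]]*\([0-9.]*\)%.*/\1/p")
echo "sync progress: ${percent}%"
```

This makes it easy to poll the device from cron or a monitoring script until synchronization reaches 100%.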
In this procedure, you have learned how to set up a DRBD device for a SAN. That's a nice start if you want to create a mission-critical open source SAN, but it isn't enough by itself: as your servers are configured right now, if san1 goes down, you need to issue the drbdsetup primary command manually on the second server, which you don't want to do. That is where a Heartbeat cluster comes in. In part three of this series on open source SANs, you'll learn how to set up a Heartbeat high-availability cluster and put the DRBD resources in it, which ensures that, if the current primary server fails, the secondary server automatically takes over as the primary DRBD server.
ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer specializing in Linux. Van Vugt is also a technical consultant for high-availability clustering and performance optimization, and an expert on SUSE Linux Enterprise Desktop 10 (SLED 10) administration.