Centralized access with iSCSI wraps it up: Open source SANs, part 4

Our expert explains how to configure an internet Small Computer System Interface (iSCSI) on an open source storage area network.

In this tip, you complete open source storage area network (SAN) configuration with a final element: centralized access. An iSCSI target service manages storage and can provide this access. We will configure one together.

If you've read the three previous parts of this tip, you should now have two servers running and a Distributed Replicated Block Device (DRBD) available between them. The iSCSI target service will draw the two servers and DRBD together to create a fully functional SAN. Ready?

Configuring iSCSI
In general, there are two types of SANs: iSCSI-based and Fibre Channel-based. A Fibre Channel SAN requires costly hardware and therefore is not an ideal option for building an affordable open source SAN. In this configuration, we'll use iSCSI. In a typical iSCSI configuration, the iSCSI target is the service provided by a SAN. This iSCSI target provides iSCSI access to a shared device. In this case, the shared device will be the drbd0 device that you created in Setting up a DRBD in an open source SAN. We will make the DRBD highly available once we have configured the iSCSI target.

Configuring iSCSI is an easy, two-step procedure. First, you'll use YaST to set up the iSCSI service. After that, you'll make the iSCSI target service highly available.

  • First, from one of the servers in your cluster, start YaST by entering the yast2 command. If the server runs a graphical user interface (GUI), you could also select YaST from the server's Start Menu, but a typical Linux server doesn't run a GUI.
  • So, open an SSH session to the server and enter the yast2 command (a command-line shortcut is shown after Figure 1). Still with me?
  • Select Miscellaneous > iSCSI Target. This may prompt you to install software first. If the prompt comes up, install the software. Once done, you'll see an interface as in Figure 1.

Figure 1: The YaST iSCSI Target Overview window offers an easy interface to configure iSCSI.
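Incidentally, if you'd rather skip the menu navigation, you can usually start the iSCSI target module directly from your SSH session. On SLES the module name is typically iscsi-server, but that name is an assumption and may differ in your version:

ssh root@san1
yast2 iscsi-server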

  • On the Service tab, make sure that the Manual option is selected. This is important because the cluster will manage the iSCSI target startup, and you must make sure that it will not start automatically. Next, select the Targets tab (see Figure 2). On it you'll see an example iSCSI target. It is not functional, so select it and then click Delete to remove it.

Figure 2: From the Targets tab, remove the example iSCSI target.

  • Now click Add to add a new iSCSI target. The target name is generated automatically, which is fine. For the identifier, the server also proposes a unique, automatically generated ID, but you can replace it with something more readable; pick any name you like. Next, click the Add button.

Note: The logical unit number (LUN) that you enter here must be unique for the target.

  • Make sure that the option Type=fileio is selected and browse to the DRBD device -- /dev/drbd0 -- to configure that as the iSCSI target storage device. No other information is required here, so click OK to close this window and then Next to proceed.

Figure 3: Configure the iSCSI target to share the DRBD device.

  • You do not need to change any settings in the authentication settings window. Click Next to proceed. When you are back in the Overview window, click Finish to write the configuration to your server.
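When you click Finish, YaST writes the target definition to /etc/ietd.conf. The exact contents depend on the target name and LUN you entered, but a minimal sketch of what the file might look like follows below; the IQN shown here is only an example:

Target iqn.2008-08.com.example.san:iscsitarget
        Lun 0 Path=/dev/drbd0,Type=fileio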

After creating the iSCSI target configuration on the first server, you need to make sure that the second server has the same configuration. You can do so by copying the /etc/ietd.conf file to the other server. Assuming that san2 is the other server, use the following to copy the configuration file from a san1 console:

scp /etc/ietd.conf root@san2:/etc/ietd.conf

At this point the iSCSI target is usable, but it will not start automatically. You have to add it to the cluster so that the cluster can start it. To do so, you need two components: a unique IP address on which the iSCSI target can be reached and a resource that starts the iSCSI target service. The following procedure describes how to add these components:

  • Start the hb_gui utility from one of the servers.
  • Select the iSCSItargetGroup you created earlier, right-click on it, select Add New Item from the menu and select Native as the item type.
  • Next, enter the Resource ID iSCSI_IP_Address and, from the Type list, select IPaddr2.
  • In the parameters section of the interface, you'll see that the IP parameter has been added automatically. Click in the Value column and enter a unique IP address here.

Figure 4: Enter a unique IP address for the iSCSI target.

  • Now click the Add Parameter button and select the parameter cidr_netmask. Enter the netmask your IP address has to use. Enter this in Classless Inter-Domain Routing (CIDR) format rather than in dotted decimal format. So, 24 -- not 255.255.255.0. Click OK to add it.

In the event that your server has multiple network boards, you should also make sure that the IP address binds to the right one. To accomplish this, click Add Parameter once more and select nic from the name drop-down list. Then enter the name of the interface to which you want to assign the IP address and click Add to add the resource.

In addition to the unique IP address the target will use, you need to create a resource for the iSCSI service. Right-click on the iSCSItargetGroup and select Add New Item from the menu. When asked what kind of resource you want to add, select Native Resource. Enter iSCSI_target as the name of the resource and select iscsitarget (lsb) from the list of available resource types. Click Add to add the iSCSI target resource.
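If you are curious what hb_gui builds behind the scenes, the group ends up in the cluster information base (CIB) roughly like the fragment below. This is only a sketch: the exact XML schema and the id attributes vary by Heartbeat version, the IP address and interface name are example values, and the DRBD resource you created earlier in this series would appear in the same group as well (it is omitted here). The fragment simply illustrates the IPaddr2 resource with its parameters next to the iscsitarget LSB resource:

<group id="iSCSItargetGroup">
  <primitive id="iSCSI_IP_Address" class="ocf" provider="heartbeat" type="IPaddr2">
    <instance_attributes id="iSCSI_IP_Address_attrs">
      <attributes>
        <nvpair id="ip_addr" name="ip" value="192.168.1.100"/>
        <nvpair id="ip_mask" name="cidr_netmask" value="24"/>
        <nvpair id="ip_nic" name="nic" value="eth0"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="iSCSI_target" class="lsb" type="iscsitarget"/>
</group>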

Now everything is in place to start the three associated resources. Right-click the iSCSI target group and select Start from the menu. This will start the resources. Since they are in the same resource group, they will all start on the same node.
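To verify from the command line that everything actually came up on one of the nodes, you can have a look at the cluster status. With the Heartbeat tools, a one-shot status overview is available through crm_mon; the output format varies by version, but you should see the resources of the group running together on a single node:

crm_mon -1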

Congratulations. You now have your open source SAN and essential resources ready to go!

Connecting to the SAN
Now that the SAN is in place, all you need to do is connect to it. To do that, initiate an iSCSI session from the server that you want to connect to the SAN. You can do so with iSCSI initiator software, which is available for almost all server operating systems.
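On a Linux server with the open-iscsi initiator installed, for example, discovering and logging in to the target might look like this. The IP address should be the cluster IP address you assigned to the iSCSI target group, and the IQN is whatever your target reports during discovery; both values below are only placeholders:

iscsiadm -m discovery -t sendtargets -p 192.168.1.100
iscsiadm -m node -T iqn.2008-08.com.example.san:iscsitarget -p 192.168.1.100 --login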

Alternatively, you can use an iSCSI host bus adapter (HBA). QLogic HBAs, for instance, are quite popular. These cards contain an integrated iSCSI initiator that can connect to the SAN. It's a good idea to use a hardware HBA because it offers the best performance: the iSCSI processing runs on its own dedicated chip and doesn't have to compete with other processes for CPU cycles and memory.

Configuring the iSCSI connection to the SAN is beyond the scope of this article. To learn more about iSCSI and SANs, take a look at this guide.

A final precaution
You should know what to expect -- good and bad -- from your open source SAN and the associated resources covered in this series. Of course, you can expect availability. But what if the active SAN server goes down? In that case, the iSCSI initiator will lose its session; that is unavoidable with iSCSI. It is also why we ensure that if the active SAN node fails, the other node takes over. The servers connected to the SAN, however, will need to re-establish their iSCSI sessions before they can reach the storage again.
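With the Linux open-iscsi initiator, for instance, you could check after a failover whether the session survived and simply log in again if it did not. As before, the IQN and IP address are placeholders for the values used in your setup:

iscsiadm -m session
iscsiadm -m node -T iqn.2008-08.com.example.san:iscsitarget -p 192.168.1.100 --login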

You can achieve high availability, though not uninterrupted availability.

ABOUT THE AUTHOR: Sander van Vugt is an author and independent technical trainer, specializing in Linux since 1994. Van Vugt is also a technical consultant for high-availability clustering and performance optimization, as well as an expert on SUSE Linux Enterprise Desktop 10 administration. Now that you're finished with this series on SANs, take a look at Sander's tip on Monitoring Network Performance.

This was first published in August 2008
