Been using a crystal ball to predict your company's future storage capacity needs? Well, that's cheaper than buying...
big proprietary software to do the job, according to Todd Sanders, chief network architect for Centrepetal Solution Strategies LLP, a systems integration and systems security firm in Bowie, Md. But, he said, building a storage area network (SAN) on Linux would be more effective than the crystal ball and a lot less expensive than the proprietary route. In this interview, he described how he and his team built a SAN on Linux for several businesses and military organizations.
Why do businesses balk at using SAN technologies, despite the fact that they can alleviate storage capacity shortages?
Todd Sanders: The No. 1 factor holding back SAN implementations is cost. For example, we worked with EMC on a project that cost about $20 million to prepare. The cost included floor-space, cooling, weight and power requirements, and so on.
A Linux solution allowed us to minimize the cost impact to the client and put in place a pilot solution at about one-tenth of that cost.
Besides lowering costs, what was the main objective of this project to build a SAN on Linux?
Sanders: The main objective of the project was to provide a turnkey solution to the client. The solution would provide redundancy, speed, reliability and centralized management of their hard disks.
What storage system does the SAN replace?
Sanders: Disk arrays that were attached to a Fibre Channel hub.
Why did you use Linux 2.4 instead of 2.6?
Sanders: We used a stripped-down build of the 2.4 Linux kernel. This was done to reduce the overhead of running a GUI, so the IPStor application [from FalconStor of Melville, N.Y.] could devote more resources to managing disk operations.
What products/components did you consider? What did you choose and why?
Sanders: We considered all the major vendors -- EMC, HP, IBM and so on -- as well as network-attached storage instead of a SAN.
Now, there are disk arrays and SANs that allow the user to configure the disks in a RAID 0, 1 or 5 environment, but they don't allow the user to dynamically add disks to a production environment. We determined that the IPStor and TiGi [from TiGi Corp. of Vienna, Va.] data managers and their solid-state disk approach would allow us to do just that.
What kind of problems is the SAN designed to solve?
Sanders: A SAN would solve capacity planning by letting us add disks on the fly; provide point-in-time restores of user data; run data recovery from a snapshot rather than on a production server, where it would take resources away from production work; and enhance processing capabilities at the server level.
Data management would be done by another server using high-speed connections -- Fibre Channel, Gigabit Ethernet, InfiniBand -- along with redundancy of disks. On the redundancy front, we were able to remove multiple disks during our testing phases and found that the system continued to work.
The system was set up in an R + 1 configuration, which allowed us to mirror the disks if there was a failure. Unlike RAID 5, we did not experience a performance hit, because all the disk I/O reads and writes were handled by the data managers. Finally, the administrator can centrally manage disk operations from a console that can be loaded on a laptop or workstation.
What is the advantage of enabling a user to create point-in-time snapshots and backups on non-production information?
Sanders: It allows the administrative staff to perform backups and restores on non-production environments, thus reducing the overhead they take from day-to-day operations. In addition, point-in-time snapshots allow the user to retrieve a file for restoration. Again, the restore process is done in seconds instead of hours: the user goes to a mounted file system, clicks on the file, then drags and drops it onto the active partition. With tape, the admin would have to retrieve the right tape, the tape software would have to find the right index for the point on the tape where the file is located, and only then restore the file back to disk.
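The drag-and-drop restore Sanders describes belongs to the IPStor console, but the same point-in-time idea can be sketched on a stock Linux box with LVM snapshots. A minimal sketch, assuming an LVM volume group and file paths that are purely hypothetical:

```shell
# Take a point-in-time snapshot of the production logical volume
# (assumes a volume group "vg0" with a volume "data" -- hypothetical names).
lvcreate --snapshot --size 1G --name data-snap /dev/vg0/data

# Mount the snapshot read-only and copy the file back to the live
# filesystem -- a restore in seconds, with no tape index to search.
mkdir -p /mnt/snap
mount -o ro /dev/vg0/data-snap /mnt/snap
cp /mnt/snap/reports/q3.xls /data/reports/q3.xls

# Remove the snapshot when finished.
umount /mnt/snap
lvremove -f /dev/vg0/data-snap
```

Because the snapshot is just another mounted filesystem, a backup job can read from it while production I/O continues on the original volume, which is the overhead reduction Sanders is pointing at.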
Why is encrypted storage necessary? What other security measures might work just as well?
Sanders: Encrypted storage ensures that the customer's data is protected. We can encrypt the data in a number of ways. We could use Microsoft's disk encryption tool to encrypt the entire volume, or we could have the disk manager encrypt the disk by placing the correct module on the server (aes.o). In addition, this disk solution runs on SuSE Linux, Novell, Microsoft Windows, Solaris and all variants of Unix, so the disks could be encrypted by the native software that runs on the various servers.
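On the 2.4-era kernels Sanders mentions, an aes.o cipher module was typically paired with an encrypted loop device. A hedged sketch of that approach -- the device names are hypothetical, and the exact losetup options depend on whether cryptoloop or loop-AES is installed:

```shell
# Load the AES cipher module referenced above (2.4-kernel loop encryption).
modprobe aes

# Bind the raw SAN LUN to a loop device with AES encryption;
# losetup prompts for a passphrase. /dev/sdb1 is a hypothetical LUN.
losetup -e aes /dev/loop0 /dev/sdb1

# Create a filesystem on the encrypted loop device and mount it.
mke2fs /dev/loop0
mount /dev/loop0 /mnt/secure
```

Everything written through /mnt/secure lands on the SAN disk encrypted, so the protection travels with the disks rather than with any one server.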
Why use solid-state storage? How does it compare to other options?
Sanders: Solid-state storage gives you reads and writes that run in memory (8 GB, or the OEM limit), an onboard processor, a battery supply for power failures and an actual disk to recover from if data fails. The concept is based on RAM disks: the disk comes online, its contents are stored in memory, and the speed of the application accessing it increases by 1,000%. In some speed tests, the increase has been in excess of 2,500%. If there is a power failure, the onboard processor and battery slowly transfer the data from memory to the onboard disk. Once power is back online, the disk moves the data back into memory.
You also used a product called AirWave wireless network management software [from AirWave Wireless Inc. in San Mateo, Calif.] in this SAN system. What does it contribute?
Sanders: The AirWave software helped me resolve problems from a wireless standpoint by allowing me to identify rogue, or unauthorized, wireless access points.
What glitches did you run into during this project?
Sanders: The glitches that we ran into from a SAN standpoint came from firmware levels. There is a certain firmware level from the HBA [host bus adapter] that must be consistent with the OEM's recommended levels.
How have the deployments gone?
Sanders: As long as the client purchases the recommended hardware, we have had no problems. There were small configuration errors caused by the user, and sometimes we needed to recompile the kernel to pick up additional devices, but these issues were quickly resolved.
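Recompiling a 2.4 kernel to pick up a new device driver followed a well-known sequence; a sketch, assuming the kernel source tree sits in /usr/src/linux and LILO is the bootloader (both assumptions):

```shell
cd /usr/src/linux

# Enable the new device driver (e.g. an HBA) in the kernel configuration.
make menuconfig

# 2.4-era build sequence: dependency pass, then kernel image and modules.
make dep
make bzImage
make modules
make modules_install

# Install the new kernel image and re-run the bootloader.
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4-custom
lilo
```

The `make dep` step is specific to 2.4 and earlier kernels; it disappeared with the 2.6 build system.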
In terms of cost, performance and ROI, what were the benefits of doing storage with a SAN on Linux?
Sanders: We could use it to manage an EMC solution and dynamically add more disks to the system. By placing the data managers in front of the existing EMC solution, the administrators were able to manage the disk solution without having to pay high-priced EMC consultants.
Moreover, we used disks of different sizes from different vendors to create large SAN solutions for the customer. The solution utilized existing disks without tying us to a particular disk type or manufacturer. That proved to be a huge cost savings, because we were not forced to buy one vendor's disks for this solution.
Furthermore, when we placed heavy read/write, disk-I/O-intensive applications on the solid-state disks, we improved Microsoft Exchange Server performance by 800% to 1,500%. On top of all this, we are able to manage disk space, add disks to production environments and dynamically allocate disk space on the fly to numerous applications, including Oracle, Microsoft SQL Server and Exchange, and Citrix.
What's next for this SAN?
Sanders: I think the future will be adding more security layers to the Linux 2.4 kernel: preconfigured iptables, enhanced hardware encryption and Web-based monitoring.
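A preconfigured iptables policy of the kind Sanders has in mind might lock a storage server down to just the traffic the SAN needs. A hedged sketch, with a default-deny inbound stance -- the admin subnet and the choice of ports (IP-storage, Web monitoring, SSH) are illustrative assumptions, not part of the deployment described above:

```shell
# Default-deny inbound policy on the storage server.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow storage and management traffic only from the admin
# subnet (10.0.1.0/24 is hypothetical).
iptables -A INPUT -s 10.0.1.0/24 -p tcp --dport 3260 -j ACCEPT  # iSCSI
iptables -A INPUT -s 10.0.1.0/24 -p tcp --dport 443  -j ACCEPT  # Web-based monitoring
iptables -A INPUT -s 10.0.1.0/24 -p tcp --dport 22   -j ACCEPT  # SSH administration
```

Shipping rules like these preloaded with the kernel is what makes the tables "preconfigured": the box comes up already restricted, rather than relying on each site to harden it after installation.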