July 28, 2008
Backup might not be everyone’s favorite topic, but it is the one IT operation that, if screwed up, can mean a stay at that place where an orange jumpsuit is part of the welcome package. It has to be done -- and done right. And with the amount of data that has to be backed up going in only one direction, it’s a good idea to have a backup system that can grow along with it. Wouldn’t it be nice if it were also budget-sensitive, extremely reliable, fast, accessible, and easy to implement?
That’s the promise behind ExaGrid’s disk-based backup system, which the company recently fortified with features aimed at multi-location datacenters.
Essentially, ExaGrid builds modular storage servers around arrays of SATA disks that plug into a grid architecture. As the company’s name suggests, scalability always has been part of the design. An ExaGrid server (in flavors from 1TB to 5TB) can be plugged into the grid as needed, and “through our software, it virtualizes into the existing system,” says Bill Andrews, ExaGrid president and CEO. “When you add one of our boxes, you’re not just adding more disk, you’re adding storage servers. You’re adding more processing power and memory, which you need in order to scale up and handle more data. And all those resources virtualize into one large system. To the backup server, it’s just more capacity, and without any disruption.”
ExaGrid does a couple of other things that make its data-handling approach unique and that yield direct benefits to users. The company’s process uses byte-level de-duplication to significantly reduce the amount of data that has to be stored. The system can detect changes to a file at the byte level and, after backing up the original, saves only changed or new data. Instead of backing up Homer’s spreadsheet every day, day after day after day, the system saves only the alterations. Studies show that most user files seldom change after a certain age, so avoiding unnecessary duplication can reduce the amount of storage required by as much as 20:1, according to both ExaGrid and independent analysts. ExaGrid also compresses the data, further reducing the amount of platter needed for backup. For organizations sending data to remote sites, smaller backup files also mean faster transmission across a WAN.
“Average compression is about 2:1, so a 1TB backup file would be stored as 500GB,” Andrews says. “Previous backup files are then kept as the byte-level changes only, which averages to about 2 percent of the data.”
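Taken together, those averages imply some simple storage arithmetic. The sketch below is a rough illustrative model, not ExaGrid’s actual accounting; the 30-version retention count is an assumed example, while the 2:1 compression and 2 percent delta figures come from Andrews’ quoted averages:

```python
# Rough storage arithmetic for post-process byte-level de-duplication.
# Figures taken from the article's stated averages:
#   - 2:1 compression on the most recent full backup
#   - prior versions retained as byte-level deltas, ~2% of the data each

FULL_BACKUP_GB = 1000          # nightly 1 TB full backup
COMPRESSION_RATIO = 2.0        # "average compression is about 2:1"
DELTA_FRACTION = 0.02          # older versions kept as ~2% byte-level changes
RETAINED_VERSIONS = 30         # assumed example: a month of nightly backups

def naive_storage_gb(versions: int = RETAINED_VERSIONS) -> float:
    """Storage if every nightly full backup were kept verbatim."""
    return FULL_BACKUP_GB * versions

def deduped_storage_gb(versions: int = RETAINED_VERSIONS) -> float:
    """Latest version kept whole (compressed); the rest as byte-level deltas."""
    latest = FULL_BACKUP_GB / COMPRESSION_RATIO
    deltas = (versions - 1) * FULL_BACKUP_GB * DELTA_FRACTION
    return latest + deltas

naive = naive_storage_gb()      # 30 full copies: 30,000 GB
deduped = deduped_storage_gb()  # 500 GB + 29 deltas of 20 GB = 1,080 GB
print(f"naive: {naive:,.0f} GB, deduped: {deduped:,.0f} GB, "
      f"reduction: {naive / deduped:.1f}:1")
```

With these inputs the model lands in the high-20s-to-1 range, the same ballpark as the 20:1 reduction cited above.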
“We’re trying to make backup not only better but faster,” Andrews says. “Customers tell us that we’ve reduced their backup time by at least 30 percent, some much higher. We let the backup run to disk and de-dupe afterward. Doing it on the fly slows down the backup process, and you can’t have your backups running in the morning when people come to work. We also keep the latest backup in its complete form, in case you need it quickly. You don’t have to put a zillion blocks together. Nearly all restores come from the latest version.” ExaGrid says its typical restore throughput is “up to 2.6 terabytes per hour.”
Clunk Goes the Tape
The company has been a proponent of disk-based backup since it started in 2002, when tape was still the de facto standard but losing its glow. The advantages of disk over tape -- speed and reliability among them -- were becoming more and more apparent, but the rap against disk was price. SATA drives were initially about 40 times more expensive than tape. Today, that differential has come down to about 10 times, and ExaGrid’s technology equalizes the economics, Andrews says. “With compression, you use less space, bringing the price of disk down to only about five times more than tape. Add byte-level de-duplication and the price is about equivalent to tape.” The thought of never having to track down a tape, only to find out it is defective, ought to enter into the equation, too.
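Andrews’ pricing argument follows from quick arithmetic. In the hypothetical sketch below, tape cost per stored gigabyte is normalized to 1.0; the final ~5x factor attributed to de-duplication is inferred from his “about equivalent to tape” remark, not a figure the company states directly:

```python
# Back-of-the-envelope cost comparison using the article's ratios.
# Tape cost per stored GB is normalized to 1.0; raw SATA disk is ~10x.

TAPE_COST = 1.0
DISK_COST_RAW = 10.0 * TAPE_COST  # "that differential has come down to about 10"
COMPRESSION_RATIO = 2.0           # 2:1 compression halves the disk needed
DEDUP_FACTOR = 5.0                # inferred: the further reduction needed to
                                  # reach "about equivalent to tape"

disk_with_compression = DISK_COST_RAW / COMPRESSION_RATIO  # ~5x tape
disk_with_dedup = disk_with_compression / DEDUP_FACTOR     # ~1x tape

print(f"raw disk: {DISK_COST_RAW:.0f}x tape, "
      f"compressed: {disk_with_compression:.0f}x, "
      f"de-duplicated: {disk_with_dedup:.0f}x")
```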
However, ExaGrid is not necessarily out to obliterate tape. “You can set up our system to copy your nightly or weekly backups through the backup server to tape,” Andrews says. “About half our customers make tape backup, and about half are disk-only. Although we see the trend moving away. Some of our customers are shutting off tape and adding another one of our systems.”
The ExaGrid Disk-based Backup System is a regular NAS unit made up of RAID-6 drives with a hot spare, Xeon dual-core processors, Gigabit Ethernet connections and management intelligence. Load balancing is built in. You can buy server “building blocks” in sizes of 1, 2, 3, 4, or 5TB and add them to the grid in any combination as demand grows. When models with faster processors or greater densities become available, they too can be added.
One of the key features of ExaGrid’s system is that it works seamlessly with the backup systems people are used to, including Symantec Backup Exec and NetBackup, CA ARCserve, EMC NetWorker and CommVault Galaxy. The company says no changes are required to your current setup. “You would continue to do your backup jobs as you do them today,” Andrews says. “ExaGrid sits behind your current backup server as a storage repository.”
ExaGrid’s target user has at least a terabyte of data and up to 60TB or so to contend with, Andrews says, and last week the company announced enhancements designed for organizations with datacenters in multiple locations. “We’re now giving customers multi-site backup capabilities that will let them cross-protect up to nine locations,” Andrews says. “Let’s say you have major offices in San Francisco, Dallas, New York, Chicago and Boston. You can now cross-protect across all of them. You can back up data locally and then send a copy to any of those other sites for backup. You can point the data at any other locations so that you can recover from a disaster. And because we’re only moving byte-level changes to off-site locations, you’re shipping only a fraction of the data across the WAN.” The company also added functions that allow for better monitoring of backup jobs.
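The WAN savings from shipping only byte-level changes are easy to estimate. In the sketch below, the 100 Mbps inter-site link speed is a hypothetical assumption (the article names no bandwidth figure); the 2 percent delta size is the average Andrews cites:

```python
# Illustrative WAN-transfer arithmetic for cross-site replication.
# Assumption (not from the article): a 100 Mbps inter-site link.

WAN_MBPS = 100                 # hypothetical link speed, megabits/sec
FULL_BACKUP_GB = 1000          # 1 TB nightly backup
DELTA_FRACTION = 0.02          # ~2% byte-level changes, per the article

def transfer_hours(size_gb: float, mbps: float = WAN_MBPS) -> float:
    """Hours to move size_gb gigabytes over an mbps-megabit/sec link."""
    bits = size_gb * 8e9
    return bits / (mbps * 1e6) / 3600

full_hours = transfer_hours(FULL_BACKUP_GB)                    # ~22.2 h
delta_hours = transfer_hours(FULL_BACKUP_GB * DELTA_FRACTION)  # ~0.44 h
print(f"full copy: {full_hours:.1f} h, byte-level delta: {delta_hours:.2f} h")
```

Under these assumptions, replicating a nightly delta takes well under half an hour, where a full 1TB copy would monopolize the link for most of a day.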
Report from the Field
A fairly typical ExaGrid user, MemorialCare Medical Centers runs six hospitals in Los Angeles and Orange County, Calif. With patient and business data to protect, backups were taking up to 18 hours a day, occupying much of the IT staff’s time, and consuming up to 300 tapes a week, says Jorge Cepeda, network engineer. With the ExaGrid system, backup time was reduced to 8 hours and the process was “painless,” Cepeda says. MemorialCare plans to install ExaGrid systems at all its hospitals in order to replicate data for disaster recovery.
In a report issued earlier this year, Enterprise Strategy Group said its ESG Lab tests “confirmed that ExaGrid backup-to-disk solutions combine the benefits of high-density SATA drives, post-process data de-duplication and scalable grid architecture to provide a cost-effective, energy-efficient alternative to tape.” According to the report’s lead author, ESG analyst Claude Bouffard, “Organizations struggling with the cost, complexity, and risk associated with tape backups would be wise to consider the bottom-line savings that can be achieved with ExaGrid: faster backups, quicker and more reliable restores, lower risk, lower expenses … and last, but not least, a greener solution with optimized power and cooling.”
The Taneja Group issued a statement as part of ExaGrid’s announcement last week in which senior analyst Jeff Boles said that in Taneja’s lab tests, “ExaGrid easily performed, scaled, and de-duplicated right out-of-the-box. ExaGrid’s scalability makes their performance claims even more compelling.” In a 2007 study, Taneja Group analysts recommended that organizations seeking a disk-based solution (“no longer optional,” they said) that delivers “ROI, reliability, flexibility, and demonstrable ease of use” should start with ExaGrid.
“I doubt we can ever make backup fun,” Andrews says, “but we will keep trying to make it better.”