June 25, 2007
Since acquiring the base technology from Virginia Tech in the summer of 2004, Evergrid has been working toward the release of its initial software solutions, and when this day finally came on June 11, the IT world was reintroduced to its old friend the resource manager -- but this time with some seriously unique capabilities.
Designed to function as a “datacenter resource manager,” Evergrid’s Cluster Availability Management Suite (CAMS) currently consists of Availability Services (AvS) and Resource Manager (RM-Batch), both of which are designed -- as the name indicates -- for environments running critical batch applications. Sitting as an abstraction layer between the operating system and the application library, the Evergrid software plays the role of a vertically integrated management system, able to handle a wide variety of functions from powering machines on and off as necessary to workload scheduling and load balancing.
The reason for all of this functionality, according to Evergrid CEO David Anderson, is to realize the company’s vision of making today’s increasingly complex datacenters as easy to manage as possible by providing customers with policy-based, autonomic management, guaranteed service-level agreements (SLAs) and stateful failover capabilities. The first two parts of the vision fall under the realm of the resource manager -- currently only RM-Batch -- which features closed-loop control (including preemptive and priority scheduling) to ensure that jobs always have the mandated resources available to them. In addition, jobs can be migrated within the resource pool without losing the progress that already has been made. In the near future, Evergrid will release its Datacenter Resource Manager, which steps up the policy-based, autonomic management and SLA features while targeting virtualized enterprise datacenters running transactional and online applications.
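Evergrid has not published RM-Batch’s internals, but the closed-loop, priority-driven control Anderson describes can be illustrated with a minimal sketch (in Python, with entirely hypothetical names): each pass over the queue places the highest-priority work first, and preempts -- that is, checkpoints and re-queues -- lower-priority jobs only when that is the only way to honor a resource guarantee.

```python
# Minimal illustration of closed-loop, priority-based scheduling with preemption.
# All names (Job, Cluster, schedule_cycle) are hypothetical; this is not Evergrid's API.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    priority: int          # higher value = more important
    nodes_needed: int
    running: bool = False

@dataclass
class Cluster:
    total_nodes: int
    running: list = field(default_factory=list)

    def free_nodes(self):
        return self.total_nodes - sum(j.nodes_needed for j in self.running)

def schedule_cycle(cluster, pending):
    """One pass of the control loop: place high-priority work first,
    preempting (checkpointing) lower-priority jobs if that is the only
    way to honor the resource guarantee."""
    for job in sorted(pending, key=lambda j: j.priority, reverse=True):
        # Consider preempting the lowest-priority running jobs until the new job fits.
        victims = sorted(cluster.running, key=lambda j: j.priority)
        while cluster.free_nodes() < job.nodes_needed and victims:
            victim = victims.pop(0)
            if victim.priority >= job.priority:
                break                         # never preempt equal- or higher-priority work
            print(f"checkpointing and preempting {victim.name}")
            cluster.running.remove(victim)    # state saved; job goes back in the queue
            pending.append(victim)
        if cluster.free_nodes() >= job.nodes_needed:
            job.running = True
            cluster.running.append(job)
            pending.remove(job)
            print(f"dispatched {job.name} on {job.nodes_needed} nodes")

cluster = Cluster(total_nodes=16)
queue = [Job("chip_sim", priority=1, nodes_needed=12)]
schedule_cycle(cluster, queue)                        # chip_sim gets its 12 nodes
queue.append(Job("urgent_risk_calc", priority=5, nodes_needed=8))
schedule_cycle(cluster, queue)                        # chip_sim is checkpointed to make room
```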
Apart from the uniquely policy-based and fine-grained control, though, what really sets CAMS apart from the crowd is the Evergrid AvS, which also plays a role in the aforementioned migration capability. Essentially, AvS captures the collective state, including I/O and message-passing data, from parallel applications using its checkpoint/resume feature. These captures can happen periodically at user-set intervals so that failover is never an issue. Should a job -- especially a long-running, compute-intensive batch job -- be interrupted due to hardware or software failure or a high-priority job that requires the same resources, the original job can either wait until the high-priority job finishes and restart from the last checkpoint or migrate to other available resources.
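AvS does this transparently, beneath the application, but the policy it embodies -- checkpoint at a user-set interval, then either resume on the original resources or migrate to others -- can be sketched at a deliberately simplified, application level. Everything below (the file path, function names, the pickle-based state capture) is an assumption for illustration, not Evergrid’s actual mechanism.

```python
# Hypothetical sketch of interval-based checkpointing with restart-or-migrate
# recovery; names, paths, and the pickle-based capture are illustrative only.
import time
import pickle
from pathlib import Path

CHECKPOINT_INTERVAL = 15 * 60                       # user-set interval, e.g. every 15 minutes
CHECKPOINT_FILE = Path("/scratch/job_1234.ckpt")    # illustrative checkpoint location

def run_with_checkpoints(step_fn, state):
    """Drive a long-running computation, persisting its state periodically
    so that a failure or preemption costs at most one interval of work."""
    last_ckpt = time.time()
    while not state.get("done"):
        state = step_fn(state)                      # one unit of real work
        if time.time() - last_ckpt >= CHECKPOINT_INTERVAL:
            CHECKPOINT_FILE.write_bytes(pickle.dumps(state))
            last_ckpt = time.time()
    return state

def resume_or_migrate(preferred_host, alternate_hosts):
    """On interruption, restart from the last checkpoint -- either on the
    original resources once they free up, or on another available node."""
    state = pickle.loads(CHECKPOINT_FILE.read_bytes())
    target = preferred_host if preferred_host else next(iter(alternate_hosts), None)
    print(f"resuming from step {state.get('step', 0)} on {target}")
    return state, target
```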
Anderson believes this capability is especially important in the HPC space, where long-running applications -- and, as a result, businesses -- often suffer due to jobs not running to completion and needing to be restarted from the top. “If … [you are] running a chip simulation for a new product release, and that chip simulation runs for two months but fails six weeks into that, you’ve just slipped the delivery of your product by six months,” hypothesized Anderson. “I’m not going to calculate the return in that case, but, trust me, the return’s a heck of a lot higher than the cost of the machines.”
Interestingly, perhaps, although Evergrid believes its resource manager has a deeper capability set than any potential competitors, the company already is integrating its AvS with these same competitors’ solutions. Platform LSF is the first for which AvS has been certified, but Anderson says the software can work with others, too, such as Altair’s PBS Professional. These companies, Anderson said, have a lot of batch customers clamoring for this feature, and “I don’t expect their customers to change to a different resource management system, so it makes complete sense for us to integrate with their product and allow the customer to continue using it the way they use it today, but with some additional capabilities.”
Customers of the presently available batch products -- which, not surprisingly, are being marketed to companies (e.g., those in the manufacturing, financial services, pharmaceutical, and oil and gas markets) running computationally intensive applications in dedicated HPC environments -- can expect to see resource utilization improve by between 10 and 40 percent when using the preemptive scheduling and checkpoint/resume capabilities, as well as a guarantee that jobs will complete within the expected time. In fact, one current customer, which Anderson can describe only as a “top-five Wall Street firm,” was “wowed” by the 85 percent TCO reduction it has seen since adopting the Evergrid suite of products. Citing some impressive statistics, Anderson said the firm saw a 4X improvement in server utilization while reducing server administration costs by 50 percent and cutting power costs by 65 percent thanks to its Evergrid platform automatically powering off unused servers. The shutting down of these servers is facilitated by the software’s migration capabilities, which, aside from allowing jobs to continue running, can push jobs to machines with available resources, thus maximizing utilization on some servers while allowing others to rest.
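The consolidation behavior behind those power savings amounts to a packing problem: migrate work onto as few servers as will hold it, then switch off whatever is left idle. The following sketch uses a simple first-fit-decreasing heuristic with made-up server names and an assumed per-server capacity; the real product does this with live job migration rather than a static repack.

```python
# Simplified illustration of consolidation-driven power management: pack jobs
# onto as few servers as possible, then power down whatever is left idle.
# Server names and the per-server capacity are assumptions for illustration.

def consolidate(servers):
    """servers: dict mapping server name -> list of job sizes (in cores).
    Returns (placement, servers_to_power_off) after a greedy repack."""
    capacity = 8                                   # assumed cores per server
    jobs = sorted((size for load in servers.values() for size in load), reverse=True)
    placement = {name: [] for name in servers}
    for size in jobs:                              # first-fit decreasing bin packing
        for name, load in placement.items():
            if sum(load) + size <= capacity:
                load.append(size)
                break
    power_off = [name for name, load in placement.items() if not load]
    return placement, power_off

placement, idle = consolidate({"node1": [2, 2], "node2": [3], "node3": [1]})
print(placement)   # {'node1': [3, 2, 2, 1], 'node2': [], 'node3': []}
print(idle)        # ['node2', 'node3'] can be powered off
```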
CAMS also can be utilized across the globe thanks to its global meta-scheduler. According to Anderson, the company already has one customer planning to implement the software across six datacenters.
However, while Evergrid’s current and beta customers (including the aforementioned Wall Street firm, the OU Supercomputing Center for Education & Research (OSCER) at the University of Oklahoma and British Telecom) tend to have pretty large HPC environments, Anderson says the company’s products will be ideal for a much broader class of customer, especially with the release of the Datacenter Resource Manager, which he foresees as being attractive to any organization running a datacenter of 16-plus servers. While the solution can be effective running over a collection of as few as eight nodes, Anderson says the sweet spot is in the hundreds of nodes, in part because this is where traditional, non-policy-driven VM environments begin to fail. “In the long term,” he said, “we’re going after the datacenter.”
In the opinion of Forrester Research vice president Jean-Pierre Garbani, though, Evergrid’s software set is “very interesting,” but he has seen this type of functionality attempted via various approaches in the past. In fact, he described Evergrid’s platform as “the first real approach to what was thought of [a few] years ago by IBM and HP as autonomic computing.” While the idea of dynamically reconfiguring resources to the needs of applications is nothing new, he says that Evergrid’s use of virtualization to meet this end is “a very clever way to solve that issue.”
As “clever” as the approach might be, what is more important, as far as Garbani is concerned, is Evergrid’s focus on the workload. “What was unrealistic in autonomic computing or utility computing,” he said, “ … is that there was absolutely no decision support,” so while users could allocate resources dynamically, there was nothing telling them when or why they should re-allocate them. Continued Garbani: “They gave you the means to do something without giving you the means to reach the decision.” With Evergrid, he says, users are given the information necessary to take action on this front. He also compares Evergrid’s solution, in terms of the operability it aims to deliver, to a distributed version of a mainframe.
Assuming today’s (and tomorrow’s) organizations would like to see mainframe operability from distributed systems, this should be good news for companies like Evergrid, which Garbani believes can succeed if they are able to educate the market and ride the momentum created by more and more vendors getting into the space. “Innovation by itself requires many friends in order to get adopted,” he said, and because there are some similar solutions in the market and others potentially are on the way (he believes companies like XenSource and VMware might be looking at this kind of technology as a next step), Evergrid could soon find itself part of a family of products that collectively would educate the market. “For the time being,” he said, “what I expect from Evergrid is they’re going to get a certain number of customers -- the leading edge, the people who see the advantages now -- but the global adoption is maybe a couple of years in the future.” This type of mindshare, he noted, could lead to acquisition by big-time vendors like IBM, HP and Sun, who would then proceed to push the technology into datacenters at a significantly faster pace.
In the end, Garbani added, Evergrid’s approach to resource management is a “terrific idea” very much needed by the current market, where people are just jumping on the virtualization bandwagon without any idea how to use it.
And while Garbani’s forecast is hardly a doomsday scenario for Evergrid, CEO Anderson sees the future in the optimistic light that is reserved for company representatives contemplating a product’s success. “We think that, in the long run, customers are all going to want to consolidate their server environments,” he said, “and if they can then dynamically allocate servers to workloads in a way that manages utilization and availability seamlessly, everybody’s going to want to do it.”