July 27, 2010
When news of Amazon’s new EC2 instance type, Cluster Compute Instances (CCI), emerged, the official statement indicated the cooperation of a few companies, including Cycle Computing and RightScale, although the details of their involvement were not immediately clear. This morning one of the cloud services vendors noted in the announcement, Adaptive Computing, issued a release detailing some of its contributions to Amazon’s HPC instance type, along with some details about its process of testing the performance of the public cloud.
Amazon has said nothing about how it selected its partners as much as a year in advance of the official CCI announcement, but Adaptive does provide a way to maximize public and private cloud utilization, which would certainly explain Amazon’s interest in the relatively small company. While small, Adaptive has been involved in some of the largest-scale HPC and HPC cloud deployments. The company specifically targets the HPC market with its array of policy-driven automation offerings based on Moab technology, which ensures that cloud resources can be utilized as efficiently as possible and according to the policies that govern resource usage, and which might help make the public cloud more "intelligent".
Adaptive Computing’s president, Michael Jackson, remarked that until Cluster Compute Instances became available, many of the company’s customers were disappointed with the performance of HPC capabilities in the public cloud, but that this offering gives those customers more viable alternatives. Jackson commented that following their tests, they saw the possibility of a whole new class of HPC users entering the space, which is quite good news for a company directly aligned with this market segment, one that is growing very quickly according to research group IDC, among others.
Testing the Intelligent HPC Cloud
One of the more compelling aspects of the Adaptive announcement today was the set of statements about Adaptive’s testing of Amazon’s new HPC cloud environment. Since this is a new and, at least widely and publicly, relatively untested instance type, hearing more about its performance was at least informative, even coming from a firm whose best interest lies in extolling the virtues of this public cloud.
Following what it called “extensive testing,” Adaptive Computing declared the new Amazon Cluster Compute Instances “robust, reliable and capable of delivering the compute power, bandwidth and low latency required by HPC class applications.” Adaptive’s VP of marketing, Peter ffoulkes, described some of the company’s perceptions following the testing it conducted to gauge the performance of Amazon’s new HPC offering:
Amazon’s press release referenced one of our customers, NERSC, stating that “we found our HPC applications ran 8.5 times faster on Cluster Compute Instances for Amazon EC2 than the previous EC2 instance types.” That is quite a reasonable endorsement, although they didn’t make any comparisons (that I have seen) to their own internal HPC systems.
The Intel Nehalem-based systems underlying Amazon’s CCI capability have been a platform of choice for many HPC facilities implementing x86 HPC systems. From the network perspective, not all HPC applications require low latency, and the “10gigE” network is no slouch. It is less expensive to implement than InfiniBand, and several informed sources (one being Jeff Birnbaum of Bank of America, talking about latency at the High Performance Linux for Financial Markets event last April) opined that although 10GbE is not as good as InfiniBand, it is close.
In terms of their own contribution to the functionality of the new instance type, Adaptive stated that “Moab technology ensures that cloud resources can be utilized with up to ninety-nine percent efficiency and, when combined with Amazon’s EC2 Cluster Compute Instance environment, creates a dynamic and intelligent environment that offers excellent value and return on investment in HPC cloud resources.”
A Sea Change for HPC or a Small Wave?
According to Peter ffoulkes, the introduction of Amazon’s Cluster Compute Instances represents another step toward greater maturation of cloud offerings. He also stated that “True HPC class cloud services will not only help ‘democratize’ the HPC market by lowering the barrier to entry and thus expanding the market, but will also offer agility in providing the ability to rapidly implement HPC services for existing ‘traditional’ HPC facilities.”
Adaptive’s ffoulkes provided a realistic, grounded response to the idea that Cluster Compute Instances represent a sea change in the HPC and cloud space, stating that although there will be new customers entering the HPC market via Amazon’s new offering:
We don’t expect HPC cloud services to replace traditional HPC facilities any more than rental cars or car sharing schemes have replaced personal car ownership. They simply add more options to the market, which is a good thing. We are very strong believers in the private cloud computing paradigm (cloud services delivered on systems owned by the host organization), and believe that private HPC clouds will become more prevalent over time as awareness grows that cloud computing approaches can be used to deliver HPC services on the most advanced architectures with no loss of capability.
Given that Adaptive participated in some of the early testing behind Amazon’s Cluster Compute Instances, I asked ffoulkes whether he felt the announcement marked a sea change for HPC now that it might be more accessible via the cloud.
Amazon’s Cluster Compute Instances are not the first HPC cloud offering available; Rocky Mountain SuperComputing Centers was specifically created to offer “Supercomputing as a Service” to businesses and other organizations in Montana, but Amazon clearly has much more significant visibility. To that extent I would say it absolutely marks a sea change in the cloud market, with public cloud vendors recognizing that a “one size fits all” approach might have been acceptable in the early stages of the cloud computing marketplace, but as the market matures, a wider range of capabilities will be required and offered by multiple vendors, including but not limited to “HPC as a Service.”
Adaptive’s ffoulkes was likely harboring a little Amazon-related secret when he gave this interview at ISC in Hamburg this past June. Even so, his statements here carry over to Amazon’s announcement and the future viability of the public cloud for HPC.