July 13, 2010
Many thought that, for a certain class of HPC applications, the public cloud would never be a suitable option, due above all to the predictable concerns about performance. And they probably did not count on ever living to see a virtual cluster take a virtual spot in the TOP500.
Many also thought they would never see the day, or at least not this soon, that Amazon would tailor its cloud offerings or business model to the needs of the bulk of HPC applications — those that depend on tight coupling for parallel processes or that require top-tier networking. This morning's announcement that Amazon Web Services is offering Cluster Compute Instances for EC2, built specifically for HPC users, might just be that long-awaited game-changer for the viability of scientific and large-scale computing in the public cloud, although it is still far too early to tell how important the new offering will prove in the long run.
Amazon's Cluster Compute Instances look very much like standard EC2 instances at the interface level but have been engineered to pack more CPU punch. More importantly, they can be combined into virtual clusters, providing the network performance essential for many applications previously thought un-cloudworthy to run as intended. One of the main points Amazon is pushing is that this is far easier to work with than an in-house cluster and supposedly not much more difficult than working with standard EC2 instances.
One of the most striking sub-announcements is how Amazon set out to prove that what it had created was tried-and-true HPC: it benchmarked the system with Linpack. Amazon stated that it ran the benchmark on 880 of its Cluster Compute Instances (7,040 cores) and "measured the overall performance at 41.82 teraflops using Intel's MPI and MKL libraries along with their compiler suite." That result would have placed the system at position 146 on the TOP500. Even those who are wary of the cloud for HPC have to take a moment to chew on that figure.
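As a rough sanity check, the reported 41.82 teraflops can be compared against the cluster's theoretical peak. This sketch assumes the X5570's published 2.93 GHz base clock and Nehalem's four double-precision flops per cycle per core (128-bit SSE, one add and one multiply port); the resulting efficiency figure is an estimate, not a number Amazon published.

```python
# Compare Amazon's reported Linpack result with theoretical peak.
# Assumes X5570 base clock (2.93 GHz, no Turbo Boost) and Nehalem's
# 4 double-precision flops/cycle/core (2 SSE adds + 2 SSE multiplies).
cores = 880 * 8              # 880 instances, two quad-core X5570s each
clock_hz = 2.93e9            # X5570 base frequency
flops_per_cycle = 4
peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
rmax_tflops = 41.82          # Amazon's reported Linpack number
efficiency = rmax_tflops / peak_tflops
print(f"Theoretical peak: {peak_tflops:.1f} TF, "
      f"Linpack efficiency: {efficiency:.0%}")
```

That works out to roughly half of theoretical peak, a plausible Linpack efficiency for a 10 GbE interconnect and one reason the no-InfiniBand question keeps coming up.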
So then, what would you do with a 64-bit platform with 23 GB of memory, 1,690 GB of instance storage, and a 10 gigabit connection if the price was right? According to Amazon, each Cluster Compute Instance consists of a pair of quad-core Intel Nehalem X5570 processors, rated at a total of 33.5 EC2 compute units. The price is currently set at $1.60 per instance-hour, with discount packages available as well: a $4,300 one-year reservation and a $6,600 three-year reservation, either of which drops the rate to 56 cents per hour. Pricing is rarely straightforward when it comes to the cloud, despite all outward appearances.
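The reserved pricing invites a quick break-even calculation. The sketch below uses only the figures quoted above and reads the discounted rate as $0.56 per hour; the comparison to a year's worth of hours is illustrative, not an Amazon-published analysis.

```python
# Break-even point for the one-year reserved option versus on-demand,
# using the prices quoted in the article: $1.60/hr on demand,
# $4,300 upfront plus $0.56/hr reserved.
on_demand_rate = 1.60        # $/hr, pay-as-you-go
reserved_rate = 0.56         # $/hr after the upfront payment
upfront = 4300.0             # one-year reservation fee

break_even_hours = upfront / (on_demand_rate - reserved_rate)
year_hours = 365 * 24
print(f"Reservation pays off after {break_even_hours:.0f} hours "
      f"({break_even_hours / year_hours:.0%} of the year)")
```

In other words, under these assumptions the one-year reservation only makes sense if an instance runs close to half the year — exactly the kind of utilization question HPC shops already ask about owned clusters.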
While some are asking why Amazon chose 10 Gigabit Ethernet rather than InfiniBand, which is very often standard on HPC clusters, the benchmark performance is impressive nonetheless. As of now, the default usage limit for the new instance type is eight (64 cores), but of course, if you're willing to pay for more, you can certainly have it -- that's the Amazon model, after all.
A consistent point of contention HPC users have historically had with a public cloud like Amazon's EC2 is that performance was often subpar compared to a traditional HPC cluster. The problem, of course, is that owning such a cluster is enormously expensive, not only in initial cost but in maintenance, power consumption, and management. With the Linpack results to add credence to the claims, it's difficult to find fault with what Amazon has managed: the attractive aspects of the cloud without the performance hitch that has been an Achilles heel for researchers considering it for certain applications.
It will be interesting to see whether the claim from Werner Vogels, CTO at Amazon, that users can "now achieve the same high compute and networking performance provided by custom-built infrastructure while benefitting from the elasticity, flexibility and cost advantages of Amazon EC2" bears out. At this point, it does seem the offering has been quietly vetted during an extensive private beta that included Lawrence Berkeley National Lab, among others.
"In our series of comprehensive benchmark tests, we found our HPC applications ran 8.5 times faster on Cluster Compute Instances for EC2 than on the previous EC2 instance types," said Keith Jackson, a computer scientist at Lawrence Berkeley National Laboratory who took part in the private beta.
Until today, it seemed that cloud computing was simply going to be off limits for a wide range of HPC applications, and that Amazon was content to lose those users and keep its focus on its real bread and butter -- the SMEs who were among the first to jump on the cloud bandwagon and who had the most to gain by avoiding a large upfront expenditure on what would become finite resources.
This is an important day for HPC and cloud as a concept, bringing it one step closer to becoming a broader reality. It also means that other vendors, including Microsoft, Google (because we all know it's on its way here) and others, will need to step up their game to appeal to this particular computing market segment.