November 22, 2008
I spent a couple of days this week at the Supercomputing 08 conference in Austin, Texas, and I was excited to write this blog about how cloud computing might be relevant for high-performance computing. Then I read this article on HPCwire, written by Thomas Sterling and Dylan Stark of LSU, which does the subject just a tad more justice than I can do.
I still want to make a few extra points, though. The first is that I saw a presentation by John Storm, an executive director within Morgan Stanley’s Institutional Securities division, who talked about how financial services firms are using HPC. Two disparate comments by Storm caught my attention: (1) that Monte Carlo simulations comprise the majority (up to 70 percent) of HPC computations; and (2) that the law of diminishing returns rears its ugly head most notably around power bills. It’s not unheard of for banks to use Amazon EC2 for Monte Carlo sims, so I wonder how many, after doing the energy math, actually are. How many are seriously considering it?
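Part of why Monte Carlo work maps so naturally onto rented cloud capacity is that it is embarrassingly parallel: trials are independent, so they can be split across however many machines happen to be available. As a purely illustrative sketch (not any bank's actual code — the option parameters and worker layout are my own assumptions), here is how a European call option might be priced by dividing trials across worker processes, exactly the way a firm might divide them across EC2 instances:

```python
import math
import random
from multiprocessing import Pool

def run_batch(args):
    """Run one independent batch of Monte Carlo trials for a European call
    under geometric Brownian motion; return the sum of the payoffs."""
    n_trials, s0, strike, rate, vol, t, seed = args
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((rate - 0.5 * vol ** 2) * t + vol * math.sqrt(t) * z)
        total += max(s_t - strike, 0.0)
    return total

def mc_option_price(n_trials=100_000, workers=4, s0=100.0, strike=105.0,
                    rate=0.05, vol=0.2, t=1.0):
    """Split the trials across local worker processes -- the same structure
    scales out to separate cloud instances with no shared state."""
    per_worker = n_trials // workers
    jobs = [(per_worker, s0, strike, rate, vol, t, seed)
            for seed in range(workers)]
    with Pool(workers) as pool:
        payoff_sum = sum(pool.map(run_batch, jobs))
    # Discount the average payoff back to today
    return math.exp(-rate * t) * payoff_sum / (per_worker * workers)

if __name__ == "__main__":
    print(round(mc_option_price(), 2))
```

Because each batch needs only its parameters and a seed, the only coordination cost is shipping back one number per worker — which is why the economics of renting a few hundred instance-hours can beat owning the cores.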
Also on the power front, a Wednesday panel discussed the power struggles facing high-end supercomputers and large enterprise datacenters. About a dozen computers on the Top500 list draw between 1.2 and 7 megawatts of power (the peak belonging to Cray’s new Jaguar supercomputer), and commercial datacenters tend to use between 36 and 100 megawatts (and now occupy up to 200,000 square feet of space). I’m not suggesting the types of apps running on Jaguar would work in a cloud environment, but small-time or infrequent HPC users certainly could see significant capital and operational savings by using an HPC-capable cloud like EC2 instead of buying their own systems. Commercial users might note the cloud’s increasing readiness for them, too.
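The capital-versus-operational tradeoff is easy to sanity-check on the back of an envelope. Every number below is a made-up illustration (the $0.80/hour figure echoes 2008-era EC2 High-CPU pricing, but the cluster cost, power draw, and admin figures are assumptions, not anyone's real budget):

```python
def annual_cost_owned(capex, years, power_kw, price_per_kwh, admin_cost):
    """Amortized yearly cost of an owned cluster: hardware spread over its
    service life, plus round-the-clock power and administration."""
    hours_per_year = 24 * 365
    return capex / years + power_kw * hours_per_year * price_per_kwh + admin_cost

def annual_cost_cloud(instances, hours_used, price_per_instance_hour):
    """Yearly cost of renting equivalent capacity only when it is needed."""
    return instances * hours_used * price_per_instance_hour

# Illustrative scenario: a small 16-node cluster an engineering shop
# actually uses about one work-week per month.
owned = annual_cost_owned(capex=80_000, years=3, power_kw=8.0,
                          price_per_kwh=0.10, admin_cost=10_000)
cloud = annual_cost_cloud(instances=16, hours_used=12 * 40,
                          price_per_instance_hour=0.80)
print(f"owned: ${owned:,.0f}/yr  cloud: ${cloud:,.0f}/yr")
```

The owned cluster burns power and admin time whether or not jobs are running; the rented one costs nothing while idle. The crossover point obviously shifts with utilization — run the cluster near-continuously and ownership wins — which is why the argument applies to *infrequent* HPC users, not to a Jaguar-class machine.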
God knows there are plenty of HPC solutions already leveraging EC2. Univa UD’s UniCluster software can run on EC2, and a company called CycleComputing builds on-demand Condor pools for its customers with its CycleCloud service. Wolfram Research has enabled its Mathematica product to run on EC2, as well. Oh, and Amazon itself made life easier a few months back with its High-CPU instances. According to Amazon:
Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.
EC2 Compute Unit (ECU) -- One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor.
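To put the ECU figure in concrete terms, here is a quick capacity estimate. The per-instance ECU ratings below (c1.medium at 5 ECU, c1.xlarge at 20 ECU) are my recollection of Amazon's 2008 published figures, so treat them as assumptions; the 1.0–1.2 GHz band comes straight from the quote above:

```python
# ECU ratings for the High-CPU instance types, circa 2008 (assumed figures)
ECUS = {"c1.medium": 5, "c1.xlarge": 20}

def fleet_ghz_range(instance_type, count):
    """Rough (low, high) equivalent-GHz band for a fleet of instances,
    using Amazon's 1.0-1.2 GHz-per-ECU equivalence."""
    total_ecus = ECUS[instance_type] * count
    return total_ecus * 1.0, total_ecus * 1.2

low, high = fleet_ghz_range("c1.xlarge", 10)
print(f"10 c1.xlarge ~ {low:.0f}-{high:.0f} GHz of 2007-era Opteron/Xeon")
```

In other words, ten High-CPU Extra Large instances would offer somewhere around 200–240 GHz of aggregate 2007-vintage processor capacity — a useful yardstick when sizing a Monte Carlo run.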
More on HPC and cloud computing, in general, can be found here.
In case you missed it …
Be sure to check out these announcements, which could have big impacts:
Posted by Derrick Harris - November 22, 2008 @ 10:40 AM, Pacific Standard Time
Derrick Harris is the Editor of On-Demand Enterprise