June 01, 2012
HPC resources are seeing increased adoption outside their traditional market. While the technology has long been commonplace in research institutions, large enterprises are beginning to tap its potential. A number of advancements are responsible for this change; most notably, cloud services have made supercomputing far more accessible. TechWorld discussed the trend earlier this week.
Supercomputers have benefited from improvements in processor design, open source applications and storage. All of these changes help reduce the overall cost of a system, but owning a supercomputer remains unfeasible for many enterprises.
Providing a suitable facility to house a cluster and acquiring the system hardware typically demands a large initial investment. Beyond the capital required to procure a supercomputer, businesses also have to factor in the operational costs of running the system. The going rate for power is roughly $1 million per megawatt-year.
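To put that figure in perspective, here is a minimal back-of-the-envelope sketch in Python. The rate is the one quoted above; the 2 MW average draw is purely an assumed example, not a figure from the article:

# Rough sketch: estimated annual electricity cost at the quoted rate
# of roughly $1 million per megawatt-year.
POWER_COST_PER_MW_YEAR = 1_000_000  # USD, figure cited above

def annual_power_cost(avg_draw_mw: float) -> float:
    """Estimated yearly power bill for a cluster averaging avg_draw_mw megawatts."""
    return avg_draw_mw * POWER_COST_PER_MW_YEAR

# Hypothetical mid-sized cluster averaging 2 MW
print(f"${annual_power_cost(2.0):,.0f} per year")  # -> $2,000,000 per year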
Instead of purchasing their own system, an enterprise can outsource HPC resources to a third-party Infrastructure-as-a-Service (IaaS) provider on an on-demand model. This option is far more affordable by comparison, since customers pay only for the time they use.
The cloud model works for a number of scenarios, including graphics rendering, computational fluid dynamics (CFD) simulations and other non-continuous operations. A prime example of an effective cloud workload came from Cycle Computing last month. The company built a 50,000-core cluster on Amazon Web Services to assist in cancer drug discovery. Spanning datacenters on four continents, the virtual supercomputer ran for only 3 hours and cost Cycle just $4,900. The company's CEO noted that building a comparable private supercomputer would cost anywhere between $20 million and $30 million.
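As a rough illustration of why that workload fit the cloud so well, the following sketch works through the arithmetic. The core count, runtime, run cost and the $20-30 million private-system estimate are the figures reported above; the derived per-core-hour rate and run counts are simple calculations, not claims from the article:

# Back-of-the-envelope comparison using the figures cited above.
CORES = 50_000
HOURS = 3
CLOUD_RUN_COST = 4_900               # USD, reported by Cycle Computing
PRIVATE_SYSTEM_COST = (20e6, 30e6)   # USD range quoted by Cycle's CEO

core_hours = CORES * HOURS
print(f"Effective rate: ${CLOUD_RUN_COST / core_hours:.4f} per core-hour")

# How many identical runs the capital cost of a private system would buy
low, high = (cost / CLOUD_RUN_COST for cost in PRIVATE_SYSTEM_COST)
print(f"A private system's price tag covers roughly {low:,.0f}-{high:,.0f} such runs")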
For all the potential benefits cloud services can provide, they exhibit a number of limitations as well. Virtualization, latency and security are typical areas of concern (although a number of adopters actually find their security improves after migrating to the cloud). Virtualization taxes system performance, but that can be countered by choosing a bare-metal IaaS provider. Latency, on the other hand, can be more difficult to overcome because it depends on the network connectivity between the client and the datacenter.
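One practical first step, not prescribed by the article but consistent with its point, is simply to measure round-trip latency from the client site to a candidate provider region before committing latency-sensitive work to it. A minimal sketch; the AWS endpoint is only an illustrative target:

# Measure average TCP connect time to a provider endpoint as a crude latency probe.
import socket
import time

def rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP connection setup time to host:port, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            total += time.perf_counter() - start
    return total / samples * 1000

# 'ec2.us-east-1.amazonaws.com' is just an example endpoint
print(f"{rtt_ms('ec2.us-east-1.amazonaws.com'):.1f} ms average round trip")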
Clive Longbottom, noted analyst and service director at Quocirca, acknowledged that cloud services are not suitable for every HPC workload, particularly for companies that provide uninterrupted services or rely heavily on supercomputing resources. Oil and gas, large pharmaceutical, and financial firms fall into this category.
All told, Longbottom views HPC cloud services as the natural evolution of supercomputing:
We've gone from the supercomputers of old (the Crays and so on) to clusters, virtualization and then from grid computing to cloud.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and scale up their problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of them.