April 30, 2012
Determining the most cost-effective HPC infrastructure can be a daunting task. While every case is different, a number of key factors feed into the decision to run HPC operations in house or in the cloud.
Last week, ADMIN magazine interviewed Bill Nitzberg, CTO of Altair's PBS Works Division. He offered insight into infrastructure decisions and discussed Altair's HPC management software as well.
A self-proclaimed cloud computing cynic, Nitzberg pointed to past waves of network-based computing: "…in the '70s with distributed computing, in the '80s with network computing, in the '90s with networked SPARCstations and cluster computing, and then in the 2000s with grid computing – and now we have cloud computing."
However, a skeptical attitude hasn't kept him from recognizing the opportunities the platform creates. Enterprise email servers, for example, often run at roughly 20 percent utilization, leaving most of their capacity untapped; Nitzberg believes datacenters can consolidate such systems to reduce waste.
HPC is different, however: most systems are heavily used, and utilization above 70 percent is not uncommon. Cloud computing therefore has to offer HPC users a different set of benefits.
One advantage is the layer of abstraction created by an on-demand infrastructure.
"…when you think of cloud on the business side, you don't really care what is behind the interface when you log in. It could be a whole bunch of people, or it could be a whole bunch of machines. You don't have to know. And that actually carries over from the data center market to the HPC market."
Beyond the interface, ROI often becomes the deciding factor. While larger enterprises can save money by building and operating their own clusters, the story isn't the same for every organization.
"If you are a small player, and only once a month out of the year – or, say, two months out of the year – you redesign some part, and you only need to use HPC computing for two months out of the year, then actually using HPC cloud computing is a huge advantage over trying to buy and manage your own," offers Nitzberg.
Altair has developed its tool suite with in-house and cloud infrastructures in mind. Applications like Compute Manager, PBS Professional and HyperWorks were all built on a cloud stack, allowing them to function in an on-demand environment as well.
Those features play to Nitzberg's advice for new HPC users. If a tool suite user were considering a cluster, the CTO would advise them to start with HyperWorks On-Demand and make a final decision based on their experience.