November 19, 2010
Just a quick war story from this year’s SC event in New Orleans…
I was walking by the booth of a European high-performance computing center when one of the members of the research team saw me taking photos and reading a poster. He introduced himself and glanced at my press badge.
“Nicole… ‘HPC in the Cloud’… ‘HPC’…” he laughed heartily, “…in the cloud?!”
I didn’t respond right away; I could tell more incredulity and distaste were on the way. This is not entirely unexpected at HPC events.
“No,” he said. “There is no HPC in the cloud because HPC can never happen in a cloud. Ridiculous.”
And without another word, he looked at me, shook his head and walked off.
It’s cool. I have thick skin. But if he had stuck around for another minute I would have been able to explain that yes, while we’re called HPC in the Cloud it doesn’t mean we’re blind advocates. Because we aren’t.
This sort of thing happens more often than you might think. Actually, looking back, I think one of the things that weirded me out a little about going to Cloud Expo (the pinnacle of the hype machine that props up some of what’s happening in the enterprise cloud space) is that no one blatantly questioned me about clouds for high-performance computing applications. Not once did anyone there say, “yeah, but for a lot of high-performance computing apps, there are some pretty big barriers—what say you?!”
I imagine that this is because there is a rather vast chasm between cloud computing for enterprise and cloud for scientific and technical computing and the Cloud Expo folks were definitely not catering to research scientists.
These days I’m prepared for the slew of viability questions that get hurled my way, but my answers rarely land, because people who consider it a ridiculous, unworkable concept aren’t receptive to counter-arguments, or even to the suggestion that developments on the horizon point to a more diverse array of use cases and proofs of concept that show some promise.
In other words, this question is not uncommon. But nowhere is it more pronounced than at the biggest event of the year for those on the traditional end of bigtime computing.
So let’s backtrack for a second…
Before delving into this discussion of perceptions of HPC and clouds, I think it’s best to save a little time and let Indiana University’s Ray Shepard encapsulate a common theme in the responses I received walking the show floor this week at SC10.
It only takes him a couple of quick minutes to describe what a portion of the HPC community thinks about cloud computing, so take a listen…
Shepard, like several others I encountered during this show and others that have been focused on traditional HPC, feels that cloud computing, as he directly states, is a slick marketing tool that has little relevance or value for HPC outside of testing the waters to move to more advanced systems.
There’s nothing necessarily wrong with this view, since solid use cases for routine, practical high-performance computing applications running in the cloud are still mounting, especially once you add in the virtualization layer (which is not essential to the definition of cloud, since clouds can also refer to on-demand access).
In fact, it could be another few years before this paradigm sees any wider recognition than it does today, simply because of where we stand in the technological maturation cycle. Some have argued that even a few years is unrealistic, since there is still a vast amount of work to be done to remove a host of barriers: performance hits, data movement issues, security, and cloud-based application development challenges, among others. Depending on which applications you’re familiar with, the list runs longer still.
What’s interesting, however, is that in conversations like the one I had with Ray, I do feel the need to point to a few of the large-scale examples of cloud computing use cases for HPC, from NASA’s Nebula to some of the work at CERN to the range of enterprise examples.
Am I trying to convince or advocate? Nope.
Because yes, Ray (and all others with whom I had similar conversations), it’s true: there really are some major challenges that present themselves the moment we step outside the solid use cases of already embarrassingly parallel workloads or the “bursting” model that saves an institution from investing in new hardware simply to handle the occasional peak load.
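To make the “bursting” idea concrete, here is a minimal, entirely hypothetical sketch of the policy it implies: jobs run on in-house hardware until the local queue saturates, and only overflow work that tolerates cloud conditions (e.g., embarrassingly parallel jobs) gets sent out. The function name, thresholds, and labels are my own illustration, not any real scheduler’s API.

```python
def choose_backend(local_queue_depth, local_capacity, parallel_friendly):
    """Decide where a job runs under a simple cloud-bursting policy.

    local_queue_depth: jobs currently queued on the in-house cluster
    local_capacity: queue depth the cluster can absorb before bursting
    parallel_friendly: True for embarrassingly parallel work that
        tolerates cloud latency and data-movement costs
    """
    if local_queue_depth < local_capacity:
        return "local-cluster"   # room on in-house hardware; no burst needed
    if parallel_friendly:
        return "cloud-burst"     # peak load overflows to rented instances
    return "local-queue"         # tightly coupled jobs wait for in-house nodes

print(choose_backend(3, 8, False))   # local-cluster
print(choose_backend(10, 8, True))   # cloud-burst
print(choose_backend(10, 8, False))  # local-queue
```

The point of the sketch is the asymmetry: bursting only pays off for work that doesn’t mind the cloud’s performance and data-movement penalties, which is exactly why the model stops short of covering most traditional HPC workloads.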
And it’s a thrill to be on that precipice where we can sit back and watch the adoption rates and the hype cycles duke it out.
The one thing that’s great about covering this space is that there is so much contention. More interestingly, these points of contention are not simply rooted in technological gripes or cheers for or against cloud computing for HPC; there are any number of subtler (but no less important) cultural issues at play. They make coming to an event like SC, the true HPC community summit of the year, an exercise in understanding how traditional modes of computing are changing, why they are being altered, and what drives resistance to change.
Posted by Nicole Hemsoth - November 19, 2010 @ 4:00 AM, Pacific Standard Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.