June 30, 2010
If virtualization is one of the biggest hurdles keeping clouds from handling HPC applications with supercomputer might and speed, why not simply remove the abstraction and ditch the virtualization altogether? The result would be very recognizable to many in research and academia; it's the age-old "rent a cluster" paradigm. And what is wrong with it? Clarification -- what is wrong with it now that the term "cloud" has all the gloss of a gleaming new datacenter and cluster rental as a concept (CRAC) seems far less appealing? Still nothing.
There are a handful of vendors offering time-tested HPC-as-a-Service to clients across the HPC application spectrum, but unfortunately, they too often trumpet these broadly useful services as cloud, which may not be the best approach. This is especially true if they're trying to reach traditional HPC folks, who are often more likely to scoff at the cloud concept for their work (remember, their jobs might depend on waving away the fluff) than embrace it. Even for those in the HPC space who are open to the idea of cloud for their needs, calling it cloud when it is simply renting time on a cluster might be misleading at first, or off-putting at worst.
Here's the only problem with chucking the virtualization -- and it is not one that's likely to keep anyone awake at night, except perhaps a few marketing managers at vendors offering "cloud" services. It's not really the cloud that everyone recognizes if you remove the virtualization, is it? At least not by some definitions. But then again, getting into complicated definitions-based discussions isn't really useful since this space is still evolving, and defining it prematurely (a la the grid days) will only serve to stifle development.
Fluff, But Not in a Cloud-Like Way
One of the greatest sources of frustration for those evaluating alternatives to buying their own clusters is determining if, and to what extent, cloud computing will enter the picture. And if the group selecting the new solution is locked into one definition of cloud or another, chances are they're thinking about the virtualization aspect (and all the sinister performance-related issues that entails). These perceptions, which are true to varying degrees depending on which applications we're talking about, are instilled in the minds of anyone who doesn't already have a nice, pleasantly parallel set of applications to toss into the cloud.
What is ignored far too often, however, is the relative weight of the core elements of cloud. While virtualization is central to many definitions, HPC has no reason to rely on the same criteria that suit the enterprise. For HPC, the cornerstone, the beacon, is availability. It is on-demand access. One of the most valuable and attractive aspects of cloud across the HPC spectrum is, without a doubt, availability of resources -- and in a scalable fashion, no less. If HPC-as-a-Service eliminates the performance problems caused by a virtualized environment while lending flexibility, scalability and immediate access to resources, clouds start to seem like more trouble than they're worth, at least for a certain range of applications that are not cloud-ready to begin with yet are needed by shops that can't plunk down many thousands for a cluster.
What HPC-as-a-Service Really Means
HPC-as-a-Service is not new. You have seen this before. But the technology behind it is being refined to the point where it may eclipse the more comprehensive, virtualized side of cloud definitions.
Cycle Computing CEO Jason Stowe summarized the concept of HPC-as-a-Service beautifully, stating, "cloud HPC cluster users can start up clusters without having to worry about putting in place various applications, operating systems, security, encryption and other software." Yes, this is something that can be done in a private, public or even hybrid cloud environment with relative ease -- but only after the dues have been paid. After all, before entering the blessed realm of the cloud there's some major work to be done. Major. You do not simply ship your data to Amazon and let them plug everything in for you -- not if you're a small enterprise with a relatively light load, and certainly not if you have any type of HPC application. You no longer have a detailed view of your operating environment, nothing is tailored to your hardware, and you have to program using specific APIs to make sure that everything is provisioned and set up properly, or your experiment with the cloud is going to fail. It is no easy task -- at least not according to any end users who have been directly interviewed by this little lady. No matter the cloud structure, provider or expected use scenario, it is not something one can simply walk into, and this is doubly true for HPC applications, of course, especially those that require some highly specialized behind-the-scenes manipulation to begin with.
Stowe continued that in the HPC-as-a-Service model, "Scientists can create clusters that automatically add servers when work is added and turn the servers off when the work is completed," which means that once the calculations are done, the researcher simply clicks what amounts to a power-down button to put an end to the massive availability of resources. It is in this simplicity -- in this easy off-and-on capability, the on-demand essence -- that HPC-as-a-Service could revolutionize how HPC is managed.
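The scale-with-the-queue behavior Stowe describes can be sketched in a few lines. This is a hypothetical illustration only -- the `ElasticCluster` class and its reconcile logic are stand-ins invented for this example, not any vendor's actual API:

```python
# A minimal sketch of queue-driven scaling: add servers while jobs are
# waiting, power everything down when the work is done. Hypothetical
# stand-in code, not a real provider interface.

class ElasticCluster:
    def __init__(self, max_nodes):
        self.max_nodes = max_nodes
        self.active_nodes = 0

    def reconcile(self, queued_jobs):
        """Match active servers to pending work; shut off at zero."""
        if queued_jobs == 0:
            self.active_nodes = 0                       # the "power down" button
        else:
            # one node per queued job, capped at what the service allows
            self.active_nodes = min(queued_jobs, self.max_nodes)
        return self.active_nodes

cluster = ElasticCluster(max_nodes=16)
print(cluster.reconcile(queued_jobs=40))   # burst of work: scales to the cap -> 16
print(cluster.reconcile(queued_jobs=3))    # lighter load: scales back down -> 3
print(cluster.reconcile(queued_jobs=0))    # work finished: everything off -> 0
```

In a real service the reconcile step would call provisioning and teardown APIs; the point is that the researcher never touches that machinery.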
According to Joshua Bernstein from Penguin Computing, a company that also provides virtualization-free HPC-as-a-Service (thus the rent-a-cluster paradigm, where the environment is easier to configure and visualize, not to mention manage), HPC-as-a-Service has enormous value for users for a number of reasons, among which simple economics sits at the heart. Bernstein says it is simple for customers to look at their current IT environment, whether it's a few machines or close to nothing at all, and know right away whether they have the $150k+ to invest in a new cluster. That's the easy part. Beyond the capex issue, there is also the question of whether they have the floor space to accommodate it and, more importantly, whether they have the systems administration expertise to keep it humming. Bernstein suggests that if you're going to use a cluster at 30-50 percent capacity all the time, based on observations over a year or, better, a three-year term, then you're better off buying. He notes, however, "it turns out that most of the time, customers don't run it at that rate all the time -- they'll say they run it at 100 percent, but if we ask them about what it was like in the previous month, it turns out that it was at almost nothing. So over the course of three years, it seems most are utilized at around 20-30% of the time. So it's much cheaper to rent than to buy."
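Bernstein's break-even argument is easy to check with back-of-the-envelope math. The $150k purchase price and the three-year term come from his remarks above; the cluster size and hourly rental rate below are hypothetical assumptions chosen purely for illustration:

```python
# Rent-vs-buy over a 3-year term. Only the $150k price and 3-year term
# come from the article; NODES and RATE_PER_NODE_HOUR are assumed values.

HOURS_PER_YEAR = 24 * 365
TERM_YEARS = 3
PURCHASE_PRICE = 150_000      # upfront cluster cost cited in the article
NODES = 32                    # assumed cluster size
RATE_PER_NODE_HOUR = 0.50     # assumed on-demand price per node-hour

def rental_cost(utilization):
    """Total rental spend over the term at a given average utilization."""
    hours_used = HOURS_PER_YEAR * TERM_YEARS * utilization
    return hours_used * NODES * RATE_PER_NODE_HOUR

for util in (0.25, 0.50):
    cost = rental_cost(util)
    verdict = "rent" if cost < PURCHASE_PRICE else "buy"
    print(f"average utilization {util:.0%}: rental ~ ${cost:,.0f} -> {verdict}")
```

With these assumed numbers, 25 percent utilization favors renting while 50 percent favors buying, which lines up with Bernstein's 30-50 percent rule of thumb; real prices would shift the exact crossover point.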
HPC-as-a-Service, in other words, might make more sense than actual clouds for a range of applications that might otherwise be thrown into the peril of a hostile, cloudy environment -- and it makes it possible for smaller research centers and shops to actually compete without the upfront investment. And herein lies the revolution that's going on.
On-Demand Flexibility and Configurability Are Key
HPC-as-a-Service offerings such as SGI's Cyclone, Cycle Computing, Penguin On-Demand (POD), or even those from smaller companies like Sabalcore are formidable foes to the mega-IaaS/PaaS providers seeking HPC converts. The problem is, they too often invoke the cloud name, which for this particular audience might not be a good idea.
Drop the standard definitions of cloud that too often hinge on virtualization and focus on one of the core elements that makes "cloud" attractive for HPC users. It all boils down to availability. It's having resources on demand. This means no waiting precious time for a job to run -- throw in the ability to scale back down or shoot off the charts and you've got yourself a deal. Supposedly, anyway. Furthermore, as Bernstein noted of the Penguin On-Demand (POD) service, companies are able to try before they buy a cluster, to see what is possible before they get their own to unleash on society -- again, viva la revolucion.
The only problem right now is conceptually small but a big deal from an adoption standpoint: when nearly everything has the "cloud" label slapped on it (which vendors can still get away with, since the definitions depend largely on each vendor's marketing team's creativity), it becomes almost impossible to evaluate one's options without overlooking solutions that might be far more appealing than cloud in the standard public sense.
Perhaps HPC-as-a-Service providers should call themselves what they really are -- and leave clouds out of it. For now.