April 15, 2010
Conceptually speaking, the public cloud is one of the best contemporary examples of an economy of scale, only in the realm of technology rather than, say, international markets or iPhones. It is this cost-at-scale dynamic that is partly responsible for the dramatic uptick in buzz about the cloud among enterprises, scientific and commercial alike, over the last few years.
While it appears that the mainstream dash to the cloud is officially scheduled to begin now, scientific and research-driven HPC communities cannot simply shed their dress suits for a pair of sweats and get in on the race. This is partly because they run dedicated systems tasked with chewing on highly specialized workloads, but the hesitancy also stems from the fact that no single cloud model, whether public, private or hybrid, will suffice on its own. In short, it is difficult for large-scale computing operations to realize the benefits of HPC in the cloud (cost savings, less wasted capacity, and so on) by committing to one cloud model over another.
This inability to settle on a particular model for cloud adoption and implementation is not due to general resistance to the possibilities, to the operational and infrastructural changes that would be required (perhaps costly in that dreaded up-front kind of way), or to any shortcoming in the scientific computing community. It is, rather, a matter of determining which model is most suitable for running data-intensive applications without sacrificing performance for cost.
Scientific HPC has generally relied on the dedicated-system model: a cluster devoted to the group's research specializations, with its own runtime environment handling all aspects of workload management and deployment. While this provides a wonderful basis for research directives, in practice it is often either inadequate or wasteful. For one set of priorities the existing HPC environment might not be powerful enough for the task, while on another day its vast resources (of all types, not just compute cycles) sit idle. Cloud computing provides a natural outlet for enterprises of all shapes and sizes during those "down" times when a system is not running at peak capacity.
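The idea of treating the public cloud as an overflow valve can be made concrete with a simple capacity policy. The sketch below is a hypothetical Python illustration only; the helper functions (local_queue_depth, cloud_node_count, launch_cloud_nodes, release_cloud_nodes) are placeholders for whatever scheduler and cloud APIs a given site actually has, and the thresholds are arbitrary.

```python
# Hypothetical sketch of a "burst when busy, release when idle" policy.
# local_queue_depth(), cloud_node_count(), launch_cloud_nodes(), and
# release_cloud_nodes() stand in for a site's real scheduler and cloud APIs.

BURST_THRESHOLD = 50      # queued jobs before renting public cloud capacity
IDLE_THRESHOLD = 5        # queued jobs below which rented nodes are released
NODES_PER_STEP = 16       # nodes to add or remove per adjustment

def adjust_capacity(local_queue_depth, cloud_node_count,
                    launch_cloud_nodes, release_cloud_nodes):
    """Grow or shrink rented cloud capacity based on local queue pressure."""
    queued = local_queue_depth()
    rented = cloud_node_count()

    if queued > BURST_THRESHOLD:
        # Dedicated cluster is saturated: rent additional nodes.
        launch_cloud_nodes(NODES_PER_STEP)
    elif queued < IDLE_THRESHOLD and rented > 0:
        # Demand has dropped: stop paying for idle cloud nodes.
        release_cloud_nodes(min(NODES_PER_STEP, rented))
```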
The issue at stake here is not merely what to do with the irregular demand that pulls scientific enterprises between having too much capacity and too little, but how they can capture the economies of scale offered by the vast, fluffy public cloud. In short, how can they use public, private and hybrid clouds in conjunction to create a workable, efficient solution that serves their research aims? The desire to extend grid computing by drawing on the nearly inexhaustible resources of the cloud has existed for some time, but enterprise and scientific computing organizations have long been hesitant to make the switch. That hesitancy stemmed largely from the migration itself: moving workloads and day-to-day management operations was seen as a hassle not worth taking on. Yes, it could all be done, but who would want to shepherd the switchover? The answer was often still no, even when massive cost savings might be realized once the headache was over.
Most IT managers realize that adopting cloud solutions could make life far less complex; whether they have been held back by the up-front cost in effort and overtime of implementing the cloud, or by the worry that they might be out of a job, is unclear. The fact remains that when you take away the daily to-do list of the grid sphere, one scheduling, provisioning, and process-management hurdle after another, you are left with something far more manageable. More manageable means less wasteful, less wasteful means more productive, and more productive means more money, whether in commercial profits or expanded research possibilities.
It is this tension that creates the need for solutions that straddle the border between HPC and enterprise and between public and private, and, more importantly, for hybrid cloud models. In line with that, on April 14 cloud computing management firm RightScale announced the launch of its Grid Computing Solution Pack, created to handle grid processing on public cloud infrastructures, including, of course, Amazon Web Services. Using the public cloud for massive data-intensive applications that require significant computing power is nothing new in itself, but for smaller enterprises this is an important enhancement.
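To give a rough sense of what "grid processing in the public cloud" means at the lowest level (this is only an illustration of the general pattern, not a depiction of RightScale's product), renting a block of EC2 capacity for a run and releasing it afterward might look roughly like the following Python sketch using the boto library; the AMI ID, key pair and instance type are placeholders.

```python
# Rough sketch: rent a block of EC2 instances for a grid run, then shut them
# down when the computation finishes. Uses the boto library; the AMI ID,
# key pair and instance type below are placeholders.

import time
import boto

conn = boto.connect_ec2()  # credentials are read from the environment/config

# Launch a 64-node "personal grid" on demand.
reservation = conn.run_instances(
    'ami-00000000',            # placeholder image with the grid stack baked in
    min_count=64, max_count=64,
    instance_type='m1.large',
    key_name='my-keypair',     # placeholder key pair
)
nodes = reservation.instances

# Wait until the nodes are running, hand them to the grid middleware, run the job...
while any(n.update() != 'running' for n in nodes):
    time.sleep(15)

# ...and when the computation is done, stop paying for the capacity.
conn.terminate_instances(instance_ids=[n.id for n in nodes])
```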
In an interview with HPC in the Cloud, RightScale CTO Thorsten von Eicken commented on this topic, stating:
I see two things happening: one is that more in the traditional grid computing where someone needs to run a 60 or 64 node computation, the cloud has enabled us to launch this on-demand so the user who has a series of computations to run and has his personal cluster—his personal grid—can do his computation and shut it down when he's done. It changes this from a queueing problem of "where do I sit in the priority queue to get time on the in-house cluster" to more of a provisioning and automation problem, which is what RightScale deals with… I want to run this software and this process, what button do I push?
You'd be really hard-pressed to find production private clouds today. It really is a technology that is in a phase of proof of concept; we need to see it implemented. We're [RightScale] working with a number of companies that develop the infrastructure to build private clouds. Our technology—our solution—works across both public and private clouds; we see interest in hybrid use cases (which is a term used in many ways)—situations where, for instance, a large pharmaceutical company may have a large internal grid; they may reserve it for larger runs or sensitive data that they cannot let outside of the firewall, but they want to provide the same environment for their researchers so they can fire off their private grid or private computer array in the public cloud in order to run research, tests, and runs that can be offloaded so the internal grid and infrastructure can be dedicated to the special runs that have to happen. So the notion of these hybrid use cases where you do some things internally and some externally is very much in demand and is something we support and are focusing on.
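Von Eicken's pharmaceutical example boils down to a placement decision: sensitive data and the largest runs stay on the internal grid, while routine research runs are offloaded to an identical environment in the public cloud. A minimal sketch of such a policy, with hypothetical submission hooks (submit_to_internal_grid, submit_to_public_cloud) and an arbitrary size threshold, might read:

```python
# Hypothetical sketch of the hybrid placement policy described above: sensitive
# data and large runs are reserved for the in-house grid; everything else is
# offloaded to a public-cloud grid that presents the same environment.
# The submission functions are placeholders for a site's real interfaces.

from dataclasses import dataclass

LARGE_RUN_THRESHOLD = 128  # placeholder: node count above which a run is "large"

@dataclass
class GridJob:
    name: str
    nodes_required: int
    handles_sensitive_data: bool

def place_job(job, submit_to_internal_grid, submit_to_public_cloud):
    """Route a job either to the internal grid or to the public cloud."""
    if job.handles_sensitive_data or job.nodes_required >= LARGE_RUN_THRESHOLD:
        # Data that cannot leave the firewall, and the largest runs, keep the
        # dedicated internal infrastructure for themselves.
        return submit_to_internal_grid(job)
    # Routine research runs go to the public cloud, freeing the internal grid.
    return submit_to_public_cloud(job)
```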
You can view the contents of the RightScale product release here.
This article is the product of research driven by the paper "In Cloud, Can Scientific Communities Benefit from Economies of Scale?", which will be the subject of an analysis with one of its co-authors later this week. Check it out at http://arxiv.org/ftp/arxiv/papers/1004/1004.1276.pdf.
Many thanks to RightScale CTO Thorsten von Eicken and RightScale VP of Marketing Betsy Zikakis, as well as to the authors of the above paper.