May 11, 2011
Chances are, if you’ve been lurking around here for some time, you’re already quite familiar with cloud computing in the HPC context. However, it’s easy to get lost in the minutiae that constitute those clouds: the management layers, virtualization, latency, and beyond.
To put things into perspective, we’re posting a decent overview (and a link to some free time on Azure, which is running in tandem with the free Amazon trials) from a researcher focused directly on the practical side of running HPC applications on remote resources.
Rob Gillen, a cloud computing researcher with Planet Technologies out of Knoxville, Tennessee, spent a few moments on video laying down some of the core concepts behind scientific uses for HPC clouds.
In the brief video below, he carves out the concept of cloud as it applies to the technical and research computing space and provides a few details about how clouds signal the democratization of large-scale computing.
Gillen’s host asks him what HPC generally encompasses, to which he responds with a litany of examples. He notes, however, that HPC cloud computing sits at the “lower end of the HPC space,” working well for average researchers or academics who lack access to high-end machines.
Using Microsoft’s Windows Azure as a starting point, he offers the example of the genome sequence alignment tool BLAST, which is fronted by an Excel worksheet: the user defines the problem, fills in the details, and ships it off for remote processing. He notes that this is where the democratization comes in. For instance, a professor can use actual BLAST in a class and then, when the course is over, simply shut the resources down and stop incurring charges.
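For the curious, here is a rough, hypothetical sketch of what the worker side of such a pipeline might look like: a cloud node takes a user-submitted query file, runs an NCBI BLAST+ search against a pre-staged database, and writes the report back for collection. The file paths, database name, and orchestration are illustrative stand-ins, not the actual Azure BLAST service.

    # Hypothetical worker-side sketch: run an NCBI BLAST+ search on a
    # user-submitted query file and write the report to an output directory.
    # Paths and the database location are made-up examples, not the real
    # Azure BLAST service layout.
    import subprocess
    from pathlib import Path

    def run_blast_job(query_file: str, db_path: str, out_dir: str) -> Path:
        out_file = Path(out_dir) / (Path(query_file).stem + ".blast.txt")
        subprocess.run(
            [
                "blastn",              # nucleotide-nucleotide search
                "-query", query_file,  # sequences submitted by the user
                "-db", db_path,        # pre-staged reference database
                "-outfmt", "6",        # tabular output, easy to post-process
                "-out", str(out_file),
            ],
            check=True,
        )
        return out_file

    if __name__ == "__main__":
        report = run_blast_job("queries/user_batch.fasta", "db/nt", "results")
        print(f"BLAST report written to {report}")

Once the reports are collected, the compute instances can be deallocated, which is exactly the pay-only-while-it-runs property Gillen highlights.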
Outside of the rapid-fire definitions, did you happen to wonder who you would contract, right this moment, to build you a wall-to-wall dry-erase room like the one shown?
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and scale up their problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
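To make the federation idea concrete, here is a minimal sketch, entirely my own illustration rather than the UberCloud implementation, of how a scheduler might aggregate heterogeneous resource pools and dispatch each task to the pool with the most headroom. The pool names, core counts, and dispatch rule are assumptions.

    # Minimal federation sketch: aggregate heterogeneous resource pools
    # (local cluster, public cloud, partner site) and send each task to the
    # pool with the most free cores. Purely illustrative.
    from dataclasses import dataclass

    @dataclass
    class ResourcePool:
        name: str
        total_cores: int
        used_cores: int = 0

        @property
        def free_cores(self) -> int:
            return self.total_cores - self.used_cores

    def dispatch(task_cores: int, pools: list[ResourcePool]) -> str:
        # Keep only pools that can still fit the task, then pick the roomiest.
        candidates = [p for p in pools if p.free_cores >= task_cores]
        if not candidates:
            raise RuntimeError("no federated pool can satisfy the request")
        best = max(candidates, key=lambda p: p.free_cores)
        best.used_cores += task_cores
        return best.name

    pools = [ResourcePool("campus-cluster", 512, 480),
             ResourcePool("public-cloud", 4096, 1024),
             ResourcePool("partner-site", 1024, 900)]
    print(dispatch(64, pools))  # -> "public-cloud"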
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate their efforts and to absorb peak computational demand that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
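The pattern here is essentially cloud bursting. As a toy sketch of the decision logic, with thresholds and units invented for illustration rather than drawn from ATLAS’s actual policy, it might look like this:

    # Toy cloud-bursting decision: if the queued work exceeds what in-house
    # capacity can clear before the deadline, the overflow goes to the cloud.
    # All numbers are made up for illustration.
    def plan_burst(pending_core_hours: float,
                   local_capacity_core_hours: float) -> float:
        """Return the core-hours to send to the cloud (0 if none)."""
        overflow = pending_core_hours - local_capacity_core_hours
        return max(0.0, overflow)

    # Example: 120k core-hours queued, 90k available locally in time.
    print(plan_burst(120_000, 90_000))  # -> 30000.0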
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.