March 23, 2011
For those who are used to running heavy-duty workloads on dedicated machines, it might seem like a stretch to say that it’s possible to procure and provision an HPC environment in minutes. This has been the highly suspect but much discussed “miracle” of IaaS.
This week Amazon Web Services is asking potential users to suspend this disbelief via a trial credit and video tutorial that is meant to show that this process isn’t complex.
The video, which provides live screenshots of the processes behind creating an HPC environment, uses a molecular dynamics tool to demonstrate how the process works following initial setup. From selecting an HPC-compatible AMI and choosing the number of servers to defining placement groups (to ensure low-latency communication) and creating credentials and security groups, the tutorial is wide in scope when one considers that it’s only a tick over ten minutes in length.
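For readers who prefer the command line to the console walkthrough, the same steps map onto a handful of API calls. The sketch below is illustrative only: the key, group, and placement-group names, the AMI ID, the instance count, and the instance type are placeholder assumptions, not values taken from Amazon’s tutorial, and it uses the modern AWS CLI rather than the tooling shown in the video.

```shell
#!/bin/sh
# Illustrative sketch only. All names, the AMI ID, the count, and the
# instance type below are placeholders, not values from the AWS video.

# 1. Create credentials: a key pair for SSH access to the nodes.
aws ec2 create-key-pair --key-name hpc-demo-key \
    --query 'KeyMaterial' --output text > hpc-demo-key.pem
chmod 600 hpc-demo-key.pem

# 2. Create a security group and open SSH to the cluster.
aws ec2 create-security-group --group-name hpc-demo-sg \
    --description "HPC demo security group"
aws ec2 authorize-security-group-ingress --group-name hpc-demo-sg \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# 3. Define a placement group so instances land close together,
#    giving the low-latency interconnect MPI codes expect.
aws ec2 create-placement-group --group-name hpc-demo-pg \
    --strategy cluster

# 4. Launch the servers from an HPC-compatible AMI (placeholder ID)
#    on a cluster compute instance type, inside the placement group.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --count 8 \
    --instance-type cc1.4xlarge \
    --key-name hpc-demo-key \
    --security-groups hpc-demo-sg \
    --placement GroupName=hpc-demo-pg
```

The ordering matters in one place: the placement group must exist before `run-instances` references it, which is why the tutorial has you define it before choosing the number of servers to launch.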
Aside from using the video and demo credits to actually run an application, this is a good intro for those who have been keeping up with all the talk about public cloud resources but haven’t taken them for a spin yet. It’s difficult to get a sense of the details behind using cloud resources if you’ve never broken the seal on an interface, even if it’s one you don’t plan on using.
One thing the non-technically minded viewer who is just nosing around might notice, however, is that while the process is simple enough, it requires background knowledge that can't be gleaned over the course of a ten-minute video.
For instance, while you might have an idea of what OS you want to remain glued to, which instance type will be best for your particular application? You can experiment with this to some degree, of course, but the wrong choice can end up costing you. Amazon provides a thorough overview of what the instances entail, but from conversations with end users, both those with and without much experience using Amazon’s IaaS offerings, the choices are not always clear cut.
One of the hallmarks (or some might say weaknesses) of Amazon is that it’s an Infrastructure-as-a-Service provider in the starkest possible sense. In other words, you’re given access to the machines you need, but beyond that there is not as much hand-holding as some need to get up and running. Videos like the one in question provide a good jumping-off point for getting the environment spun up, but some background know-how needs to be present.
At the very least, for those who are new to the public cloud user interface, this walkthrough gives a succinct sense of where to start as you firm up your background knowledge to select the right instance types and further refine and define your HPC environment.
Full story at Amazon Web Services
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on some cloud-based system to both coordinate efforts and absorb peak computational demand that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.