January 22, 2013
The UberCloud Experiment (also known as the HPC Experiment) started in July of last year to explore the process of accessing and using remote HPC resources, or HPC-as-a-Service. From an initial pool of 160 participating organizations around the world, 25 teams were created, each consisting of an industry end-user and their application, a software provider, a computational resource provider, and an HPC expert who handles the porting of the application onto the resource and serves as team manager.
Round 2, which began in December and features both CAE and life sciences applications, has attracted 300 participating organizations, with 20 established teams and more to come. The project management tool BaseCamp guides teams through 22 well-defined steps of the end-to-end process, while a new services directory, UberCloud Exhibit, lists UberCloud hardware, software and expertise services.
Now it is time to report back to the HPC community on what we have learned so far: which industry applications have been implemented on remote computing resources and in the cloud; how the teams have faced and resolved major roadblocks; what the optimal end-to-end process looks like; and what additional guidance and recommendations we can offer. While the Experiment continues, we are beginning to receive invitations to present these findings at conferences. Here are some of them – we invite interested parties to stop by and talk to us:
If you would like to participate in Round 3 of the UberCloud HPC Experiment, starting in April, you can register now and we'll send you additional information. And if you want to learn more about some of the major services used in the Experiment, please visit the interactive UberCloud Exhibit.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources in order to tackle large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational workloads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, from drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies built around affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.