June 29, 2012
Delivering high performance computing as a service – or even in the cloud – comes with a set of challenges, both technical and social. These range from the design of the service model itself, to the people who need to be involved in the process, to the difficulties of executing workloads on remote HPC resources. Taking these factors into account, HPC veteran Wolfgang Gentzsch and Burak Yenier, vice president of operations at CashEdge, have developed an HPC-as-a-Service Experiment that brings together industry end users, resource providers, software providers, and HPC experts.
The technology components of HPC-as-a-Service, which enable multi-tenant, remote access to centralized resources and metered use, are not unfamiliar to this community. However, as service-based delivery models take off, with their promise of easy access to pay-per-use computing resources, our users have mostly remained on the fence, observing and discussing the potential hurdles to adoption in HPC.
Even with the challenges of data privacy, incompatible software licensing models, and a dozen other potential roadblocks, it's time we dip our toes in the water and figure out how to achieve the benefits of service-based delivery. How far are we from an ideal HPC-as-a-Service model? At this point, nobody knows.
What is fairly certain is that we now have the technology ingredients to make it happen. To glue them together into a coherent end-to-end process, the authors have come up with the "Uber-Cloud Experiment." We believe the technology is no longer the challenge; rather, it is bringing together the people who make service-based HPC work: the industry end users, the resource providers, the application software providers, and the HPC experts.
The experiment is scheduled to begin in late July and will run for three months, after which the results will be made publicly available to the HPC community. Anyone interested in participating can register at www.hpcexperiment.com. More information about the experiment is available at http://www.hpcwire.com/hpcwire/2012-06-28/the_uber-cloud_experiment.html.
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
The private-sector organizations least likely to adopt public cloud services for data storage are financial institutions. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services – and doing so at great cost.
In this week's hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.