July 13, 2010
BERKELEY, July 13, 2010 -- Today Amazon Web Services (AWS) launched Cluster Compute Instances for Amazon EC2, making high-bandwidth, low-latency high performance computing (HPC) resources available in a cloud-computing environment. To ensure that the new Amazon EC2 service can handle the full gamut of demanding HPC applications, from electronic design automation to financial services, Amazon Web Services worked closely with researchers at the Lawrence Berkeley National Laboratory (Berkeley Lab).
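The release itself contains no technical detail on how that low latency is delivered; the underlying mechanism is EC2's "cluster" placement group, which co-locates instances on low-latency, high-bandwidth networking. Below is a minimal sketch using today's boto3 SDK for illustration. The SDK choice, region, AMI ID, and group name are assumptions; only the cc1.4xlarge instance type comes from the 2010 launch (it has since been retired).

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A "cluster" placement group asks EC2 to place the instances close
# together for low-latency, high-bandwidth communication between them.
ec2.create_placement_group(GroupName="hpc-demo", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-12345678",       # placeholder AMI, not a real image ID
    InstanceType="cc1.4xlarge",   # the original Cluster Compute type
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-demo"},
)

Requesting MinCount equal to MaxCount is the usual pattern for HPC work: a partial cluster is rarely useful, so the request should succeed or fail as a whole.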
"Many scientific research areas require high-throughput, low-latency, interconnected systems where applications can quickly communicate with each other. NERSC has extensive experience in setting up and maintaining these types of high-performance computing systems and we were happy to share this expertise in our collaboration with AWS," says Keith Jackson, a computer scientist in the Advanced Computing for Sciences Department of the Berkeley Lab's Computational Research Division (CRD).
The National Energy Research Scientific Computing Center (NERSC) at the Berkeley Lab is the primary high performance computing facility supporting unclassified scientific research sponsored by the U.S. Department of Energy. NERSC serves approximately 3,000 researchers annually in disciplines ranging from cosmology and climate to chemistry and nanoscience. To ensure that NERSC computing systems can successfully handle the wide range of scientific computing applications required by its users, the center's staff runs a series of comprehensive benchmarks on every machine procured by the facility. Researchers in NERSC's Software and Programming Group also developed the Integrated Performance Monitoring (IPM) software to measure how well scientific applications perform on these HPC systems. To test the HPC performance of Cluster Compute Instances for Amazon EC2, the Berkeley Lab team applied these same tools to Amazon's new offering.
"When we applied these tests to the new Cluster Compute Instances for Amazon EC2, we found that the new offering performed 8.5 times faster than the previous Amazon instance types," adds Jackson, who led the Berkeley Lab portion of the collaboration.
Magellan, a Department of Energy project funded by the American Recovery and Reinvestment Act, is investigating whether cloud computing could meet the specialized computing needs of science. As part of this project NERSC experts are looking into the characteristics that are required to support a scientific HPC workload in a cloud computing environment, and are making these findings available to providers. They are also conducting comparative studies to understand how commercial clouds behave and whether they will be beneficial for science research.
In addition to Jackson, two members of NERSC's Magellan research team contributed to this collaboration with Amazon Web Services: Lavanya Ramakrishnan and Shane Canon. Other Berkeley Lab collaborators include NERSC's John Shalf, Harvey Wasserman and Nick Wright; the Information Technology Division's Krishna Muriki and Qin Yong; and Greg Bell, who is currently with the Energy Sciences Network.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system they call Service Level Agreement (SLA) scheduling, intended to provision resources about as well as a comparable in-house system would. They combined it with an on-demand resource provisioner to optimize the utilization of virtual machines.
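The brief gives no algorithmic detail. One plausible reading is a scheduler that orders jobs by SLA deadline and boots a new VM only when no idle one is available, reusing warm VMs to keep utilization high. A hypothetical sketch follows; the class names, earliest-deadline-first policy, and reuse heuristic are all assumptions, not the paper's actual method.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    deadline: float                      # SLA deadline drives priority
    name: str = field(compare=False, default="job")

class OnDemandProvisioner:
    """Reuses idle VMs before booting new ones, to keep utilization high."""
    def __init__(self):
        self.idle = []
        self.booted = 0

    def acquire(self):
        if self.idle:
            return self.idle.pop()       # reuse a warm VM if one exists
        self.booted += 1
        return f"vm-{self.booted}"       # stand-in for a real VM handle

    def release(self, vm):
        self.idle.append(vm)             # keep warm for the next job

def run_sla_schedule(jobs):
    heap = list(jobs)
    heapq.heapify(heap)                  # earliest SLA deadline first
    prov = OnDemandProvisioner()
    while heap:
        job = heapq.heappop(heap)
        vm = prov.acquire()
        print(f"{job.name} (deadline {job.deadline}) -> {vm}")
        prov.release(vm)

run_sla_schedule([Job(9.0, "render"), Job(3.0, "etl"), Job(6.0, "sim")])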
Experimental scientific HPC applications continue to move to the cloud, as covered here several times over the last couple of weeks. Among those stories, Robert Jenkins, co-founder and CEO of CloudSigma, penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and the European Space Agency (ESA)...
When moving excess or experimental HPC applications to a cloud environment, there will always be obstacles; were that not the case, the cost-effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St. Andrews produced an intriguing report on the state of cloud computing, paying significant attention to the problems facing the field.
Jun 17, 2013 | With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, and to that end is partnering with Verne Global and its Icelandic datacenter, which is known for its green computing credentials.
Jun 12, 2013 | Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. To support that shift, IBM released a set of its Redbooks guides, in part to help institutions move high performance computing applications to the cloud.
Jun 06, 2013 | The San Diego Supercomputer Center launched a public cloud system for area universities, designed specifically to run on commodity hardware with high-performance solid-state drives. The system, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California.