November 16, 2011
Amazon Web Services just announced its most powerful offering yet for supercomputing users who need the power of a large cluster on demand. The newest EC2 Cluster Compute Instance, called Cluster Compute Eight Extra Large (CC2), is aimed at businesses and researchers who require additional HPC capacity in an elastic, pay-as-you-go format.
To highlight the potential of the offering, Amazon created a TOP500-class cluster by stringing together 1,064 such instances. The 240.09-teraflop system netted a #42 ranking on the 38th edition of the eminent list, announced yesterday at SC11. The company notes that an array of 290 CC2 instances yields a smaller-scale system running at 63.7 teraflops for less than $1,000 per hour.
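A quick back-of-the-envelope check, using only the figures quoted in this article and the $2.40-per-hour on-demand price given below, shows where the sub-$1,000 claim comes from (Linpack efficiency varies with cluster size, so treat the scaled estimate as rough):

```python
# Back-of-envelope price/performance for CC2 clusters, based on the
# figures quoted in this article. Linpack does not scale perfectly
# linearly, so the small-cluster estimate is approximate.

TOP500_INSTANCES = 1064
TOP500_RMAX_TF = 240.09      # measured Linpack teraflops for the #42 run
ON_DEMAND_RATE = 2.40        # USD per instance-hour (on-demand price)

per_instance_tf = TOP500_RMAX_TF / TOP500_INSTANCES   # ~0.226 TF each

small_cluster = 290
cost_per_hour = small_cluster * ON_DEMAND_RATE        # $696 per hour
est_tf = small_cluster * per_instance_tf              # ~65 TF (Amazon cites 63.7)

print(f"290 instances: ~{est_tf:.1f} TF for ${cost_per_hour:.2f}/hour")
```

By the same arithmetic, even the full 1,064-instance TOP500 configuration lists at roughly $2,554 per hour at the on-demand rate.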
The new Cluster instance includes two 8-core Intel Xeon processors (16 cores in all) and is rated at 88 EC2 Compute Units. The 64-bit platform comes with 60.5 GB of RAM and 3.37 TB of instance storage, connected to a 10 Gigabit Ethernet network. The CC2 moniker comes from the instance type's API name: cc2.8xlarge.
In a blog post, AWS evangelist Jeff Barr provides additional details:
We've enabled Hyper-Threading, allowing each core to process a pair of instruction streams in parallel. Net-net, there are 32 hardware execution threads and you can expect 88 EC2 Compute Units (ECU's) from this 64-bit instance type. That's nearly 90x the rating of the original EC2 small instance, and almost 3x the rating of the first-generation Cluster Compute instance.
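Requesting the new type works the same way as for any other EC2 instance; only the API name changes. Below is a minimal sketch using the Python boto library (the AMI ID, key pair, and group names are placeholders, not real values). Cluster Compute instances are launched into a placement group so that the nodes share the low-latency 10 Gigabit interconnect:

```python
# Sketch: launching cc2.8xlarge instances with boto.
# The AMI ID, key name, and group name below are placeholders.
import boto

conn = boto.connect_ec2()  # reads AWS credentials from the environment

# Group the nodes so they land on the 10 GbE cluster fabric together.
conn.create_placement_group('my-hpc-group', strategy='cluster')

reservation = conn.run_instances(
    'ami-xxxxxxxx',                 # a cluster-compute HVM AMI (placeholder)
    min_count=8, max_count=8,       # an 8-node cluster, for example
    key_name='my-key',
    instance_type='cc2.8xlarge',
    placement_group='my-hpc-group')

for instance in reservation.instances:
    print(instance.id, instance.state)
```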
The CC2 instance joins the other 12 instance types available on EC2, each with its own mix of CPU, memory, storage, and networking. Like the other Cluster instances, CC2 supports a variety of high-performance workloads, including image processing, genome sequencing, seismic analysis, financial modeling, and engineering design. Organizations may choose to use EC2 as their primary computing resource or to supplement their on-site clusters as business needs dictate. For academic users, EC2 can offer a way around the long wait times often associated with departmental clusters.
According to the company literature, the Cluster family of instances provides "proportionally high CPU resources with increased network performance and are well suited for High Performance Compute (HPC) applications and other demanding network-bound applications." In addition to CC2, there are two other instance types in this class: Cluster Compute Quadruple Extra Large (CC1) and Cluster GPU Quadruple Extra Large (CG1). CC1 pairs two quad-core Intel Xeon X5570 processors (rated at 33.5 EC2 Compute Units) with 1,690 GB of instance storage on a 64-bit platform. The CG1 instance offers a similar profile, but adds the processing power of two NVIDIA Tesla "Fermi" M2050 GPUs.
The CC2 instance is currently offered as a public beta, available only in Amazon's US East Region in Northern Virginia, with the company planning to add other regions throughout 2012. The CC2 instance is priced at $2.40 per hour, although a lower total cost may be achieved using Reserved Instances or by bidding on the EC2 Spot Market. Amazon also announced that pricing for CC1 instances has been reduced to $1.30 per hour.
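For workloads that can tolerate interruption, a Spot bid is placed through the same API. Here is a minimal sketch with boto, assuming a hypothetical bid below the on-demand rate (the $1.20 figure and AMI ID are illustrative, not quoted prices):

```python
# Sketch: bidding for cc2.8xlarge capacity on the EC2 Spot Market.
# The $1.20 bid and the AMI ID are hypothetical, for illustration only.
import boto

conn = boto.connect_ec2()
requests = conn.request_spot_instances(
    price='1.20',                  # maximum bid, USD per instance-hour
    image_id='ami-xxxxxxxx',       # placeholder AMI
    count=8,
    instance_type='cc2.8xlarge')

for req in requests:
    print(req.id, req.state)
```

Spot instances run only while the market price stays below the bid, which is what makes them a fit for checkpointed or restartable HPC jobs rather than long uninterruptible runs.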
The AWS staff will be at SC11 through the end of the week. The team is also taking part in a variety of scheduled SC11 activities.