September 19, 2011
NEW YORK, NY, September 19 -- Cycle Computing recently provisioned a 30,000-core cluster in the cloud, the first of its kind in the industry and the company's second publicly announced massive cluster run this year. This milestone for scientific computing follows Cycle's earlier scaling of a 10,000-core cluster in April. Since 2005, Cycle has helped its clients maximize the world's compute resources through its reliable, secure, and elastic HPC solutions, both internally and in the cloud.
The global 30,000-core cluster was run with CycleCloud, Cycle's flagship HPC-in-the-cloud service. Automating the process of provisioning resources and replicating data across two continents, CycleCloud performed hundreds of thousands of computational tests, with run time per job averaging 37 minutes and total work completed nearing 100,000 hours. Researchers at a Global 500 pharmaceutical company completed nearly 11 years' worth of molecular dynamics computing in a few hours, at a peak cost of $1,279 per hour and with no upfront capital required from the client.
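The published figures are internally consistent, as a quick back-of-envelope check shows (the exact job count is not stated in the release and is derived here from the averages):

```python
# Sanity-check the published run figures. Only the 37-minute average,
# the ~100,000 total hours, and the core count come from the release;
# the job count and wall-clock time are derived, not quoted.
AVG_JOB_MIN = 37            # average run time per job, minutes
TOTAL_CORE_HOURS = 100_000  # "total work completed nearing 100,000 hours"
CORES = 30_472              # cluster core count

jobs = TOTAL_CORE_HOURS * 60 / AVG_JOB_MIN      # ~162,000 jobs: "hundreds of thousands"
years = TOTAL_CORE_HOURS / (24 * 365)           # ~11.4 serial years of computing
wall_clock_hours = TOTAL_CORE_HOURS / CORES     # ~3.3 hours at full utilization

print(f"{jobs:,.0f} jobs, {years:.1f} serial years, {wall_clock_hours:.1f} wall-clock hours")
```

The derived numbers line up with the release's claims of "nearly 11 years of molecular dynamics computing in a few hours."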
Cycle Computing's flagship cluster management and performance analytics software product, CycleServer, was used to track utilization, diagnose performance, and manage the progress of the scientific workflow. In addition to the CycleServer software, Cycle engineers leveraged open source projects including Condor, Linux, and Opscode's Chef cloud infrastructure automation system.
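In a Condor-based setup like this, large batches of short jobs are typically described in a single submit file and fanned out across the cluster. The sketch below is purely illustrative, assuming hypothetical file names and a job count inferred from the release's averages; it is not Cycle's actual workflow.

```python
# Hypothetical sketch: generate a Condor (now HTCondor) submit description
# for a large batch of short molecular-dynamics test jobs. The executable
# name, log paths, and job count are assumptions for illustration.
N_JOBS = 160_000  # order of magnitude implied by the release's figures

submit_description = f"""\
universe   = vanilla
executable = run_md_test.sh
arguments  = $(Process)
output     = logs/job_$(Process).out
error      = logs/job_$(Process).err
log        = md_batch.log
queue {N_JOBS}
"""

with open("md_batch.sub", "w") as f:
    f.write(submit_description)
# The batch would then be handed to the scheduler with: condor_submit md_batch.sub
```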
Cycle Computing also announced today a new CycleServer plug-in for Chef monitoring and analytics, called Grill, providing visibility into what's cooking, infrastructure-wise, in this 30,000-core Chef environment. Opscode's Chef software enabled Cycle to consistently configure over 3,800 servers for this cluster, shaving days off preparation and operational overhead. Grill extends CycleServer's visualization and analytics-based alert technology to data about Chef installations.
"Cycle is unique amongst our Chef users in leveraging Chef's ability to configure this scale of infrastructure to solve scientific problems," says Christopher Brown, Chief Technology Officer of Opscode. "We think CycleCloud is emblematic of the kind of agility Chef provides users, and are excited to see the Grill plug-in supporting Chef in Cycle's suite of HPC analytics tools."
Leveraging CycleCloud software and Cycle's HPC proficiency delivered these stats:
-- Infrastructure: 3,809 instances, each with 8 cores and 7 GB RAM
-- Global-scale: Multi-datacenter clusters with simple user interfaces
-- Cluster Size: 30,472 cores, 26.7TB RAM, 2 PB of disk space total
-- Security: Engineered with HTTPS, SSH & 256-bit AES encryption
The end-user experience for using CycleCloud is:
-- User Effort: One-click global cluster at massive scale
-- Start-up Time: Thousands of cores in minutes
-- Up-front Capital Investment/Licensing Fees: $0
-- Total CycleCloud and Infrastructure Cost: $1,279/hour
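The stats above imply striking unit economics, which can be derived directly (the per-core-hour and per-instance-hour rates below are computed from the published figures, not quoted in the release):

```python
# Derive per-unit costs from the published figures. All inputs are from
# the release; the derived rates are back-of-envelope, not official pricing.
COST_PER_HOUR = 1279  # total CycleCloud + infrastructure cost, USD/hour
CORES = 30_472
INSTANCES = 3_809

cost_per_core_hour = COST_PER_HOUR / CORES          # ~ $0.042 per core-hour
cost_per_instance_hour = COST_PER_HOUR / INSTANCES  # ~ $0.34 per 8-core instance-hour

print(f"${cost_per_core_hour:.3f}/core-hour, ${cost_per_instance_hour:.2f}/instance-hour")
```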
"CycleCloud is dedicated to changing the way the industry looks at scientific computing," said Jason Stowe, founder and CEO, Cycle Computing. "With this 30,000-core cluster under our belts, we have found our work is not only repeatable, but customizable to meet clients' timeframe and budgetary needs. We will continue to write software that creates secure, mega-elastic, and fully supported cloud clusters, and strive to empower researchers to answer questions that were unanswerable before CycleCloud."
To learn more about the development of the 30,000-core cluster and Cycle's projects leading up to this accomplishment, please visit the Cycle Computing blog: Compute Cycles (http://blog.cyclecomputing.com/).
About Opscode
Opscode is the leader in cloud infrastructure automation. We help companies of all sizes develop fully automated server infrastructures that scale easily and predictably, can be quickly rebuilt in any environment, and save developers and systems engineers time and money. Opscode's team is comprised of web infrastructure experts responsible for building and operating some of the world's largest websites and cloud computing platforms. Opscode is headquartered in Seattle.
About Cycle Computing
Cycle Computing, a bootstrapped, profitable software company, has delivered proven, secure, and flexible high performance computing (HPC) and data solutions since 2005. Cycle helps clients maximize existing infrastructure and speed computations on desktops, servers, and on-demand in the cloud. Thanks to our CycleServer HPC management software and our CycleCloud fully supported and secured HPC clusters, Cycle clients experience faster time-to-market, decreased operating costs, and unprecedented service and support. Starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs, and government and academic institutions including JP Morgan Chase, Purdue University, Pfizer, and Lockheed Martin.
Source: Cycle Computing