January 16, 2013
NEW YORK, Jan. 16 – Cycle Computing, the leader in utility supercomputing software, today announced it has capped its record-breaking 2012 by winning the IDC HPC Innovation Excellence Award. IDC recognized Cycle's 50,000-core utility supercomputer run in the Amazon Web Services (AWS) cloud for pharmaceutical companies Schrödinger and Nimbus Discovery. The unprecedented cluster completed 12.5 processor-years of computational drug discovery work in less than three hours, at a cost of less than $4,900 per hour, and was recognized by IDC for its impressive return on investment.
The award capped a year of dramatic client growth and utility supercomputing accomplishments for Cycle, which recorded 85% growth in new clients, up from 80% in 2011. Building on its success in the life sciences sector throughout 2012, the company has increased its sales and support staff and has expanded across markets, including energy, manufacturing, academic and government research, and financial services.
Cycle's 2012 business highlights include:
"At Cycle, we believe that many of the world's impossible scientific, engineering, and risk management problems become possible with access to the right compute infrastructure. This IDC award reflects an incredible year for Cycle, full of milestones in helping humanity conquer new science through utility supercomputing," said Jason Stowe, CEO, Cycle Computing. "As we kick off 2013, we'll continue our leadership in delivering more software solutions to the growing market for technical computing, especially the increasing number of applications that need to finish strategic analyses in a fraction of the time and at a fraction of the cost of traditional options, using 50 to 50,000 cores. Cycle's products are uniquely poised to increase our customers' agility and technological leadership by making previously impossible workloads possible."
"In an industry that is evolving as rapidly as HPC, it's fascinating to be a part of the creativity and innovation we've seen in the past year," said Chirag Dekate, an analyst with IDC. "Cycle Computing's impressive 50,000-core run for Schrödinger and Nimbus Discovery demonstrated a strong ROI from the use of HPC, and we were pleased to recognize their accomplishment."
About Cycle Computing
Cycle Computing is the leader in utility supercomputing software. A bootstrapped, profitable software company, Cycle has delivered proven, secure, and flexible high performance computing (HPC) and data solutions since 2005. Cycle helps clients maximize existing infrastructure and speed computations on servers, VMs, and on demand in the cloud. Cycle's products help clients maximize internal infrastructure and increase compute power as research demands grow, as with the 10,000-core cluster for Genentech and the 30,000+ core cluster for a Top 5 pharmaceutical company that were covered in Wired, The Register, BusinessWeek, Bio-IT World, and Forbes. Starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs, and government and academic institutions including JP Morgan Chase, Purdue University, Pfizer, and Lockheed Martin.
Source: Cycle Computing
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
Financial institutions are the private-sector industry least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services – and doing so at great cost.
In this week's hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013 |
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013 |
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 08, 2013 |
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.