July 10, 2012
Large-scale data analyses enabled by Numatix, Numerate's distributed framework
SAN BRUNO, Calif., July 10 — Numerate, Inc., a technology platform company that is leveraging proprietary algorithms and the power of cloud computing to transform the drug design process, announced today that its drug discovery platform, based on its open-source distributed framework, Numatix, has been shown to reliably and cost-effectively scale to 10,000+ cores using spot instances on Amazon Web Services' Elastic Compute Cloud (AWS EC2). The 10,000+ cores were used to screen virtual compounds against predictive assay models in one of the company's commercial partnerships.
Numatix is a dataflow processing platform developed by Numerate that enables high scalability and flexibility in distributed computing with minimal operational overhead. By combining online processing with detailed dataflows, Numatix allows for deep, interactive analyses of large data sets. At Numerate, Numatix is used for the computational assessment of billions of molecules in the search for new drugs and for the development of complex systems biology models of drug behavior in animals and humans.
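The dataflow pattern described above can be illustrated in plain Python. This is a hypothetical sketch only: the `score_molecule` function and the `screen` pipeline are stand-ins invented for illustration, not the Numatix API, and a real framework would distribute the map stage across thousands of machines rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def score_molecule(smiles):
    """Hypothetical stand-in for a predictive assay model:
    derives a pseudo-score from a molecule's SMILES string."""
    return sum(ord(c) for c in smiles) % 100

def screen(molecules, threshold=50, workers=4):
    """Map a scoring stage over the inputs, then filter the results:
    the same map/filter dataflow shape that a framework like Numatix
    would fan out across a large cluster."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(score_molecule, molecules))
    return [m for m, s in zip(molecules, scores) if s >= threshold]
```

In a distributed setting, each stage of such a pipeline becomes a unit of work that can be scheduled independently, which is what makes the approach amenable to very large clusters.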
"This level of scalability is essential for Numerate due to growing demand for our drug design services in the pharmaceutical and biotechnology industry," said Nigel Duffy, Ph.D., chief technology officer. "This achievement demonstrates the power of the Numatix platform and its ability to tackle large-scale computational problems cost effectively. We designed Numatix from the ground up to be extremely robust to all types of failures, allowing us to take full advantage of spot instances and their cost savings."
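Tolerating spot-instance terminations, as described in the quote above, generally comes down to treating every task as re-runnable and requeueing work whose worker disappears. The sketch below shows that basic idea under stated assumptions; it is not Numatix code, and the task/worker interface is hypothetical.

```python
import queue

def run_with_retries(tasks, worker, max_attempts=3):
    """Requeue tasks whose execution fails mid-run -- the core idea
    behind surviving spot-instance terminations. Hypothetical sketch;
    a real scheduler would also track worker liveness and deadlines."""
    pending = queue.Queue()
    for task in tasks:
        pending.put((task, 0))
    results = {}
    while not pending.empty():
        task, attempts = pending.get()
        try:
            results[task] = worker(task)
        except Exception:
            if attempts + 1 < max_attempts:
                # Worker died (e.g. spot instance reclaimed): retry.
                pending.put((task, attempts + 1))
            else:
                results[task] = None  # give up after max_attempts
    return results
```

Because every task can be retried safely, a scheduler built this way can buy the cheapest, most interruptible capacity available without risking the overall computation.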
"We were able to scale up to 10,000 cores in the US-East region of EC2," stated Brandon Allgood, Ph.D., Director of Computational Science. "And because the Numatix platform uses spot instances, the cost was only 2.7¢ per core-hour, or $270 per hour for a 10,000-core cluster – just one-third of the on-demand cost. Moreover, we were able to reach this scale while handling the ephemerality of spot instances and maintaining a high level of security."
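The quoted figures are internally consistent, as a quick check shows. The on-demand rate below is inferred from the "one-third of the on-demand cost" claim, not stated directly in the release.

```python
# Sanity check of the cost figures quoted above.
spot_rate = 0.027            # dollars per core-hour (2.7 cents)
cores = 10_000

cluster_cost = spot_rate * cores        # ~$270 per hour, as quoted
on_demand_rate = spot_rate * 3          # implied by "one-third of on-demand"

print(f"cluster: ${cluster_cost:.2f}/hour")
print(f"implied on-demand rate: ${on_demand_rate:.3f}/core-hour")
```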
Allgood added, "The limiting factor on this run was our cloud provider-independent security layer. On less sensitive types of workloads, we expect that Numatix should be able to scale well beyond 10,000 cores."
Numerate is currently making Numatix available to a broad range of users in the life sciences. Future applications can be envisioned in other fields such as finance, manufacturing, and energy. For more information, please visit www.numatix.net, contact email@example.com, or follow us on Twitter @numatixData.
Numerate is a privately held biotechnology company pioneering new computational methods for making the drug design process more data-driven, efficient and predictable. Numerate's in silico drug design platform combines proprietary algorithms and cloud computing with traditional medicinal chemistry approaches to address, in parallel, the factors that determine the success and failure of a drug candidate. Numerate applies this proprietary platform to design and develop small molecule therapeutics in collaboration with a variety of partners in the pharmaceutical, biotechnology, and academic fields. For more information, please visit www.numerate.com (corporate site) and www.numerati.com (technical site).
Source: Numerate, Inc.