October 20, 2011
As part of its BigScience Challenge 2011, Cycle Computing is providing $10,000 in free CycleCloud time "to help researchers answer questions that will help humanity." The grand prize winner will receive the equivalent of eight hours on a 30,000-core cluster plus four hours of CycleCloud engineering support.
In his blog, Cycle Computing CEO Jason Stowe explains the impetus for the Challenge:
The problem is, today, researchers are in the long-term habit of sizing their questions to the compute cluster they have, rather than the other way around. This isn't the way we should work. We should provision compute at the scale the questions need. We're talking about taking questions that require a million hours of computation, and answering them in a day. Securely. At reasonable cost.
Stowe calls this "utility supercomputing," the ability to provision a TOP500 supercomputing resource for researchers to use for a few hours at a time. When they're done, the resources can be turned off, or reprovisioned for another purpose.
This model lets scientists do what they do best: focus on the research, with the assurance that the resources will be there when they need them.
The challenge seeks "to help a single researcher with an un-askable question" — a cure for cancer, Alzheimer's, or diabetes, or solutions to climate change. In addition to the grand prize, there are up to five finalist prizes of $500 in CycleCloud usage credit plus four hours of CycleCloud engineering support.
The contest is open to individuals or researchers from non-profit organizations. Interested parties can find out more at CycleCloud BigScience Challenge 2011. Proposals are due November 7, 2011, and finalists will be announced at Booth #443 at Supercomputing 2011.
Full story at Compute Cycles Blog
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb computational demand at peak times that exceeds their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
The private-sector industry least likely to adopt public cloud services for data storage is finance. Because they hold the most sensitive and heavily regulated type of data, personal financial information, banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.