November 19, 2011
During the SC11 show this past week, I sat down with Cycle Computing CEO Jason Stowe to learn more about the CycleCloud BigScience Challenge. Cycle crafted the contest based on the noble idea that science should not be held back due to a lack of computational resources, prompting the company to put out a call to non-profit institutions: do you have an HPC problem that will benefit humanity in a large-scale way? The best of the respondents would be rewarded with some serious free cycles, about 300,000 compute hours of CycleCloud time in the Amazon Web Services cloud infrastructure.
Stowe is enthusiastic about the possibilities for enabling big science. Despite the short submission window, Cycle heard from many worthy candidates. The only snag in the process, from Stowe's perspective, came when video game developers showed interest in the free cycles. Stowe worried that with all the glitter of the pop-sci coverage, people would miss what was truly gold. He was disheartened to hear that people wanted to run video games, explaining that while games may demand the same kind of problem-solving our field prizes, this challenge has a more humanitarian bent.
Cycle selected the five most compelling finalists based on two primary requirements: the application had to benefit humanity, and the workload had to be well-matched to a cloud infrastructure, with no significant I/O overhead. The selections were revealed at the SC opening gala on Monday night in the Amazon booth. Initially, the finalists were to receive the cycle equivalent of $500 each, with $10,000 reserved for the best of the bunch. Amazon, however, sweetened the pot, raising the finalist award to $1,500 and bumping up the grand prize to $12,500, which translates into 300,000 compute hours' worth of computation to benefit humanity.
The selected applications are all aimed at solving critical problems, such as Parkinson's disease, diabetes, stem cell research, human genome research, and photo-voltaic cell study. The finalists come from such respected institutions as Harvard Medical School's Green Energy project, the University of Wisconsin, the Ross Lab in Munich, and the University of Notre Dame.
Stowe makes the case that we need to reframe the way we approach scientific challenges. He believes the problem should dictate the size of the resource, not the other way around, and in his view the bursty usage model of the cloud is one way of enabling that.
"Scientists can now get research done far faster and cheaper than they ever have been able to do before thanks to grabbing these kinds of really large resources for very short periods of time to answer specific questions. We want to get people out of the habit of constraining the questions they're asking to the size cluster that they currently own or can afford, and instead get them building the infrastructure to answer the question that really needs to be asked. That's why we did this challenge."
A worthy cause in its own right, strong outcomes could serve as proof of concept to engender wider support for the on-demand model.
As for who's going to win, it's in the judges' hands now. The who's-who panel includes Kevin Davies, editor-in-chief of Bio-IT World; Matt Wood, evangelist at Amazon Web Services; and Peter S. Shenkin, vice president at Schrödinger. Stowe will also be weighing in on the decision, which he says should be finalized by December 2011 or January 2012.
Editor's note: more SC11 coverage coming soon: stay tuned.