January 08, 2007
Purdue University's Rosen Center for Advanced Computing has become the largest provider of high-throughput computing cycles on the National Science Foundation's TeraGrid.
Carol X. Song, senior research scientist in the Rosen Center and principal investigator for TeraGrid at Purdue, says that more than 4,300 computers of all sizes -- from desktop machines students use for homework and e-mail to large, powerful research systems -- are linked together with the open-source software Condor.
"By using Condor and making resources available over the TeraGrid, we are leveraging our national and international science resources," Song says. "We will continue to expand our Condor pool to include additional machines as well as machines at other campuses through regional grids."
By early 2007, Purdue officials expect the university's Condor pool to include more than 5,000 machines.
Miron Livny, professor of computer science at the University of Wisconsin, says that Purdue's Condor pool is the largest in the nation.
"Purdue is committed to a vision, and they are making that vision a reality. I am pleased to say that early on I worked closely with people at Purdue, and we shared this vision for research computing," Livny says. "I think it's wonderful that Purdue has taken the leadership on this on the TeraGrid. And I don't pass out these kinds of compliments often."
One researcher, Michael Deem, Rice University's John W. Cox Professor of Chemical Engineering, has used nearly one million hours of computing time to catalog the chemical structures of compounds called zeolites.
Deem aims to identify and categorize as many of these structures as possible so that chemical engineers can select the exact zeolite they need. This is just the kind of high-throughput job that works well on Purdue's distributed computing system.
"The throughput is much higher there than I can get locally because of the large size of the Condor pool at Purdue," Deem says. "Purdue is doing a great service to the scientific community by providing this resource."
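Work like Deem's arrives in a Condor pool as a batch of many small, independent jobs. As an illustration only -- the executable and file names below are hypothetical, not Deem's actual setup -- a classic Condor submit description for such a parameter sweep might look like:

```
# Illustrative Condor submit description for a high-throughput sweep.
# "score_zeolite" and the structure/result file names are hypothetical.
universe   = vanilla
executable = score_zeolite
arguments  = structure_$(Process).dat
input      = structure_$(Process).dat
output     = result_$(Process).out
error      = result_$(Process).err
log        = zeolite_sweep.log
queue 1000
```

The `$(Process)` macro expands to 0 through 999, so one `condor_submit` command queues a thousand independent tasks that Condor farms out to whatever machines are free.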
The distributed computing resource is available over the TeraGrid, of which Purdue is one of nine resource provider sites. Charlie Catlett, director of the NSF's TeraGrid project, says that it is important to provide a variety of computing resources to researchers.
"High-throughput, or capacity, computing is extremely important to the TeraGrid user community," Catlett says. "Purdue and the Condor team have provided an excellent model for harnessing campus cyberinfrastructure in a way that benefits local users and also serves the national community."
The computers in Purdue's Condor pool spend roughly 45 percent of their time on their intended purpose, another 45 percent running Condor jobs, and sit idle the remaining 10 percent.
"This shows that our site can provide significant computing power to the nation without requiring dedicated resources," Song says.
Preston Smith, a systems research engineer for Purdue's Rosen Center, says Purdue has refined its use of the software by running Condor as a secondary scheduling system, which lets the computers be put to work whenever they are available instead of only at set times, such as overnight. The primary scheduler for computing jobs at the Rosen Center is the Portable Batch System, or PBS; Purdue runs PBS Pro.
"The thing we do that is unique is that we use Condor in tandem with PBS Pro," Smith says. PBS Pro was developed by Altair Engineering.
Condor and PBS Pro are connected so that they can "talk" to each other before a job is assigned to see what computers are available. This scheduling tool allows Condor to send a job to a computer whenever it's not being used instead of at set times, which allows many more unused computing cycles to be harvested, Smith says.
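The core of that tandem arrangement is a simple availability check. The toy sketch below is not Purdue's actual integration code -- only an assumed model of the decision it describes: Condor may claim a machine only when the primary scheduler neither runs nor holds a reservation on it.

```python
# Toy model (hypothetical, not Purdue's implementation) of the
# tandem-scheduling check: Condor backfills a node only when the
# primary scheduler (PBS Pro) reports it completely free.

def condor_may_claim(pbs_running: bool, pbs_reserved: bool) -> bool:
    """Return True if Condor may harvest this node's idle cycles."""
    return not (pbs_running or pbs_reserved)

# Idle node: Condor can harvest the cycles.
assert condor_may_claim(pbs_running=False, pbs_reserved=False)
# PBS job running, or a reservation pending: Condor stays off.
assert not condor_may_claim(pbs_running=True, pbs_reserved=False)
assert not condor_may_claim(pbs_running=False, pbs_reserved=True)
```

Because the check happens at job-assignment time rather than on a fixed nightly window, cycles are reclaimed the moment a machine goes quiet, which is what lets far more otherwise-wasted time be harvested.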
Livny says that he hopes Condor usage increases at other universities and that the now-wasted cycles can be put to good use.
"Other campuses should follow Purdue's leadership," Livny says. "I believe this is the right way for us to move forward, get organized and get resources together, and then go out on the national level and share resources with other institutions."
Purdue's Rosen Center for Advanced Computing publishes a daily graph showing Condor usage.
Source: Purdue University, Steve Tally