April 15, 2011
It seems that nearly every domain has its own range of community-level celebrities—individuals who have blazed new trails in their fields, creating new opportunities for innovation and progress. In the world of distributed computing, Ian Foster, current director of the Computation Institute at the University of Chicago, is one of those stars.
Often referred to as the “father of grid computing” (although calling him that in person would have been awkward; he’s quite a humble sort), Ian has been instrumental in the development of distributed computing as we know it today. Grid computing, and now cloud computing, have developed along their course thanks in part to the progress made by Foster and his team.
Foster took time out of his busy schedule at Argonne National Laboratory this week to speak about a range of important trends in research and computing. We spoke during the GlobusWORLD event, which focused on the user and contributor communities behind the Globus Toolkit, a tool Foster helped develop. Aside from the grid-building software at the heart of the summit, the Globus group introduced Globus Online, a new service to aid in massive file transfers, which Foster’s colleague Steve Tuecke describes in another piece of our video series.
Settle in for just a tick over fifteen minutes as Ian Foster shares some of the insights he’s collected over the last couple of decades.
While we did touch on some of the problems the Globus team hopes to address with its most recent updates, the focus here was the broader picture of how grid, and now cloud, computing are shifting research and science paradigms in particular. In both the historical and current context, key developments in grids and clouds are putting an increasing number of powerful tools in the hands of researchers, helping them focus on their core mission rather than also having to become IT experts.
Ian’s thoughts on the relationship between grid and cloud computing are worth noting and have, in some ways, been addressed in a recent interview that Rich Wellner conducted on behalf of HPC in the Cloud. Some of our questions built on these ideas, particularly around the notion of “Cloud = Hosting / Grid = Federation” that Foster describes.
We also touched on a number of issues we try to encompass here at HPC in the Cloud. For instance, how can new technologies benefit research and innovation by simplifying the compute end of the process? In other words, as the amount of data researchers must contend with grows, and the problems of managing it grow in parallel, how can advances at the software and hardware levels untangle their workflows and let them get back to their main focus, which isn't IT?
While the Globus team, with Foster in the lead, has made some unique technological progress, what they (and others at labs and universities worldwide) are doing goes far beyond the mere creation of tools for moving data or accessing distributed resources: they are setting the stage for the next great leap in computationally driven innovation for science and thus, ultimately, for human progress.