June 10, 2011
Not long ago, a user on Quora asked industry experts what happened to grid computing, noting that it seemed to be all over the news just five years ago but has since been overtaken by cloud computing. The user wondered if there was something different about clouds that made them better, or if grid computing had simply died.
Grid luminary Ian Foster, of the Computation Institute and Argonne National Laboratory, jumped into the fray to address this question, noting, as he has during our recent conversations, that when it comes down to it, grid is about federation while cloud is more about outsourcing.
Foster contends that grid computing is thriving in scientific research, especially where there is a pressing need to distribute vast amounts of data and to enable on-demand access. He points to a number of long-running grid computing infrastructures, such as TeraGrid, stating that resources like these have been beneficial in cancer research, neuroscience, high-energy physics, astronomy, and beyond.
Foster goes on to write the following:
In industry, the term "grid computing" has been used, rather oddly, as sort of a synonym for parallel computing (e.g., Oracle 10g) and sometimes to mean what the BOINC guys used to call (confusingly) "distributed computing"--i.e., harnessing idle desktops.
To use the electric power grid analogy, cloud is really about getting the supply side right, driven by new sources of demand and enabled by good distribution (broadband deployment). Scientific grids never do a really great job of supply: the economic incentives aren't right. Thus I like to say that "cloud is about outsourcing; grid is about federation."
As to whether cloud is just a renaming of grid: maybe both are a renaming of utility computing, as described by Doug Parkhill in his 1966 book, The Challenge of the Computer Utility.
For more on this, check out our video series featuring Dr. Foster and a number of others who spoke about the cloud/grid differences and what new developments are shaping both.
Full story at Quora