July 09, 2007
With half of the year now officially behind us, it seems like a perfect time to revisit some of the best and most-read articles that have appeared in GRIDtoday thus far. None-too-surprisingly, perhaps, these articles span the entire spectrum of the grid community, from the research world and the TeraGrid to the ongoing migration of old-school grid vendors into the world of datacenter management. And you don’t have to look too closely to spot the trends that are emerging in the areas covered by these articles.
As I have stated before (not that you needed me to tell you), grid computing is soooo much more than simply connecting a bunch of PCs and letting them run wild on some HPC applications. It is now all about connecting everything and sharing and managing data across any available resources. Grids in areas like health care are serving as global repositories for medical data that allow collaboration among peers worldwide, and in the commercial sector, some vendors who made their names on grid computing don't even like uttering the term, instead opting for a variety of terms based around virtualization.
It certainly has been an interesting six months in the world of distributed computing, so, in case you missed them the first time, here are some previous features, in no particular order, that tell the story better than I can in this limited space:
On top of these articles, I also want to direct your attention to several of my columns from the first half of the year, which help convey my thoughts on some of the trends I have seen taking place thus far in 2007:
1. “A Whole Lotta Java”
Of course, we can’t live in the past, so this week’s issue also features two articles that focus on the future, with the implication being that there might be more to the relationship between SOA and grid computing than initially meets the eye. Srikanth Seshadri of Torry Harris Business Solutions starts things out with his overview of SOA and where it fits into today’s distributed IT infrastructures. While it might seem elementary to anyone with a thorough understanding of SOA, Seshadri does a great job of making the case for the technology’s applicability with some specific business examples.
Taking it a step further, Fermin Castro of Oracle suggests in his piece "SOA and Grid: A Successful Marriage of Two Paradigms" that because SOA is the cure for superfluous software and grid computing is the cure for superfluous hardware, they are near-perfect complements to one another. He also relies on a solid enterprise example to illustrate his point and convince readers that SOA-enabled apps running on an enterprise grid maximize the efficiency and effectiveness of both paradigms. If we're to accept the old adage of "where there's smoke, there's fire," then perhaps Castro is onto something. After all, grid computing has seen its fair share of hype in the past few years, and SOA's hype might not have peaked yet, so maybe the considerable amounts of smoke produced by both paradigms really do lead to something terrific when taken together.
As for what the rest of the year will hold, only time will tell. Grid, virtualization and SOA all will be hot topics at the upcoming Next-Generation Data Center (NGDC) conference in San Francisco, and although we've seen this convergence coming for some time, this inaugural event might help to cement its validity in the minds of any doubters. We also can look forward to seeing a new Open Grid Forum president named within the next few months, and that could have some major implications, as well. Current president Mark Linesch has done a lot of work to bridge the gap between the academic and enterprise worlds, and whoever fills his shoes will have to continue this work in order to gain (or regain) confidence from the business world. As NGDC illustrates, the grid computing water is muddying rapidly as it mixes with other hot technologies for the datacenter, and it is of the utmost importance for the OGF to remain relevant if we are ever going to see the interoperability so many are seeking.
It should be an interesting six months, to say the least.
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at email@example.com.
Posted by Derrick Harris - July 09, 2007 @ 11:12 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.