April 02, 2007
The world of grid computing encompasses many aspects, and I feel that many of them are well represented in this week's issue. For example, one (oft-forgotten) aspect of grid computing is the hardware that makes up high-performance environments. In our feature article, Tom Gibbs takes a look at the evolution of processors, and how history seems to be repeating itself.
Drawing inspiration from the 1985 blockbuster "Back to the Future," Gibbs' article of the same name takes readers on a journey through time, showing how today's multi-core processors came to be, and how the issues they present for software developers mirror those presented by processor advancements in generations past. Of course, this being GRIDtoday, Gibbs ties the whole issue into what it means for grid computing -- and what it means, he surmises, is that grids will become increasingly important as Moore's Law comes face-to-face with the reality that chip performance can't grow at exponential rates forever. If you've read any of Gibbs' other articles in this publication, I'm sure you're just aching to see what he has to say on this subject. If, however, you haven't been keeping up, this is as good a time as any to find out what Tom Gibbs is all about.
Moving on to another aspect of grid computing -- commercial applications of the technology -- we take a closer look at the acquisition of Tangosol by Oracle (which, last I heard, might have been for as much as $120 million). I got a chance to speak with an analyst (Massimo Pezzini of Gartner) and two of Tangosol's competitors (Geva Perry of GigaSpaces and Sam Charrington of Appistry) about what this news means for them personally and for the industry at large. The most interesting angle in this whole story might be the notion -- one Oracle itself has confirmed -- that Oracle will use Tangosol's technology as its foot in the extreme transaction processing door. The market for this type of technology seems to be growing as more and more companies try to follow in the footsteps of Web giants like Google and Amazon, and Tangosol's Coherence solution certainly gives Oracle a viable, distributed alternative to its traditional database, which isn't necessarily cut out for XTP. That said, not everyone thinks Oracle made the best possible decision. And then there is the possibility that other major software players might be spurred into buying similar technologies for themselves. All in all, it's an interesting story playing out in an emerging market, and although Oracle and Tangosol aren't talking too much yet, there is a lot to be learned from hearing what the industry has to say.
Of course, these two features are just the tip of the iceberg, as we have stories and announcements spanning the spectrum of grid. On the scientific applications side, we have two TeraGrid announcements (click here and here), and we have more hardware news with Dell's Cloud Computing System. Moving on to Web services, we have two security specs being ratified as OASIS standards, and AMD supporting distributed management. In the world of high-speed research networks, we have ORION upgrading its infrastructure, and on the storage side of things, we have Oracle and Sun transforming data warehousing by offering solutions that combine technology from both vendors.
Looking forward, expect to see more on why DataSynapse has moved away from "grid," as well as some lead-in coverage of the upcoming OGF20/EGEE User Forum, which promises to be a highly attended and, hopefully, very productive event. On that note, I should point out that there are a lot of conferences coming up in the next few months, and potentially a lot of news, and we'll do our best to make sure we not only cover the news coming out of them, but also offer deeper insight where possible.
So, until next week ...
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - April 02, 2007 @ 11:26 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluid channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational loads at peak times that cannot be handled by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013 |
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013 |
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges -- and opportunities -- afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so using technologies that deliver affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.