April 30, 2007
First things first: Just as the definition and usage models for grid computing continue to evolve, so does GRIDtoday. Take, for example, our old, reliable table of contents, which has undergone some noticeable changes this week in terms of section headings. While it might not seem like a big deal, we believe this new, dynamic list of sections better represents today's grid landscape and also serves as notice for things to come. The GRIDtoday team is dedicated to making our Web site as useful as possible to readers, and the updated sections are just one aspect of the evolution of the publication.
Aesthetics aside, however, this week's issue also features a lot of interesting articles and announcements. In my humble opinion, though, the most notable is the article bearing my byline. When I read a few weeks back that Platform was upping its investment in financial services, I knew there was a story there if I looked a little deeper. It turns out I was correct. Not only did I get the hows and whys of the news from Songnian Zhou and Jim Mancuso, but I also got the scoop on Platform's vision of where grid is headed. Not surprisingly, Zhou's beliefs mirror what I've been seeing over the past year or so: grid making its way into the datacenter and encompassing far more applications than those of the traditional HPC variety. By speaking with Thanos Mitsolides of Lehman Brothers, one of Platform's financial services customers, I heard firsthand how Lehman's vision for its grid coincides with what Zhou is predicting. I really suggest reading this article, as it not only gives you a taste of how financial services companies are currently benefiting from grid computing, but also points to several ways in which they, along with users across many markets, will take advantage of grid's evolution in the years to come.
Of course, if you've ever spoken with Zhou, Platform's outspoken founder and CEO, you know that he has no shortage of things to say, and he gave me something to think about that could affect any future definitions of "high-performance" or "high-productivity" computing. In the coming months, Zhou told me, we can expect to see announcements around the availability of cluster servers -- that is, several boxes packaged together with the hardware, interconnects, operating system, software stack, etc., all included. Platform software, he said, could be a big part of that software stack. These cluster servers, he said, will look like SMP machines, but will cost only 10-15 percent as much.
Adoption of these types of machines would seem to go hand-in-hand with the trend of smaller companies implementing more-focused, less-costly grid-like infrastructures to handle their business-critical applications. This bottom-up approach, coupled with the top-down approach of large companies deploying enterprise grids and large clusters, could potentially bring distributed, or at least parallel, computing capabilities to the majority of enterprises, both large and small. Perhaps it's a pipe dream, but if it materializes to some degree, it will go a long way toward validating the evangelizing many of us have been doing over the past few years.
Moving from the future to the past, I've read numerous blogs this week bemoaning the shutdown of United Devices' Grid.org site. While the reason given for laying to rest the projects being undertaken on the site is vague at best -- the project "has completed its mission to demonstrate the viability and benefits of large-scale Internet-based grid computing" -- my assumption is that UD simply has much bigger fish to fry than Grid.org. After all, just like grid computing, UD has come a long way since the days of scavenging cycles from PCs. I'm not saying that model isn't valuable -- heck, a lot of great work is being done by countless projects worldwide -- but for a company working to transform its image (UD now calls itself "The Experts in Application-Centric Virtualization") as well as its product set, there might be a better use for those resources. Plus, with BOINC carrying the volunteer computing torch, now is as good a time as any for Grid.org to take its leave.
As for the rest of this week's issue, there are just too many noteworthy announcements to single any out. The "In the Enterprise," "Scientific Applications," "ISV Applications" and "SOA/Web Services" sections are particularly strong, but I would peruse the whole thing just to be sure.
Comments about GRIDtoday are welcomed and encouraged. Write to me, Derrick Harris, at firstname.lastname@example.org.
Posted by Derrick Harris - April 30, 2007 @ 11:04 AM, Pacific Daylight Time
Derrick Harris is the Editor of On-Demand Enterprise
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources to address large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational workloads at peak times that cannot be handled by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it notes acts as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types, including both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.