June 16, 2008
It looks like we have to say goodbye to our good old grids -- at least to all those beautiful features and capabilities envisioned 10 years ago, when grids were supposed to evolve toward coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, and even to extend beyond their scientific scope. It is a great vision, but it is becoming more and more obvious that making it happen will take much more time and effort than originally anticipated.
During the past 10 years, we have seen hundreds of grid projects come and go, passing away after government funding ran dry. Most of these projects did not have a realistic (or pragmatic) sustainability strategy, let alone viable business and operational models for their infrastructures, tools, applications and services, or their intended users. Often, the only asset left after a project ended was the hands-on grid expertise of the project partners, which certainly is highly valuable but in and of itself does not justify all the effort and funding.
I'm sorry to say it, but, so far, grids have not kept their promise. We are stuck with grids in Gartner's "Trough of Disillusionment."
But maybe I am too pessimistic, or too impatient. Maybe grids by their very nature are so complex to design, build and maintain, and applications are so cumbersome to grid-enable and run, that it will take another 10 years of trial and error (and re-writing grid middleware?) to find the right path through the labyrinth of coming and going technologies and paradigms -- utility computing, autonomic computing, ASP, SOA, SOI, SaaS, PaaS, HaaS, outsourcing, hosting, virtualization, Web 2.0, mashups ... you name it. And just when we think we finally got it right, the technology, and even the cultural landscape, changes again.
So, here's the built-in problem: many of our grids (architecture, technology) are simply so complex that it is almost impossible to adjust them fast enough to keep pace with the ever-changing IT landscape. Remember the transition, five years ago, from proprietary grid middleware stacks to service-oriented architectures, and then to Web services? This killed many grid projects (e.g., UK e-Science projects) -- literally about $100 million worth -- in the midst of their learning and development curve. Was it worth this $100 million? Do we really have better (in the sense of interoperable, user-friendly, flexible, dynamic, etc.) grid middleware today? Looking at Web services, for example, do we really have to expose all the details of the infrastructure to the user for the sake of maximum flexibility, which is then left to the user to manage? Isn't this kind of explicit flexibility the actual reason for the exponential increase in effort needed to adjust our grid systems to ever-changing technologies and strategies?
And what about cloud computing? Clouds are easier to deploy, more user-friendly, more service-oriented, and more on-demand. Still, if clouds want to replace grids, they will face challenges similar to those that grids experienced. Even with simple clouds (like Amazon's EC2), challenges arise when it comes to reputation and trust. Who knows what really happens to your application and data behind the portal, inside the Amazon or Google clouds? Many of the grid showstoppers we discussed for so long exist for clouds, too -- so why are they not being discussed with as much passion this time around? Perhaps because our common sense tells us companies like Amazon and Google are not going to risk their reputations and destroy a potentially big business.
When we think of grids, we immediately think of networks, resources and middleware, in all their wonderful details. When we think of clouds, we think of an elastic service for remote computing and storage -- simple, user-friendly, delivered at your fingertips. William Fellows, principal analyst at The 451 Group, says "clouds are grids done properly," which comes close to the common thinking today. A bit closer to reality (and more in the context of grids), today's clouds (a la Amazon) are service nodes sitting on the Internet, which tomorrow can become compute and storage nodes in your grid for simple tasks of your application workflow. Clouds can be a very useful utility that you can plug into your grid. Fellows calls them the "Third Way," a flexible option that can sit between your in-house infrastructure services on one hand and the complete outsourcing model on the other hand: "Utility 2.0."
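The idea of a cloud as a service node you can "plug into your grid" can be sketched in a few lines. This is a purely illustrative toy, not any real grid middleware or cloud API: a workflow engine sees in-house grid resources and an on-demand cloud service through one common interface, and routes simple, self-contained steps (the kind the article says today's clouds can handle) to the cloud while the rest stays on the grid. All class and function names here are hypothetical.

```python
from abc import ABC, abstractmethod

class ComputeNode(ABC):
    """Common interface: a grid resource and a cloud service look identical to the workflow."""
    @abstractmethod
    def run(self, task: str) -> str: ...

class LocalGridNode(ComputeNode):
    """In-house grid resource."""
    def run(self, task: str) -> str:
        return f"grid:{task}"

class CloudNode(ComputeNode):
    """Stand-in for an on-demand service a la EC2; illustrative only."""
    def run(self, task: str) -> str:
        return f"cloud:{task}"

def run_workflow(steps, cloud_eligible):
    """Route simple, cloud-eligible steps to the cloud node; keep the rest on the grid."""
    grid, cloud = LocalGridNode(), CloudNode()
    return [(cloud if step in cloud_eligible else grid).run(step) for step in steps]

print(run_workflow(["mesh", "solve", "render"], cloud_eligible={"render"}))
# → ['grid:mesh', 'grid:solve', 'cloud:render']
```

The design point is the shared interface: the workflow never needs to know whether a step landed in-house or in the "Third Way" utility, which is exactly what makes the cloud a pluggable component rather than a replacement.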
But these clouds are not (yet?) ready for a full-blown, complex grid-enabled application workflow, which you usually find in enterprises and computational science today, and perhaps they never will be. Even a single grid service running in a cloud image (or using a service in the cloud) will very quickly face many of the roadblocks we know from grids today.
Concerning cloud computing, the real innovators to me are not Amazon and Google (sorry), who have tons of resources sitting idle in their datacenters while, at the same time, the community is thinking about new ways of doing computing. For me, the real innovator was Sun when, in 2004, it built its Sun Grid from scratch, based on the vision that "the network is the computer." As with other technology trailblazers, however, Sun paid a high price for being first and doing all the experiments and evangelization -- but its reputation as an innovator is here to stay. Sun Grid's successor, Network.com, is very popular among its few die-hard clients. This is not only because it is an easy-to-use technology, but especially because of its innovative early users (such as CDO2), and because of the instant support users get from the Sun team.
A similar promising example is DEISA (Distributed European Infrastructure for Supercomputing Applications) with its DECI (DEISA Extreme Computing Initiative). Why is DECI so successful in offering millions of supercomputing cycles to the European e-Science community? There are several reasons, in my opinion:
If all this is here to stay, and the currently funded activities will be taken over by the individual supercomputer centers, DEISA will have a good chance to exist for a long time, even after the funding runs dry. Then we might end up with a DEISA cloud, which will become an external HPC node within your grid application workflow.
So what is needed to make grids successful? Given what we learned from the recent past, we should lower our expectations in the first place. Then, we need to rethink most of our grid architectures, which often can't match the architectures of large software projects in industry. The goal should be to reduce complexity dramatically -- complexity in the middleware, services, access for the user, and in the claim for universality. We need to focus on specific e-infrastructures for well-suited applications, specific application areas, targeted communities, etc. When building larger, more general grids, we might think of a grid of grids: a hierarchy that leaves as much independence as possible to the smaller grids (or grid project partners), leaves the coordinating functions to the overarching grid, and thus bypasses the mental, social, and political barriers that usually arise through direct integration. In that way, the European Grid Initiative (EGI), for example, might have a chance -- if the 38 National Grid Initiatives (NGIs) agree on which grid layer in the hierarchy is doing what.
But what if the grids operated by the NGIs are not sustainable? Combining 38 complex systems does not easily lead to simplicity, nor does it allow a cloud-like overlay. A good question for EGI (and certainly for OGF, as well): How simple would it be to have the NGIs wrap their grids as clouds, federating these clouds into a European "cloud of clouds" metasystem?
The good news is that clouds will help grids to survive. They teach grids that in order to be widely accepted and thus sustainable, they have to be simple, user-friendly, service-oriented, scalable, on-demand, SLA-driven, with simple APIs, and so on -- just like clouds.
Clouds will become dynamic components of enterprise and research grids, adding an "external" dimension of business flexibility by enhancing their home capacity whenever needed, on demand. Existing businesses will use them for their peak demands and for new projects; service providers will host their applications on them and provide software-as-a-service; and start-ups will integrate them in their offerings without the need to buy resources upfront. Setting up new Web 2.0 communities will become very easy.
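The "enhancing their home capacity whenever needed" idea is what later came to be called cloud bursting, and it reduces to a very simple scheduling rule. Here is a minimal sketch, with hypothetical names and no real provider API: fill in-house capacity first, and only the overflow (peak demand) spills to the on-demand cloud.

```python
def schedule(jobs, local_capacity):
    """Place jobs in-house until capacity is exhausted; overflow bursts to the cloud.

    jobs: ordered list of job names (order stands in for arrival/priority).
    local_capacity: number of jobs the in-house infrastructure can run.
    Returns a dict mapping each job to 'in-house' or 'cloud'.
    """
    placement = {}
    for i, job in enumerate(jobs):
        placement[job] = "in-house" if i < local_capacity else "cloud"
    return placement

# Three local slots; the two peak-demand jobs burst to the cloud.
print(schedule(["j1", "j2", "j3", "j4", "j5"], local_capacity=3))
# → {'j1': 'in-house', 'j2': 'in-house', 'j3': 'in-house', 'j4': 'cloud', 'j5': 'cloud'}
```

In steady state nothing leaves the building and no cloud resources are paid for; the external dimension of flexibility only kicks in at the peaks, which is exactly the economic argument the paragraph makes.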
With this sea change ahead of us, it will remain strategically important for businesses and sciences to support the work of the Open Grid Forum (OGF), because only standards will enable the easy building of e-infrastructures from many different components and the transition toward an agile platform for a wide variety of services. Standards developed in OGF guarantee interoperation of the components best suited for your infrastructure and your application, and thus reduce dependence on proprietary building blocks, keep costs under control and increase business flexibility.
What all of this means is that grids will not disappear; they will only get cloudier, making for a bright future for ICT. Concerning the naming, I suggest a cloudy grid is still a grid, just as a cloudy day is still a day.