August 21, 2006
Strong competitive pressures demand that BestWidget Inc. reduce the time to develop the next version of its best-selling product by half -- while also improving quality, reducing manufacturing costs and ensuring adherence to environmental standards. Achieving this goal requires that the design team, spanning five locations across the globe, turn around design revisions four times faster -- while also performing an order of magnitude more testing and verification to increase product quality.
If we think of the innovative enterprise as a high-performance automobile, then our goal in addressing what, where, when and why is to ensure that fuel (computing, data and other resources) is delivered to its engine (the innovators) when needed -- not in a best-effort fashion, or after a multi-week manual provisioning process.
In this way, we can ensure that BestWidget designers can access and share data resources quickly, perform computations rapidly, and above all count on the availability of resources as they schedule their work. The company itself can then deliver, across all competing tasks, the highest-quality products and services consistent with its business priorities and objectives and with the resources available.
Enabling this agility requires new capabilities. It requires capacity planning mechanisms for matching supply and demand while taking into account constraints specified as business policies at each level of the infrastructure (ultimately, as in manufacturing supply chains, demand should drive resource planning and scheduling, within policy constraints, to deliver optimal service levels). It requires resource configuration, allocation and scheduling mechanisms to ensure that diverse and distributed assets throughout the enterprise are delivered as and when needed. It also requires monitoring and management mechanisms to track usage, to ensure that demands are met, and to diagnose and correct problems as they occur. Finally, these different mechanisms need to be integrated with enterprise IT infrastructure and tools.
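To make this concrete, here is a minimal sketch in Python of the kind of policy-constrained matching of demand to supply described above. The names, sites, and the 80% utilization threshold are entirely hypothetical, invented for illustration rather than drawn from any particular product: requests are placed on resources only when both raw capacity and business policy permit, and unmet demand is deferred rather than served best-effort.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpus: int           # total capacity, in cores
    site: str
    allocated: int = 0  # cores currently committed

    def available(self) -> int:
        return self.cpus - self.allocated

@dataclass
class Request:
    project: str
    cpus: int
    priority: int       # higher value = more business-critical

def policy_allows(req: Request, res: Resource) -> bool:
    """A stand-in business policy: jobs may run only at approved
    sites, and no resource may be driven past 80% utilization."""
    approved_sites = {"chicago", "munich", "singapore"}
    within_cap = (res.allocated + req.cpus) <= 0.8 * res.cpus
    return res.site in approved_sites and within_cap

def schedule(requests: list[Request],
             resources: list[Resource]) -> dict[str, str]:
    """Match demand to supply, highest priority first, honoring
    both available capacity and the policy predicate above."""
    placements: dict[str, str] = {}
    for req in sorted(requests, key=lambda r: -r.priority):
        for res in resources:
            if res.available() >= req.cpus and policy_allows(req, res):
                res.allocated += req.cpus
                placements[req.project] = res.name
                break
        else:
            # Demand exceeds policy-bounded supply: defer it,
            # rather than silently degrading to best effort.
            placements[req.project] = "deferred"
    return placements

if __name__ == "__main__":
    pool = [Resource("cluster-a", 64, "chicago"),
            Resource("cluster-b", 32, "munich")]
    demand = [Request("widget-cfd", 40, priority=9),
              Request("widget-render", 20, priority=5)]
    print(schedule(demand, pool))
    # -> {'widget-cfd': 'cluster-a', 'widget-render': 'cluster-b'}
```

A production system would, of course, layer the monitoring, diagnosis, and enterprise-tool integration described above on top of this core matching loop.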
No existing technology addresses all these needs. Product lifecycle management tools address information management requirements, but not the delivery of the computing environments needed to generate or process data. Cluster management tools and workflow tools address elements of workgroup operation and process, but not the larger questions of information delivery and computation scheduling across concurrent activities. Virtualization tools address the configuration of computational environments, but not other aspects of the physical IT infrastructure. Thus, enterprises are left attempting to support the innovation lifecycle by cobbling together disconnected proprietary tools in an ad hoc fashion. The result is non-standard, non-scalable, difficult-to-replicate and difficult-to-manage solutions with limited ability to respond to dynamic business conditions.
Where then should we look for solutions? I believe that Grid technologies have an important role to play. This claim should not be surprising. After all, members of the Grid community have been working for close to a decade on precisely the issues discussed here, with considerable success. For example, the LIGO gravitational-wave observatory delivers 1 TB of data a day to eight sites around the world, and has created more than 120 million file replicas to date; the U.S. TeraGrid national infrastructure enables flexible, policy-driven access to computing and storage resources at eight science data centers; and the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG) provides access to data and services at 60 cancer centers. In each case, Grid technology (specifically, open source Globus software in these examples) is being used to accelerate the pace of innovation.
In the next year or two, I expect that we will see significant progress in the creation and application of IT infrastructures architected specifically to facilitate innovation, and a shift from thinking of IT solely as a cost center to recognizing IT as a value enabler. In the process, we will also see a significant change in how we think about the role of Grid technologies in creating robust, scalable, and adaptive enterprise IT infrastructures.
About Ian Foster
Dr. Ian Foster is associate director of the Mathematics and Computer Science Division at Argonne National Laboratory and the Arthur Holly Compton Professor of Computer Science at the University of Chicago. He created the Distributed Systems Lab, spanning both institutions, which has pioneered key Grid concepts, developed the Globus Toolkit (the most widely deployed Grid software), and led the development of successful Grid applications across the sciences. Foster is also the chief open source strategist and a board member of Univa.