September 17, 2007
In this interview, new OGF president Craig Lee (of Aerospace Corporation) discusses a variety of topics, ranging from what he thinks will be his key focuses and challenges during his tenure to the importance of working with other bodies and embracing new technology areas.
GRIDtoday: Congratulations on your appointment as the next president of OGF. Tell us a little bit about your background and why you wanted the job?
CRAIG LEE: My background is in parallel and distributed processing and over the years this naturally led to my involvement in grid computing and the Grid Forum. After discussing the opportunity with the OGF Board, and management at The Aerospace Corporation, I realized that the stars had aligned between my technical interests, my desire to serve the grid community and my corporate responsibilities.
I've seen this field evolve tremendously over the years. In grid computing, as in service-oriented architectures, utility computing, ubiquitous computing, etc., the key issue is the management of shared resources. For any of these technologies to be effective, there must be a critical mass of adoption in several key functional areas, such as catalogs and discovery, job submission, workflow management, resource virtualization and provisioning, and data management.
We must also recognize that there will be a spectrum of solutions to meet different requirements -- from lightweight mash-ups that enable rapid prototyping and deployment to applications that need robust security models and support for virtual organizations. This spectrum of industry needs requires precisely the kind of pervasive adoption efforts that are at the heart of OGF. OGF has a rich history and a bright future and I am excited to be serving as its next president.
Gt: How do you think your background in the aerospace industry, which I assume is very HPC-oriented, will help or hinder your ability to relate to, and in fact relay the promise of grids to, mainstream companies?
LEE: While there certainly is a lot of HPC in the aerospace industry, requirements actually run the gamut. There are datacenters and lots of data repositories. Resource integration and interoperability is a huge issue. Many of these same problems exist in mainstream companies.
With regard to The Aerospace Corporation, in particular, we are a non-profit, federally funded research and development center (FFRDC) for all space-related technologies. This means anything to do with satellites and their ground systems. Existing satellite ground systems are essentially grids, but were individually designed from the ground up and statically configured, with no particular distributed computing standards. There is tremendous momentum to make these systems commercial-off-the-shelf (COTS) through the use of service-oriented architectures to reduce acquisition and operation costs. Hence, the adoption of service grids by the IBMs, Boeings and Lockheed-Martins of the world is a key goal.
Some people may feel that a non-profit is insensitive to market forces, but working at Aerospace Corporation may actually be an advantage when it comes to my work as president of OGF. Our corporate raison d'etre is to facilitate the maturation and adoption of useful technologies for space. When it comes to ground systems, this is not unlike the broad commercial marketplace. I have no other goal but to facilitate the adoption of the best technology as quickly as possible. This means bringing consensus and stability to the technical marketplace.
Gt: Obviously, the grid landscape has evolved quite a bit in the past couple of years -- and a whole lot since the GGF's inception. How important do you think it is for the OGF to stay aligned with changes in the industry?
LEE: Aligning with changes in the industry is a critical goal for OGF as an organization. We've led some of these changes, such as the HPC Profile and the use of JSDL. We've also influenced work in the wider community. The GLUE information model, for instance, was developed for grid entities and we are now working with the DMTF to harmonize with their Common Information Model. We've also adopted technology where necessary and appropriate, such as using WS-Security in the HPC Profile. Another example of alignment with the broader landscape is the recasting of the Open Grid Service Infrastructure (OGSI) to use the emerging Web services specifications.
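For readers unfamiliar with JSDL, a job described in the Job Submission Description Language is just an XML document. The fragment below is a minimal sketch following the JSDL 1.0 specification (GFD.56); the executable and argument shown are illustrative placeholders, not taken from any particular deployment.

```xml
<!-- Minimal JSDL job description (sketch): run /bin/echo with one argument. -->
<jsdl:JobDefinition
    xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
  <jsdl:JobDescription>
    <jsdl:Application>
      <jsdl-posix:POSIXApplication>
        <jsdl-posix:Executable>/bin/echo</jsdl-posix:Executable>
        <jsdl-posix:Argument>hello</jsdl-posix:Argument>
      </jsdl-posix:POSIXApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
```

Because the description is declarative and standardized, the same document can be submitted to any HPC Profile-compliant service, which is precisely the kind of interoperability discussed above.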
The fact is that all approaches to distributed computing require much the same fundamental capabilities, but different organizations in different market segments look at it in different ways. Harmonizing these efforts across organizations and getting a dominant practice in the marketplace is critical. I'm fond of saying it's like getting different "tribes" that all use different nouns and verbs for essentially the same things to talk to one another.
Gt: What are your thoughts on pushing the commercial grid agenda?
LEE: Achieving commercially available grid components and services will enable entirely new areas and applications for research, industry, commerce, government and society. Only a few years ago, the Internet was an academic and scientific domain. Now, billions of people use it for everyday activities. We want and expect grids to produce similar benefits for both industry and research. From a research perspective, commercialization of grid technologies will enable low cost, off-the-shelf capabilities for scientific research and innovation -- in much the same way as clusters did.
From an industry perspective, widely available grid products and services are critical to mainstream adoption. Grids can enable more automated interactions between companies, tighter integration of global operations and enhanced interoperability, all resulting in lower costs and greater competitive advantage. For an information society and economy, the possibilities are tremendous.
In the here-and-now, however, the commercial grid agenda is going to be a multi-faceted issue. Virtualization, service-oriented architectures and storage networks all speak to different commercial segments and are being developed somewhat independently to address specific needs in those different contexts.
Gt: In a recent interview, current president Mark Linesch discussed the relationship between grid computing, virtualization and SOA. What are your thoughts on the importance of these technologies as the OGF continues to evolve? Are there any other complementary or derivative technologies that you think ought to be on the organization's radar in the coming years?
LEE: OGF has started a set of activities whose goals are to harmonize the development of grids, service architectures and virtualization that I fully intend to continue. Server virtualization is having a huge impact on how data centers address the service provisioning problem. It allows them to provision a service through a virtual server on a cluster that can be dynamically assigned on-the-fly. Server virtualization also offers important capabilities for security by being able to isolate malicious processes.
How can we support this same kind of capability at scale in a distributed environment? Grids enable server virtualization to be pooled, aggregated and managed across sites. Grids enable policy-driven usage of these virtual resources whereby loads, completion times and graceful failover can all be transparently managed. OGF's Grid and Virtualization Working Group, for instance, is looking at precisely this intersection between grids and virtualization.
Service-oriented architectures, or simply service architectures, have a natural resonance with grids. The find-bind-use concept is native to both. Again, this is an instance of where different approaches and implementations have to be harmonized. The notion of service objects and data objects has a strong similarity to WSRF, which originated in GGF and then was sent through the OASIS process to get buy-in from the larger Web services community. OGF must forge alliances with other organizations such as the Open SOA Consortium to bring consensus to the marketplace.
Another important development is Web 2.0, which offers an easy way to do rapid prototyping of distributed systems. Precisely because of this simplicity and ease of use, real communities of use will grow up around it. This is especially telling since many people complain that traditional grid tools and toolkits are too complex and cumbersome to install, use and maintain. I think that there needs to be a continuum of tools -- from the easy-to-use, very lightweight mash-up tools of Web 2.0 that have simple security and discovery models, to more complete, traditional grid tools that have robust security models, support for virtual organizations, attribute-based authorization, etc. There should be a growth path between the two extremes whereby additional capabilities can be added as needed.
It's very interesting to note that half of all the registered Web 2.0 URLs are GoogleMaps-related. That is to say, they are geospatial in nature. It's probably no accident that Google is pushing Keyhole Markup Language (KML) through the Open Geospatial Consortium (OGC) standardization process. Equally interesting, and certainly no accident, is that OGF is starting a collaboration with OGC to integrate its standard geospatial tools with grid-based, distributed resource management. To start with, we want to back-end their Web Processing Service (WPS) with grid computing resources to enable large-scale processing. The WPS could also be used as a front-end to interface to multiple grid infrastructures, such as TeraGrid, NAREGI, EGEE, and the United Kingdom's National Grid Service. This would be a serious application driver for both grid and data interoperability issues. When integrated with their Catalogue Service for the Web and Web Map/Feature/Coverage Services, we would enable a whole raft of geospatial applications on a scale not seen before, including things like satellite ground systems. The goal is not just to do science, but to greatly enhance things like operational hurricane forecasting, location-based services and anything to do with putting data on a map.
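To give a concrete flavor of the WPS front-end idea: a WPS 1.0.0 Execute request can be expressed as a simple key-value-pair URL, which a grid back-end would then service. The sketch below builds such a URL; the endpoint and the "Buffer" process name are hypothetical placeholders, not real services.

```python
from urllib.parse import urlencode

# Hypothetical WPS endpoint, for illustration only.
WPS_ENDPOINT = "http://example.org/wps"

def wps_execute_url(process_id, data_inputs):
    """Build a WPS 1.0.0 key-value-pair Execute request URL.

    data_inputs maps input names to values; WPS encodes them as a
    semicolon-separated list in the DataInputs parameter.
    """
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "Identifier": process_id,
        "DataInputs": ";".join(f"{k}={v}" for k, v in data_inputs.items()),
    }
    return WPS_ENDPOINT + "?" + urlencode(params)

url = wps_execute_url("Buffer", {"distance": "10"})
```

The point of the standardized request format is that the client neither knows nor cares whether the processing behind it runs on a single server or is farmed out across TeraGrid, EGEE or another grid infrastructure.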
Gt: What is your expectation of how OGF membership demographics might shift over the next few years, especially in terms of presence of end-users, IT managers, CIOs, vendors, developers, academics, research, industry, etc?
LEE: We definitely want to see more direct involvement by industry while preserving our historical constituency of research and academia. It's certainly true that grid computing grew out of HPC efforts at national labs and universities, but the technologies developed are so fundamental and widely applicable that we have to make every effort to achieve consensus in the marketplace of ideas. In a sense, the fact that there are so many related activities by different groups that are looking at different parts of the elephant is a good problem to have. Getting these different "tribes" to work together requires constant attention. To do this we must engage at every level: from CIOs who are making strategic corporate decisions, to technical project leaders who are where the rubber hits the road.
OGF is also in a unique position to align both world-class research and technical expertise with industrial adoption. Ulf Dahlsten (director of Emerging Technologies and Infrastructures–Applications, Directorate-General for Information Society of the European Commission) has a briefing slide that illustrates the spectrum of technology development from research on one end to commercial products/services on the other, with an "Innovation No Man's Land" in the middle. I firmly believe that OGF's mission is to bridge that no man's land. To that end, we need to ensure the right distribution in the OGF demographics.
Gt: Overall, what do you foresee as the top three issues and goals that will dominate the agenda during your term? What can members of the OGF, and the greater grid community, expect from the OGF during your tenure as president?
LEE: In general, I will be pursuing several agenda items during my term as president.
Gt: Is there anything else you would like to say to our readers?
LEE: I'd certainly like to say that I am honored to be given this opportunity to serve OGF and the grid community. We have a great group of people, all dedicated to accelerating the adoption of grid technologies, but we still have a lot of work ahead of us. The success of OGF depends upon our "volunteer army" and I want to encourage everyone to stay active and engaged.
I would also like to thank Mark Linesch for his tremendous help as I transition into my new role and for his excellent service to OGF for the last three years, which was a period of great change, challenge and opportunity for our community.
Lastly, I invite anyone that is interested in grid and the work of the OGF to come to our next event, OGF21, being held Oct. 15-19 in Seattle. OGF21 will feature an exceptional technical program, workshops on software solutions and scientific applications, and an enterprise track focused on grid use in IT datacenters. More information can be found on the OGF website at www.ogf.org.