In 2001, David De Roure, Nick Jennings and Nigel Shadbolt introduced
the notion of the Semantic Grid, which advocated "the application of
Semantic Web technologies both on and in the Grid." From the
requirements derived from the diverse set of U.K. e-Science
applications, De Roure, Jennings and Shadbolt identified a need for
maximum reuse of software, services, information and knowledge.
Although the basic Grid middleware was originally conceived to hide
the heterogeneity of distributed computing, the authors contended that
users now required "interoperability across time as well as space" to
cope with both anticipated and unanticipated reuse of services,
information and knowledge.
In a new paper, the same authors have revisited the projects of the
U.K. e-Science program, three years on from their original analysis,
to examine whether their expectations have been realized. They now see the
e-Science requirements as a spectrum, with one end characterized by
automation, virtual organizations of services and the digital world,
and the other end characterized by interaction, virtual organizations
of people and the physical world.
From experience with projects such as myGrid and CombeChem, they have abstracted a set of key requirements for the Semantic Grid:
- Resource description, discovery and use.
- Process description and enactment.
- Autonomic behavior.
- Security and trust.
- Information integration.
- Synchronous information streams and fusion.
- Context-aware decision support.
- Support for communities.
- Smart environments.
- Ease of configuration and deployment.
- Integration with legacy IT systems.
They also identify the key technologies that are being used to address
these requirements in some of the U.K. e-Science projects:
- Web services.
- Software agents.
- Ontologies and reasoning.
- Semantic Web services.
Let us look at what these two projects have achieved so far.
The myGrid e-Science project (www.mygrid.org.uk)
is researching high-level middleware to support personalized in silico
experiments in biology. These in silico experiments use databases and
computational analysis rather than laboratory investigations to test
hypotheses. In myGrid, the emphasis is on data intensive experiments
that combine the use of applications and database queries. The system
helps the biologist create complex workflows with which they can
interact and that can also interact with the workflows of other
researchers. Intermediate workflows and data are kept, notes and
thoughts recorded, and different experiments linked together to form a
network of evidence as is currently done in bench laboratory notebooks.
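The notebook-style record keeping described above can be sketched in a few lines. This is an illustrative toy, not myGrid's actual data model; the class and field names (`ExperimentRecord`, `builds_on`, `record_step`) are invented for the example. Each run keeps its intermediate data and notes, and links back to the earlier runs it builds on, forming the network of evidence.

```python
# Toy sketch of a "network of evidence": each experiment run stores its
# intermediate results and free-text notes, and links to the earlier
# runs it builds on. Names here are illustrative, not myGrid's API.
import datetime

class ExperimentRecord:
    def __init__(self, title, builds_on=()):
        self.title = title
        self.created = datetime.datetime.now()
        self.notes = []                   # (step, note) pairs
        self.intermediate_data = {}       # step name -> result
        self.builds_on = list(builds_on)  # links to earlier records

    def record_step(self, step, result, note=""):
        self.intermediate_data[step] = result
        if note:
            self.notes.append((step, note))

run1 = ExperimentRecord("BLAST search on candidate gene")
run1.record_step("blast", ["hit-A", "hit-B"], note="two strong hits")

# A follow-up experiment linked back to run1, extending the evidence network.
run2 = ExperimentRecord("Alignment of hit-A", builds_on=[run1])
run2.record_step("align", "alignment-result")
```

Because every record carries its own provenance links, a later reader can walk from `run2` back to the raw BLAST hits that motivated it, much as one would page back through a bench notebook.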
The computer scientists and biologists in the project have together
developed a detailed set of scenarios for investigation of the genetics
of Graves' disease, an immune disorder causing hyperthyroidism, and of
Williams-Beuren syndrome, a gene deletion disorder that affects
multiple human systems and also causes mental retardation. To implement
its ideas, the project has built a prototype electronic workbench based
on Web Services. They have identified four categories of service:
- External third party services such as databases, computational analyses and simulations, wrapped as Web services.
- Services for forming and executing experiments such as
workflows, information management and distributed database query.
- Services for supporting the e-Science methodology such as provenance and notification.
- Semantic services, such as service registries, ontologies and
ontology management, that enable the user to discover services and
workflows and to manage several different types of metadata.
Some, or all, of these services are then used to support applications and build application services.
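The semantic-services idea in the last category can be illustrated with a small sketch. This is an assumption-laden toy, not myGrid's registry: the concept names and the `ServiceRegistry` API are invented. The point it shows is the one that distinguishes a semantic registry from a plain lookup table: a query for a concept also finds services annotated with any sub-concept of it, using a simple ontology-style hierarchy.

```python
# Illustrative sketch of ontology-backed service discovery (not myGrid's
# actual registry). Services are annotated with a concept term; a query
# for a concept also matches services annotated with its sub-concepts.

# A toy concept hierarchy: child concept -> parent concept.
SUBCLASS_OF = {
    "BlastSearch": "SequenceAnalysis",
    "SequenceAnalysis": "BioinformaticsTask",
}

def concept_and_ancestors(concept):
    """Return the concept plus all of its ancestors in the hierarchy."""
    result = [concept]
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        result.append(concept)
    return result

class ServiceRegistry:
    def __init__(self):
        self._services = []  # (service name, annotated concept) pairs

    def register(self, name, concept):
        self._services.append((name, concept))

    def discover(self, wanted_concept):
        """Find services annotated with the concept or any sub-concept."""
        return [name for name, concept in self._services
                if wanted_concept in concept_and_ancestors(concept)]

registry = ServiceRegistry()
registry.register("ncbi-blast", "BlastSearch")
registry.register("emboss-water", "SequenceAnalysis")
registry.register("pubmed-query", "LiteratureSearch")

# Asking for "SequenceAnalysis" also finds the BLAST service, because
# BlastSearch is declared a sub-concept of SequenceAnalysis.
matches = registry.discover("SequenceAnalysis")
```

A real registry would reason over an OWL ontology with a description-logic reasoner rather than a hand-written dictionary, but the matching behavior sketched here is the essential gain over keyword lookup.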
The project has developed a suite of ontologies (roughly speaking,
agreed vocabularies of terms or concepts) to represent metadata
associated with the different middleware services. Semantic Web
technologies such as DAML+OIL and standards body W3C's Web ontology
language, OWL, then allow the prototype myGrid workbench to operate,
interoperate and reason over these services intelligently. The project
has demonstrated the potential of such an approach to in silico
bioinformatics experiments and is now attempting to produce more robust
semantic components that will allow users to personalize their own
The CombeChem project (www.CombeChem.org)
has the ambitious goal of creating a "Smart Laboratory" for Chemistry
using technologies for automation, semantics and Grid computing. A key
driver for the project is the fact that large volumes of new chemical
data are being created by new high throughput technologies such as
combinatorial chemistry, in which large numbers of new chemical
compounds are synthesized simultaneously. The need for assistance in
organizing, annotating and searching this data is becoming acute. The
multidisciplinary CombeChem team has, therefore, developed a prototype
Smart Laboratory test-bed that integrates chemical structure-property
data resources with a Grid-based computing environment.
The project has explored automated procedures for finding similarities
in solid-state crystal structures across families of compounds and
evaluated new statistical design concepts to improve the efficiency of
combinatorial experiments in the search for new enzymes and
pharmaceutical salts for improved drug delivery. One of the key
concepts of the CombeChem project is "Publication@Source" by which
there is a complete end-to-end connection between the results obtained
at the laboratory bench and the final published analyses. In a sister
project called eBank, raw crystallographic data is annotated with
metadata and "published" by archiving in the U.K. National Data Store
as a "Crystallographic e-Print." Publications can then be linked back
to the raw data for other researchers to access.
In another strand, computer scientists in the SmartTea project have
worked with the CombeChem team to develop an innovative human-centered
system that captures the process of a chemistry experiment from plan to
execution. They have used an analysis of the process of making tea in a
laboratory to develop an electronic lab book replacement.
Using tablet PCs, the system has been successfully tested in a
synthetic organic chemistry laboratory and linked to a flexible
back-end storage system. A key finding was that users needed to feel in
control, and this necessitated a high degree of flexibility in the lab
book user interface. The computer scientists on the team investigated
the representation and storage of human-scale experiment metadata and
introduced an ontology to describe the record of an experiment and a
novel storage system for the data from the electronic lab book.
In the same way that the interfaces needed to be flexible to cope with
whatever chemists wished to record, the back end solutions also needed
to be similarly flexible to store any metadata that might be created.
Their storage system was based on Semantic Web technologies such as RDF
(Resource Description Framework) and Web services. This system was
found to offer far greater flexibility in the types of metadata that
can be stored than a traditional relational database.
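The flexibility argument can be made concrete with a toy triple store. This mimics the shape of RDF in plain Python; the predicates and the `TripleStore` API are invented for illustration and are not the SmartTea storage system. The key property is that any new metadata field is just another (subject, predicate, object) triple, with no schema migration, whereas a relational table would need a new column for each unanticipated field.

```python
# Sketch of why triple-style (RDF-like) storage is schema-flexible:
# every fact is a (subject, predicate, object) triple, so unanticipated
# metadata needs no table alteration. Names here are illustrative.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return matching triples; None acts as a wildcard."""
        return [t for t in self.triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

store = TripleStore()
store.add("exp42", "performedBy", "chemist-1")
store.add("exp42", "usedReagent", "NaCl")
# A metadata field nobody anticipated - no schema change required:
store.add("exp42", "observedOdor", "almonds")
```

A real deployment would use an RDF store queried with a language such as SPARQL, but the wildcard-matching `query` above captures the pattern: the "schema" lives in the predicates of the data itself, not in a fixed table layout.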
Although much of the focus of the Grid community is currently on low
level middleware, it is important not to lose sight of the significant
research challenges for computer scientists to develop high level,
intelligent middleware services. These services must genuinely support
the needs of scientists and allow them to routinely construct secure
Virtual Organizations and to automate the management of the many
Petabytes of scientific data that will be generated in the next few
years in many areas of science. The Semantic Grid is not yet a reality,
but the U.K. e-Science projects are providing a valuable test-bed for
Semantic Web technologies.
© Tony Hey 2005