October 01, 2007
On Sept. 27, IBM and Objectivity sponsored a webinar called "Unleashing the Power of Grid." For anyone thinking about implementing a grid, or anyone just interested in learning more about grid-enabled databases, this webinar should be very informative.
In this webinar, Clifford Spinac, senior enablement architect at IBM, provides an overview of grid computing, from its beginnings in the supercomputing world to its current use in the enterprise. He also discusses the value of grid, outlines the differences between a grid and a cluster, and explains how to enable an application for a grid infrastructure. Following Spinac, Leon Guzenda, chief technology officer and founding member of Objectivity, explains how the Objectivity database is well suited to grid computing.
Spinac defines grid as follows: "Grid computing uses open standards and protocols to enable the virtualization of distributed computing resources to create a single system image across heterogeneous, geographically dispersed IT environments."
Giving a little background, Spinac covers the formation of the Global Grid Forum and its open standards-based Open Grid Services Architecture (OGSA), which defines the basic capabilities of grid: infrastructure services, execution management services, data services, resource management services, security management and information services. From these OGSA services came the Globus Project, an open standards platform for grid.
Spinac goes on to examine the business case for grid. For ISVs, there are a number of advantages to enabling applications for grid: it allows business processes and performance to be optimized so that customers get the maximum capability from the application; it allows applications to be integrated in an on demand environment; it makes more efficient use of heterogeneous resources and lets the application be designed for many different environments; and it helps service level agreements to be met.
For the end-user clients or other customers, grid enables cross-enterprise data access, integration and collaboration. It allows them to accelerate results and shorten time to market through improved asset utilization, which in turn helps to increase operating efficiency and reduce costs. It allows for the integration of stove-piped, or isolated, heterogeneous resources. And grids can strengthen reliability, responsiveness and resiliency, since they provide good failover support and high availability.
Spinac looks at four phases that reflect the history of grid. In phase one, grid was born to meet the needs of the distributed supercomputing world, where it was used primarily by the scientific community. In phase two, grid started to gain traction and standards started to take effect, enabling academia to become more involved. In phase three, grid standards improved and grid-enabled applications started to become more widely available; this is when grid entered the commercial enterprise sector. Phase four is where we are today: the advantages of more advanced grid standards and grid-enabled applications are starting to appear, and grid is becoming an integral part of computing environments.
Various industries are adopting grid and putting mainstream applications into place. Among the application areas, and the industries deploying them, are: business analytics (banking, financial, life sciences, telecommunications); engineering and design (automotive, aerospace, petroleum); research and development (automotive, aerospace, life sciences, higher education, government); enterprise optimization (banking, financial markets, automotive, life sciences, petroleum, government, telecommunications); and government development (government, public sector).
As most of us know, grids are not clusters, and Spinac illustrates the key differences. While both clusters and grids parallelize jobs, grids do so by virtualizing hardware resources (memory, CPU and storage) as the application needs them; the grid allocates resources to applications on an as-needed basis. Clusters tend to use dedicated, usually homogeneous systems, whereas grids are heterogeneous and make resources available as the load on the grid grows. Clusters are generally administered and controlled from one location, while grid nodes are usually managed individually, with the system owner retaining control over each member system. Finally, with clusters, special software must be written to parallelize operations, so parallelism is largely a function of the application code; with grids, applications stand alone and can run unchanged, because the grid infrastructure handles the work of distributing requests across nodes rather than the application having to be aware of them.
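To make the contrast concrete, here is a minimal Python sketch of the as-needed allocation style Spinac describes. It is an illustration only, not anything shown in the webinar; the node names, loads and scheduler class are all hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Node:
    load: float                      # current utilization, 0.0-1.0
    name: str = field(compare=False)
    cpus: int = field(compare=False)

class GridScheduler:
    """Toy grid-style scheduler: each task is handed to whichever
    heterogeneous node is least loaded when the task arrives."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        heapq.heapify(self.nodes)          # ordered by current load

    def submit(self, task, cpu_cost=0.1):
        node = heapq.heappop(self.nodes)   # least-loaded node right now
        node.load += cpu_cost              # resources allocated on demand
        heapq.heappush(self.nodes, node)
        print(f"{task} -> {node.name} (load now {node.load:.2f})")

# A deliberately heterogeneous pool -- grids don't require uniform hardware.
sched = GridScheduler([Node(0.2, "linux-blade", 8),
                       Node(0.5, "unix-server", 16),
                       Node(0.1, "windows-box", 4)])
for task in ["render", "query", "etl"]:
    sched.submit(task)
```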
Spinac also tackles the meaning of grid-enablement, which he describes as "taking advantage of the virtualized grid infrastructure and using it to accelerate processing time or to increase collaboration." In the future, applications will run as OGSA-compliant Web services, taking advantage of other services provided by an on demand operating environment.
The second speaker, Leon Guzenda, provides an overview of Objectivity/DB and shows how it ties in with grid computing. "Objectivity/DB is a distributed database, built to leverage a distributed processing environment," said Guzenda. "It can support complex data-intensive tasks running in a grid environment and provide a single logical view of a federation of databases."
Some of the markets where Objectivity/DB can be found are data-intensive science, defense and security, and high-tech manufacturing. All these markets share two common requirements: managing complex data in very large databases, and supporting high-throughput, computationally heavy systems.
Objectivity has been working with grids since the early '90s, starting in the scientific community with both data grids and large clusters.
Objectivity/DB has been certified by IBM at the highest level of grid-enablement -- Level 6. This certification ensures that Objectivity/DB can run in a grid-enabled, service-oriented architecture environment. "Objectivity/DB is particularly well-suited to a service-oriented architecture deployment, as the data and the query servers can be located close to the physical resources they need to access, or they can be clustered on high-performance servers that have high-bandwidth access to the physical data," said Guzenda.
Guzenda explains that there is no need for a centralized server because the parallel query engine can be customized to optimize the way that search tasks are performed in the grid.
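Guzenda doesn't walk through code, but the fan-out-and-merge pattern behind a decentralized parallel query engine can be sketched briefly. The following Python sketch is a rough illustration under assumed names; the server list and the query_server placeholder are hypothetical, not Objectivity/DB's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data servers, each co-located with a slice of the federation.
DATA_SERVERS = ["ds-east", "ds-west", "ds-europe"]

def query_server(server, predicate):
    # Placeholder: a real engine would evaluate the predicate on the
    # server that holds the data and return only the matching objects.
    return [f"{server}: match for '{predicate}'"]

def parallel_query(predicate):
    """Fan the same predicate out to every data server at once, then
    merge the partial results -- no centralized query server needed."""
    with ThreadPoolExecutor(max_workers=len(DATA_SERVERS)) as pool:
        partials = pool.map(query_server, DATA_SERVERS,
                            [predicate] * len(DATA_SERVERS))
        return [row for part in partials for row in part]

print(parallel_query("temperature > 100"))
```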
"The same application that can run on a laptop, a workstation, a server cluster or a service-oriented architecture environment can run unchanged in a grid environment," states Guzada.
A question and answer session rounded out the webinar. One notable question was: "How does grid maximize performance of mission-critical applications?"
Spinac explained that grid virtualizes the whole environment in a way that makes the best use of resources on an as-needed, as-available basis. When applications are not fully utilizing the grid, the nodes can be used for other applications and purposes, so they aren't tied up by the grid. And if any node goes down, it doesn't matter, because the provisioning of resources is dynamic and virtualized: any grid node that is available can be assigned the work. Grid thus maximizes the availability and utilization of all the different resources.
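That failover behavior is easy to picture in miniature. Here is a minimal Python sketch of the idea, under assumed names (the Node class and its execute method are hypothetical): when one node is down, the work is simply handed to any other available node.

```python
import random

class Node:
    def __init__(self, name, alive=True):
        self.name, self.alive = name, alive

    def execute(self, task):
        if not self.alive:
            raise ConnectionError(self.name)   # node is unreachable
        return f"{task} completed on {self.name}"

def run_with_failover(task, nodes):
    """Dynamic provisioning in miniature: the failure of one node is
    invisible to the application, which just lands on another node."""
    candidates = list(nodes)
    random.shuffle(candidates)      # any available node may get the work
    for node in candidates:
        try:
            return node.execute(task)
        except ConnectionError:
            continue                # node down -- try the next one
    raise RuntimeError(f"no node available for {task!r}")

pool = [Node("node-1", alive=False), Node("node-2"), Node("node-3")]
print(run_with_failover("nightly-report", pool))
```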
Guzenda followed up by saying that in the scientific environment a lot of the data is partially replicated, so that when the scientists are working on the data, they don't have to go back across the network to the original location. The optimization of storage location and resources is very flexible in a grid environment, making it easier to manage.
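Replica selection of the kind Guzenda mentions can likewise be sketched in a few lines. This is a toy illustration with invented names; the catalog contents and the distance metric are assumptions, not drawn from the webinar.

```python
# Hypothetical replica catalog: logical dataset -> (location, network
# distance from this site, e.g. measured latency in milliseconds).
REPLICAS = {
    "collision-events": [
        ("archive.origin-site.example", 120),  # original, far away
        ("cache.local-site.example", 2),       # partial replica on site
    ],
}

def nearest_replica(dataset):
    """Pick the closest copy so analysis jobs don't have to go back
    across the network to the data's original location."""
    location, _distance = min(REPLICAS[dataset], key=lambda r: r[1])
    return location

print(nearest_replica("collision-events"))
```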
To view the webinar for yourself, go to www.objectivity.com/gridwebinar.