February 19, 2007
The Open Grid Forum (OGF) recently held its Second Workshop on Reliability and Robustness in Grid Computing Systems at OGF19 in Chapel Hill, NC on January 31, 2007. The workshop, organized through the eScience OGF function, brought together researchers and engineers actively working on Grid computing systems with the goal of promoting a better understanding of reliability issues and requirements. The focus of this workshop was on strategies and techniques for promoting Grid system reliability.
An important area of interest for the workshop was reliable, fault-tolerant Grid system architectures. Presentations were given on a commercial Grid system architecture in which a fail-over strategy is used to ensure service availability and reliability, a Grid system design for monitoring and dynamic reconfiguration in response to component failure, and a fault-tolerant management architecture for web and OGSA services with scalable overhead costs.
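The fail-over strategy mentioned above can be illustrated with a minimal sketch. This is not the architecture presented at the workshop, only a generic pattern: a client tries a primary service replica and, on failure, falls back to a backup replica. The function and endpoint names are hypothetical, and plain Python callables stand in for what would be RPC or HTTP calls in a real Grid service.

```python
import time


def call_with_failover(endpoints, request, attempts_per_endpoint=2, delay=0.1):
    """Try each replica endpoint in order; fail over on error.

    `endpoints` is a list of callables standing in for service replicas
    (primary first, then backups). Each replica is retried a few times
    before the client fails over to the next one.
    """
    last_error = None
    for endpoint in endpoints:
        for _ in range(attempts_per_endpoint):
            try:
                return endpoint(request)
            except Exception as err:  # replica unavailable: retry, then fail over
                last_error = err
                time.sleep(delay)
    raise RuntimeError("all replicas failed") from last_error
```

In practice the retry count and delay would be tuned to the service's availability targets, and a monitoring component (as in the reconfiguration design described above) might reorder or replace the endpoint list dynamically.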
Another area of focus was the impact on Grid reliability of interactions among Grid services. Work was reported on how interconnections between software components emerge to form clusters around key hub components, and on the potential impact of this phenomenon on Grid COTS reliability. Recommendations were also presented on strategies for enhancing the reliability of OGSA implementations in the face of complex service interactions. Copies of presentations are available at http://gridreliability.nist.gov/.
Workshop participants made suggestions on how to promote Grid system reliability, such as developing guidelines to accompany specifications that would facilitate reliable implementations, and taking steps to ensure that specifications do not inadvertently lead to unreliable implementations. The results of the workshop will be incorporated into an OGF informational document scheduled for publication at the end of this year. The document is intended to serve as a resource for improving standard OGF and Web Service specifications and for enhancing the reliability of industrial Grid implementations. The OGF Reliability and Robustness Research Group is actively soliciting participants to contribute to this effort.
Please contact Christopher Dabrowski (email@example.com) for further information.