September 21, 2011
This week at the EGI Technical Forum in Lyon, France, I spent some time talking with a number of organizations that back comprehensive national grid computing projects in Europe. While some of these grid efforts are well established, relatively newer organizations, including the Swiss National Grid Association (SwiNG), are finding new, innovative ways to boost their countries’ competitiveness through distributed resource management and sharing.
In addition to providing practical support and services for scientific research projects in Europe, these groups collectively represent a range of approaches to building national infrastructure-sharing systems that give equal weight to the sustainability, functionality, and ultimate use of these resources. In other words, relative newcomers like SwiNG can serve as solid use cases for anyone weighing the viability of setting up a localized grid infrastructure.
I sat down with Dr. Sergio Maffioletti, Project Director of the Grid Computing Competence Center (GC3) at the University of Zurich and a SwiNG executive board member, to discuss the goals of, and progress toward, creating a national grid infrastructure. The group’s mission is to ensure the competitiveness of Swiss science, education, and industry by creating value through shared resources. Beyond that, it aims to establish and coordinate a sustainable infrastructure and provide the platform necessary for interdisciplinary collaboration.
Maffioletti described in detail the consortium of 19 Swiss universities that was assembled, starting back in 2008. The impetus for the effort was the need to better channel Swiss grid activities, many of which are focused on the needs of Switzerland’s high energy physics community (at CERN and other sites).
The other side of SwiNG’s mission was to do a better job of enabling research communities to make use of sophisticated distributed resources. According to Maffioletti, this was a challenge on a number of fronts, including authentication and general usability or ease of access.
The video below provides some context for the types of tools and innovations required for building such an initiative from the base framework of the ARC middleware. It also gives an update on where the project stands since its inception in 2008.
Maffioletti also spent some time describing the management framework necessary to create broad grid initiatives. While it is a bit complex at first glance, the following graphic from SwiNG provides a suitable overview of the layers of management needed to support a wide network of distributed resources.
At the event this week there were a number of other national grid initiative bodies, some from countries we do not often hear a great deal about. For instance, representatives were on site from the Polish Infrastructure for Information Science Support in the European Research Space (PL-Grid). This network of distributed systems supports the biology, quantum chemistry, physics, and simulation needs of the Polish research community via a number of platforms that have been developed or refined for researchers across Poland.
Clouds, grid computing, and other ways of opening collaboration, access, and research opportunities are at the top of the agenda this week in Lyon—more coming on other interesting approaches to solving distributed computing challenges as the next few days unfold.
Jun 19, 2013 |
Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, all trained at San Francisco’s Hack Reactor, an institute designed for intensive, fast-paced programming instruction, built a program based on the N-Queens algorithm designed by the University of Cambridge’s Martin Richards and modified it to run in parallel across multiple machines.
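For context, the N-Queens problem asks how many ways N queens can be placed on an N-by-N board so that no two attack each other. Richards' version is a highly optimized bitwise backtracker; purely as an illustrative sketch (not the team's actual code), the same bitmask idea looks like this in Python:

```python
def count_n_queens(n):
    """Count N-Queens solutions with bitmask backtracking.

    Illustrative sketch only, not the Richards/Hack Reactor code.
    Three bitmasks track squares attacked by queens in earlier rows:
    occupied columns, and the two diagonal directions.
    """
    full = (1 << n) - 1  # board row with every column set

    def solve(cols, diag1, diag2):
        if cols == full:              # a queen in every column: one solution
            return 1
        count = 0
        free = ~(cols | diag1 | diag2) & full  # safe squares in this row
        while free:
            bit = free & -free        # take the lowest safe square
            free -= bit
            count += solve(cols | bit,
                           (diag1 | bit) << 1,   # diagonal shifts left next row
                           (diag2 | bit) >> 1)   # diagonal shifts right next row
        return count

    return solve(0, 0, 0)

print(count_n_queens(8))  # 92 solutions on a standard chessboard
```

A common way to distribute such a search across machines, and plausibly what a parallel version would do, is to partition the first-row queen placements among workers and sum their partial counts.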
Jun 17, 2013 |
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service and, in doing so, is partnering with Verne Global, whose Icelandic datacenter is known for its green computing.
Jun 12, 2013 |
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. To that end, IBM has released Redbooks publications, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013 |
The San Diego Supercomputer Center launched a public cloud system for universities in the area, designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California system.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.