September 21, 2011
This week at the EGI Technical Forum in Lyon, France I spent some time talking with a number of organizations that back comprehensive national grid computing projects in Europe. While some of these grid efforts are well established, relatively new organizations, including the Swiss National Grid Association (SwiNG), are finding innovative ways to boost their countries' competitiveness via distributed resource management and sharing.
In addition to providing practical support and services for scientific research projects in Europe, these groups collectively represent a range of methods for forming national infrastructure sharing systems that grant equal weight to the sustainability, functionality and ultimate use of these resources. In other words, looking to some relative newcomers (like SwiNG) might provide a solid use case for those considering the viability of setting up a localized grid infrastructure.
I sat down with Dr. Sergio Maffioletti, Project Director of the Grid Computing Competence Center (GC3) at the University of Zurich and a SwiNG executive board member, to discuss the goals and progress toward creating a national grid infrastructure. The group's mission is to ensure the competitiveness of Swiss science, education and industry by creating value through shared resources. Moreover, they aim to establish and coordinate a sustainable infrastructure and provide the necessary platform for interdisciplinary collaboration.
Maffioletti described in detail the consortium of 19 Swiss universities that was assembled, starting back in 2008. The impetus for the effort was born out of the need to better channel Swiss grid activities, many of which are focused on the needs of Switzerland's high energy physics community (at CERN and other sites).
The other side of the mission behind SwiNG was to do a better job of enabling research communities to make use of sophisticated distributed resources. According to the SwiNG representative, this was a challenge on a number of fronts, including authentication and general usability or ease of access.
The video below provides some context for the types of tools and innovations required for building such an initiative from the base framework of the ARC middleware. It also gives an update on where the project stands since its inception in 2008.
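For readers unfamiliar with ARC (the NorduGrid Advanced Resource Connector middleware mentioned above), a job is described in ARC's xRSL language and handed to the command-line client. The following is only a minimal sketch of what that workflow looks like; the cluster endpoint and file names are illustrative placeholders, not actual SwiNG resources:

```shell
# Minimal xRSL job description (job.xrsl) -- values here are illustrative.
cat > job.xrsl <<'EOF'
&(executable="/bin/echo")
 (arguments="hello from the grid")
 (jobName="demo-job")
 (stdout="out.txt")
 (stderr="err.txt")
EOF

# Submit with ARC's command-line client; the endpoint is a placeholder.
arcsub -c example-cluster.example.org job.xrsl

# Check status of, and later retrieve output from, submitted jobs.
arcstat -a
arcget -a
```

Authentication in ARC relies on grid certificates and proxies (set up with `arcproxy` before submitting), which is one concrete example of the usability hurdles Maffioletti described.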
Maffioletti also spent some time describing the management framework necessary to create broad grid initiatives. While it is a bit complex at first glance, the following graphic from SwiNG provides a suitable overview of the layers of management needed to support a wide network of distributed resources.
At the event this week there were a number of other national grid initiative bodies, some from countries that we often do not hear a great deal about. For instance, representatives were on site from the Polish Infrastructure for Information Science Support in the European Research Space (PL-Grid). This network of distributed systems supports the biology, quantum chemistry, physics and simulation needs of the Polish research community via a number of platforms that have been developed or refined for researchers across Poland.
Clouds, grid computing, and other ways of opening collaboration, access, and research opportunities are at the top of the agenda this week in Lyon—more coming on other interesting approaches to solving distributed computing challenges as the next few days unfold.