November 15, 2004
CSC, the Finnish IT Center for Science, took part in one of the most important moves to bring together national supercomputing infrastructures to advance science and technology in Europe. Several leading European HPC centers have devised an innovative strategy to build a terascale supercomputing facility with continental scope, called Distributed European Infrastructure for Supercomputing Applications (DEISA). The resulting system will consist of more than 4,000 processors, a huge memory space and an aggregate computing power of over 22 teraflops.
The main objective of the project is to enable scientific discovery across a broad spectrum of science and technology, by the deployment and operation of a world class, distributed supercomputing environment. This becomes possible through a deep integration of existing national high-end platforms, tightly coupled by a dedicated network and supported by innovative system and Grid software. Strategies of coordinated operation have been identified and agreed, which will make the integrated infrastructure superior to the sum of its parts. The infrastructure will allow leading scientists across Europe to use the bundled supercomputing power and the related global data management facilities in a coherent, efficient and comfortable way.
This project started its activities in May with eight HPC centers from Finland, France, Germany, Italy, the United Kingdom and the Netherlands. The project is partially funded by the European Commission as part of a vigorous initiative aimed at deploying Grid-enabled, production-quality research infrastructures in Europe. The project can expand horizontally by adding new systems, new architectures, and new partners, thus increasing the capabilities and attractiveness of the infrastructure in a non-disruptive way. It will be open to collaboration with other European HPC centers and related initiatives worldwide.
This integrated supercomputing power is intended to boost European competitiveness in those areas of science where extreme performance is needed. The provision of high-performance computing resources to researchers has traditionally been the objective and mission of the national HPC centers in Europe. However, the increasing competition between Europe, the United States and Japan is creating growing demand for computational resources at the highest performance levels, as well as a need for fast innovation. To stay competitive, major investments are needed every two years -- an innovation cycle that is difficult to follow even for the most prosperous countries.
The supercomputing infrastructure fully exploits the network bandwidth provided by the European research network GEANT and the national research networks. It also relies heavily on the aggressive evolution planned for these and other European organizations.
"The concept of the distributed supercomputing infrastructure is based on the educated guess that network bandwidth will become, by the ends of this decade, a commodity very much like raw computing power became a commodity in the early 90s," said Victor Alessandrini from IDRIS-CNRS, director of the project.
"A tightly integrated European supercomputing environment is mandatory if we want to share the extreme computational resources that are needed for extreme efficiency and performance. This is the road that is being paved by DEISA," continued Alessandrini.
For further information, see the DEISA project Web site at www.deisa.org.