January 21, 2011
CERN is the largest particle physics laboratory in the world and the birthplace of the World Wide Web. One of the most important physics experiments performed at CERN takes place at the LHC accelerator, which generates 15 petabytes of experimental data each year, data that requires some 100,000 CPU cores to process. These huge computing needs have made CERN a reference for building and operating large-scale computing facilities and a leader in IT research.
As part of this tradition of innovation, CERN has always been at the cutting edge of exploring new paradigms to improve its computing infrastructure.
The popularization of cloud computing in 2006, along with the maturation of virtualization technologies, was a movement that researchers at CERN could not overlook, especially given complex computational needs that demand they get everything they can from their infrastructure. In 2008, the CERN IT Department started the lxcloud project to explore the Infrastructure as a Service (IaaS) model.
The initial goal of the project was to apply virtualization directly to CERN's batch-computing services in an effort to obtain its classical benefits, such as consolidation, easier system administration, and the decoupling of servers from services.
The use of virtualized computing servers would also allow CERN's IT Department to provide a richer variety of computing environments to their scientists. However, as anyone with even a rough understanding of the practical complexities of virtualization knows, virtualizing tens of thousands of computing servers is no easy task.
From the beginning of CERN's virtualization project, it was clear that the paramount goal would be a scalable and robust infrastructure, one able to move and distribute virtual machine images to hundreds of servers and to provide the virtual servers with basic services such as networking and access to shared file systems. Moreover, it was crucial to find a management layer able to deal with a large number of virtual servers and to integrate all the storage, networking, virtualization, and system management services of the infrastructure.
Although there were a number of competing software packages the researchers could have considered for this project, OpenNebula was a clear candidate for the cloud management layer because of the integration capabilities needed to leverage CERN's existing infrastructure services, and because of its robust and scalable architecture.
Furthermore, CERN needed a toolset that could fulfill the requirements of managing thousands of virtualized servers and that exposed a wide variety of APIs and interfaces, all of which were essential for integrating the system with CERN's procedures and usage patterns.
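To give a flavor of what such programmatic integration looks like, the sketch below drives OpenNebula's core XML-RPC interface (the one.vm.allocate call) from Python. The endpoint, credentials, and template values are placeholders for illustration, not CERN's actual configuration.

```python
# Minimal sketch: allocating a VM through OpenNebula's XML-RPC API.
# Endpoint, credentials, and template contents are placeholders.
import xmlrpc.client

ONE_ENDPOINT = "http://frontend.example.org:2633/RPC2"  # default ONE port
SESSION = "oneadmin:opennebula"  # "user:password" session string

# A small VM description in OpenNebula's template syntax; the image
# and network IDs are illustrative.
TEMPLATE = """
NAME   = "batch-worker"
CPU    = 1
MEMORY = 1024
DISK   = [ IMAGE_ID = 0 ]
NIC    = [ NETWORK_ID = 0 ]
"""

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)
response = server.one.vm.allocate(SESSION, TEMPLATE)
success, result = response[0], response[1]
if success:
    print("allocated VM with id", result)
else:
    print("allocation failed:", result)
```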
Significant progress has been made in the implementation of the CERN IaaS cloud. In spring 2010, about 480 physical servers were used to build a cloud prototype, which was stress-tested by over-provisioning approximately 16,000 virtual computing servers. The experiments in this phase helped us to discover the limitations of the infrastructure, allowing us to push the boundaries further.
At the same time, these stress tests helped to uncover some limitations of OpenNebula itself, which have since been fixed in the 2.0 release. This post (http://blog.opennebula.org/?p=620) provides a fairly technical overview of lxcloud, its use of OpenNebula (ONE), and the cloud we are building at CERN.
After more than a year of prototyping and development, the lxcloud project has finally entered production status with 48 virtual computing servers managed by OpenNebula. These virtual servers run production jobs that analyze real data collected by the LHC accelerator. So far, "OpenNebula has proven stable, scalable and easy to develop," said Dr. Ulrich Schwickerath of the batch virtualization team at CERN.
The OpenNebula community has also seen benefits on the heels of this first year of joint efforts between CERN and the OpenNebula developers. To mention just a few outcomes now available to the community: a more robust and scalable management core, support for new storage and database backends, an enhanced placement and scheduling module, and new image management tools (scp-wave and scp-tsunami) for large-scale facilities.
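The idea behind those image distribution tools deserves a brief illustration: rather than pushing an image from a single source to every host in turn, each host that has already received a copy becomes a source for the next round, so the number of simultaneous transfers doubles with every wave. The toy Python sketch below captures that wave pattern under simplifying assumptions (passwordless ssh between placeholder hosts); it is not the actual scp-wave code.

```python
# Toy sketch of the "wave" fan-out idea behind tools like scp-wave:
# every host that already holds the image joins the pool of sources,
# so transfers double each round (logarithmic number of waves).
# Host names and paths are placeholders, not CERN's actual setup.
import subprocess
from concurrent.futures import ThreadPoolExecutor

IMAGE = "/var/lib/one/images/worker.img"
HOSTS = ["node%03d" % i for i in range(1, 17)]  # hosts still waiting

def copy(src, dst):
    # Run scp *on* the source host so the data flows source -> destination.
    subprocess.check_call(["ssh", src, "scp", IMAGE, "%s:%s" % (dst, IMAGE)])
    return dst

sources, pending = ["frontend"], list(HOSTS)
with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    while pending:
        batch = pending[:len(sources)]   # at most one transfer per source
        pending = pending[len(sources):]
        done = pool.map(copy, sources, batch)
        sources += list(done)            # receivers join the sources
print("image distributed in waves to", len(HOSTS), "hosts")
```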
This is not the end of the story. CERN hopes to grow this initial private cloud to a larger scale, both to add flexibility to its IT processes and to offer a public cloud interface; CERN has already successfully tested OpenNebula's EC2 API implementation. New challenges in cloud management will be addressed along the way.
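For readers curious what that EC2 compatibility looks like in practice, the hedged sketch below points the boto library at a hypothetical OpenNebula econe endpoint. The host, credentials, port, and AMI identifier are all placeholders to be checked against a real deployment.

```python
# Sketch: launching a VM through OpenNebula's EC2-compatible interface
# (the "econe" server) with boto. Because OpenNebula answers a subset
# of the EC2 query API, standard EC2 clients can work unmodified.
# All connection details below are assumed values, not a real endpoint.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="opennebula", endpoint="cloud.example.org")
conn = boto.connect_ec2(
    aws_access_key_id="oneadmin",       # OpenNebula user (placeholder)
    aws_secret_access_key="secret",     # and password    (placeholder)
    is_secure=False, region=region,
    port=4567, path="/")                # assumed econe port and path

reservation = conn.run_instances("ami-00000001")  # placeholder image id
for instance in reservation.instances:
    print(instance.id, instance.state)
```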
About the Author
Rubén S. Montero, PhD is an associate professor in the Department of Computer Architecture and Systems Engineering at Complutense University of Madrid. In the past, he has held several visiting positions at ICASE (NASA Langley Research Center, VA). Over the years, he has published more than 70 scientific papers in the field of high-performance parallel and distributed computing and has contributed to more than 20 research and development programmes. He is also heavily involved in organizing the Spanish e-science infrastructure as a member of the infrastructure expert panel of the national e-science initiative.
His research interests lie mainly in resource provisioning models for distributed systems, in particular: grid resource management and scheduling, distributed management of virtual machines, and cloud computing, where he is especially interested in the interoperation of cloud infrastructures. He is also actively involved in several open-source grid initiatives, such as the Globus Toolkit and the GridWay metascheduler, where he coordinated the technical activities of the project until 2008. Currently, he is co-leading the research and development activities of OpenNebula, a distributed virtual machine manager.