January 17, 2011
In nearly every research discipline, the number of scientific instruments feeding the stream of incoming data has been climbing. While this has spurred any number of software developments in recent years, without adequate hardware processing capability to handle the deluge, the possibilities that lie in that incoming data go unrealized.
Accordingly, a number of research institutions are finding new ways to handle the data deluge, both by reinventing grid-based paradigms and by looking to cloud computing models to extend already stretched computational resources.
Astronomy is one of several fields contending with a glut of data brought about by more streamlined, complex, and numerous instruments, and not surprisingly, its researchers are looking to grid and cloud models to handle the flood.
Researchers Nicholas Ball and David Schade discussed the concept of astroinformatics in detail, stating that, “in the past two decades, astronomy has gone from being starved for data to being flooded by it. This onslaught has now reached the stage where the exploitation of these data has become a named discipline in its own right…This naming follows in analogy from the already established fields of bio- and geoinformatics, which contain their own journals and funding.”
Like its counterparts in other nations with advanced astronomy research programs, Canada’s astronomy community is looking for ways to approach its big data problem in an innovative way that combines elements of both grid and cloud computing. Its efforts could reshape current views of astroinformatics processing and help the country move toward its goal of becoming a global center for advancements in astronomical research.
The Canadian Advanced Network for Astronomical Research (CANFAR) is behind an ongoing project in conjunction with CANARIE (a national research network organization) to create a cloud-based platform to support astronomy research. The effort is being led by researchers at the University of Victoria in British Columbia in conjunction with the Canadian Astronomy Data Centre (CADC) and with participation from 11 other Canadian universities.
The goal of the project is to “leverage customized virtual compute and storage clouds, providing astronomers with access to many datasets and resources previously constrained by their local hardware environment.”
The CANFAR platform will take advantage of CANARIE’s high-speed network and a number of open source and proprietary cloud and grid computing tools to allow the country’s astronomy researchers to better handle the vast datasets being generated by global observatories. It will also draw on storage and compute capacity from Compute Canada, along with expertise from the Herzberg Institute of Astrophysics and the National Research Council of Canada.
CANFAR is driven forward by a number of objectives to support its mission to create a “global machine” that will help researchers further their astronomy goals. The creators of the project stated, “All of the necessary components exist to support science but they don’t work well together in that mission. The type of service layer that is needed to support a high level of integration of these components for astronomy does not exist and needs to be invented, installed, and operated.”
What CANFAR Can Do
The value proposition of CANFAR is that it will enable astronomers to process the data from astronomical surveys using a wide array of custom software packages and, of course, to widen the set of computational resources available for these purposes.
A report on the project described CANFAR as “an operational system for the delivery, processing, storage, analysis, and distribution of very large astronomical datasets” and as a project that pulls together a number of Canadian entities, including the Canadian National Research Network (CANARIE), Compute Canada’s extensive grid and storage capabilities, and the CADC data center to create a “unified storage and processing system.”
The report also describes the CANFAR project’s technical details, stating that it has “combined the best features of the grid and cloud processing models by providing a self-configuring virtual cluster deployed on multiple cloud clusters” that takes elements from grid-based services as well as a number of cloud services, including “Condor, Nimbus or OpenNebula, Eucalyptus or Amazon EC2, Xen, VOSpace, UWS, SSO, CDP and GMS.”
The researchers behind the CANFAR project noted that when considering different virtualization options, they considered both Xen and KVM, but settled on Xen because of its wider popularity at the time and because it was the only one that facility operators had used on an experimental basis in the past.
On the scheduler front, there were complexities because the CANFAR virtual cluster needed a batch job processing system that would provide the functionality of a grid cluster, making both Grid Engine and Condor natural options. The team settled on Condor, however, because upon examining the environment, they found that Grid Engine would have required modifying the cluster configuration any time a VM was added or removed.
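The distinction can be made concrete with a short sketch. The Python below is a hypothetical illustration, not CANFAR, Grid Engine, or HTCondor code: a Grid Engine-style cluster treats its host list as fixed configuration that an operator must edit, while a Condor-style pool lets VM workers announce themselves to a central manager as they boot and drop out when they terminate.

```python
# Hypothetical sketch contrasting static and dynamic cluster membership.
# Class and method names are illustrative only; this is not CANFAR,
# Grid Engine, or HTCondor source code.

class StaticCluster:
    """Grid Engine-style pool: the host list is fixed configuration,
    so adding or removing a VM means an operator edits the cluster."""
    def __init__(self, hosts):
        self.hosts = list(hosts)  # set once by an administrator

    def add_host(self, host):
        # In practice this was a manual reconfiguration step, which is
        # what made Grid Engine awkward for an elastic pool of VMs.
        raise RuntimeError("static pool: reconfigure the cluster to add " + host)


class DynamicPool:
    """Condor-style pool: workers register with a central manager as
    they boot, so VMs can join and leave without reconfiguration."""
    def __init__(self):
        self.hosts = set()

    def register(self, host):
        self.hosts.add(host)      # a newly booted VM announces itself

    def deregister(self, host):
        self.hosts.discard(host)  # a terminated VM simply drops out


if __name__ == "__main__":
    static = StaticCluster(["node-01", "node-02"])
    try:
        static.add_host("vm-003.cloud.example")
    except RuntimeError as err:
        print(err)

    pool = DynamicPool()
    pool.register("vm-001.cloud.example")    # VM boots, joins the pool
    pool.register("vm-002.cloud.example")
    pool.deregister("vm-001.cloud.example")  # VM terminates, leaves
    print(sorted(pool.hosts))
```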
The team selected Nimbus as the “glue between cloud clusters,” which “examined the workload in the Condor queue and used resources from multiple cloud clusters to create a virtual cluster suitable for the current workload,” making the Nimbus toolkit the primary cloud technology behind the cloud scheduler.
The team also developed support for OpenNebula, Eucalyptus, and EC2, but decided on Nimbus because it was open source and permitted the “cloud workload to be intermixed with conventional batch jobs unlike other systems.” The research team behind CANFAR stated that they believed “that this flexibility makes the deployment more attractive to facility operators.”
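A rough sense of how such a cloud scheduler behaves can be sketched in a few lines of Python. This is a simplified, hypothetical sketch of the polling loop described above; the function names, the cloud list, and the in-memory job records are all invented for illustration, while the real CANFAR system built this logic on Condor and the Nimbus toolkit.

```python
# Simplified, hypothetical sketch of a cloud-scheduler polling loop:
# watch the batch queue and grow or shrink a virtual cluster to match.
# All names here are invented for illustration; the real CANFAR
# scheduler built this logic on Condor and the Nimbus toolkit.

CLOUDS = ["cloud-a", "cloud-b"]  # participating cloud clusters

def idle_jobs(queue):
    """Stand-in for querying the Condor queue for jobs awaiting a slot."""
    return [job for job in queue if job["state"] == "idle"]

def running_vms(vms):
    return [vm for vm in vms if vm["state"] == "running"]

def schedule(queue, vms, max_vms=10):
    """One pass of the loop: boot a worker VM per idle job, up to a cap,
    spreading requests round-robin across the available clouds."""
    idle = len(idle_jobs(queue))
    running = len(running_vms(vms))
    to_boot = min(idle, max_vms - running)
    for i in range(to_boot):
        cloud = CLOUDS[i % len(CLOUDS)]  # draw on multiple cloud clusters
        vms.append({"state": "running", "cloud": cloud})
        print("booting worker VM on", cloud)
    if idle == 0:
        # No demand: release idle workers so the virtual cluster shrinks.
        for vm in running_vms(vms):
            vm["state"] = "terminated"

if __name__ == "__main__":
    queue = [{"state": "idle"}, {"state": "idle"}, {"state": "idle"}]
    vms = []
    schedule(queue, vms)  # in production this pass would repeat on a timer
```

The essential point is that the batch queue itself drives provisioning: the virtual cluster grows while jobs are waiting and shrinks again once the queue drains.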
With Linux as the operating system and an emphasis on interoperability and open source, CANFAR will be a proving ground for the use of these scheduling and cloud management tools on large datasets. Alongside other projects built on similar interoperability and open source paradigms (though with different package choices), such as NASA’s Nebula cloud, it will likely produce a number of exciting proof-of-concept reports over the course of the next year.
CANARIE’s vision for the project is that it will also “provide astronomers with novel and more immediate hands-on and interactive ways to process and share very large amounts of data emerging from space exploration.”
In addition to helping researchers better manage the incredible amounts of data filtering in from collection sites, the project’s goals are also tied to creating collaboration opportunities among geographically dispersed scientists.
As the CANFAR team noted, “a schematic of contemporary astronomy research shows that the system is essentially a networked global array of infrastructure with scientists and telescopes as I/O devices.”
Slides describing some of the current research challenges and potential benefits as well as some of the context for the project can be found here.