May 05, 2010
It's census time and the galaxy is not exempt.
If you've been following any news at all in the astronomy field, you're likely already aware of Gaia, the massive project undertaken by the European Space Agency (ESA) to create a galactic map that shows far more than the present composition of our galaxy. According to ESA, the project "is an ambitious mission to chart a three-dimensional map of our galaxy, the Milky Way, and in the process, reveal the composition, formation and evolution of the galaxy. Gaia will provide unprecedented positional and radial velocity measurements with the accuracies needed to produce a stereoscopic and kinematic census of about one billion stars in our galaxy and throughout the local group. This amounts to about 1 percent of the galactic stellar population."
If your mind is not sufficiently blown by the very concept of Gaia's aims (let alone its current state of progress, which you can read about in detail), consider this: Gaia's infrequent but immense demands for mission-critical data processing created a prime opportunity for one of the most convincing proofs of concept for cloud and HPC. The test of whether the cloud could effectively handle such demands was tackled by cloud infrastructure and development giants The Server Labs and RightScale.
The Server Labs recently led a feasibility study to test the limits of Amazon's EC2 and S3 when running data-intensive scientific applications in the cloud. The project executed a distributed astrometric process developed for the Gaia mission, showing how cloud computing could prove to be a cost-effective solution for HPC applications.
This feasibility study set out to demonstrate the possibilities of running complex scientific applications in the cloud. Since the project's demands were not constant, and the massive volumes of data only needed processing on a semi-regular basis, the cloud proved to be the most appealing host. That fit allowed The Server Labs, along with RightScale, to demonstrate how the cloud could serve as a cost- and resource-saving measure, sparing the European Space Agency from having to construct its own specialized center to handle the occasional heavy-duty processing demands.
To get at the heart of the challenges, surprises, benefits, and trouble spots involved in the viability study, it was necessary to go directly to the source and ask some of the key players for their impressions of how well the study proved that HPC can operate in the cloud while retaining the cloud's quintessential benefits, most notably cost savings and efficiency. Along the way, it became clear that this is some of the most promising news on the HPC and cloud front in some time. The sheer scale of the data processing, the approximate value of the overall resource savings, monetary and otherwise, and the relative ease of migrating to the cloud are all signs that HPC in the cloud has a chance to catch on in the mainstream soon.
According to Paul Parsons, CTO and Chief Architect at The Server Labs, and Alfonso Olias, Senior Consultant with The Server Labs, the challenges inherent in ESA's Gaia project presented the perfect opportunity to test the viability of the cloud in an HPC context.
"After the launch of the Gaia satellite the project required some complex astrometric data processing to be executed every six months. This type of non-constant processing lends itself to the cloud. The study intended to prove that the processing can be run in Amazon EC2 at a much lower cost, which would enable the European Space agency to delay or avoid the purchase of in-house hardware to do the job. We are currently undertaking a second feasibility study to compare the performance of Oracle with Amazon S3 for read-only data storage and to evaluate if the system can scale out to 1000 high CPU EC2 nodes, each of which have 8 cores. The European Space Agency will be using the cloud to do some pre-launch testing."
As one might imagine, since this was something of an experiment, there were some initial challenges, but there were pleasant surprises as well, including the migration process itself, which is so often cited as one of the first barriers enterprise and HPC users weigh when considering the costs, benefits, and overall value of running their HPC applications in the cloud. As Paul Parsons noted:
We first set out to evaluate if the astrometric processing could be run in the cloud at all. The subsequent aims were to identify the architectural challenges and to assess the financial impact of running the Gaia project's HPC data processing in the cloud. The surprise for us was that the process of migrating to the cloud was relatively painless. The architecture did not need to be changed at all, proving we had designed a well-architected loosely-coupled system.
Aside from concerns about migration, the other critical factor in weighing HPC applications in the cloud is, quite simply, performance. While this will remain an issue until technology, capability, and capacity are better aligned, the project did provide a few signs that overall performance need not be a barrier, as there are workarounds that can suffice until more progress is made on performance in HPC and the cloud. As Parsons and Olias stated of their experiences:
Traditionally, HPC has not been a good candidate for cloud computing due to its requirement for tight integration between server nodes via low-latency interconnects. The performance overhead associated with virtualization, a prerequisite technology for migrating local applications to the cloud, hits scalability and efficiency in an HPC context. High-speed networking is also a critical requirement for HPC, as clusters of servers and storage need to be able to communicate with each other as fast as possible. This will change in the future as cloud providers launch products more apt for HPC. However, as we proved in the Gaia project, the possibility of provisioning more nodes than would be possible in an in-house cluster gives us a means to circumvent these barriers to a certain extent.
Many HPC customers have a large investment in technologies such as MPI and InfiniBand, and we therefore believe that supporting MPI and providing high-speed networking in the cloud are critical requirements. Is the cloud scalable for petascale computing and beyond? Yes. But is the cloud ready for high-speed networking? The InfiniBand performance gap is increasing, but improvements are also being made in the 10 GigE area, so we will have to wait and see how public cloud providers such as Amazon's EC2 take on the challenge.
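The workaround Parsons describes, provisioning more nodes rather than faster interconnects, works precisely because the Gaia processing was designed as a loosely-coupled system: independent chunks of data can be farmed out with no inter-node messaging, so interconnect latency never becomes the bottleneck. Below is a purely illustrative sketch of that general pattern; process_chunk and the chunk list are stand-ins, not Gaia code.

```python
# Illustrative shape of a loosely-coupled scale-out: each worker
# processes its own chunk independently, with no communication
# between workers, so adding nodes shortens wall-clock time
# near-linearly without needing a low-latency interconnect.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk_id: int) -> float:
    """Stand-in for one independent unit of astrometric processing."""
    # Each chunk touches only its own slice of the data.
    return sum(i * i for i in range(chunk_id * 1000, (chunk_id + 1) * 1000))

if __name__ == "__main__":
    chunks = range(64)  # in the cloud version, chunks map onto nodes/cores
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_chunk, chunks))
    print(f"processed {len(results)} chunks")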
It is hard to deny that this proof of concept did what it set out to do: it proved that there are sustainable uses for the cloud in very large, data-intensive HPC applications and, furthermore, that with more advances in technology, the questions about whether HPC and cloud are aligned will begin to dwindle. As Parsons and Olias put it:
By using cloud-based technologies, scientists and engineers can have on-demand access to large distributed infrastructures and completely customize their execution environment. Cloud computing provides the ability to scale the computing infrastructure up and down according to the requirements at any given time. Although cloud technologies are sufficient for distributed computing, they do not yet cope with all HPC applications, some of which have tighter constraints. Basically, it will depend on the demands of HPC customers and whether the industry is willing to offer a competitive solution. A lot of effort is being made in this arena by cloud service providers. The cloud allows economies of scale and pay-per-use, so expect an evolution of the cloud to meet HPC requirements.
There are many benefits to the cloud: no upfront costs, a pay-as-you-go billing model, virtually infinite computing resources, and so on. As energy prices increase in forthcoming years, power consumption becomes another important issue for HPC clouds. As clouds become bigger, economies of scale will allow for lower energy costs compared to small in-house clusters. Some large HPC customers we talked to recently are very interested in looking at the cloud because, as they pointed out, they are not in the business of building datacenters.
Cloud computing can be a cost-effective solution for many HPC applications. Think about the opportunity cost of building your own datacenter versus deploying and running an application in the cloud within minutes. Cloud computing provides flexibility, elasticity, and the illusion of infinite computing resources. As the technology matures, we will see more HPC applications moving to the cloud.
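That opportunity-cost argument is easy to make concrete. The back-of-envelope sketch below compares yearly costs for an owned cluster against on-demand capacity used two runs per year, matching Gaia's six-monthly cadence. Every figure in it is a hypothetical assumption chosen for illustration, not a number from the study.

```python
# Back-of-envelope sketch of the opportunity-cost argument.
# All figures below are hypothetical assumptions, not study data.
NODES = 1000                 # assumed peak cluster size
HOURS_PER_RUN = 24           # assumed duration of one processing run
RUNS_PER_YEAR = 2            # Gaia-style: one run every six months
PRICE_PER_NODE_HOUR = 0.80   # assumed on-demand price, USD

cloud_cost_per_year = NODES * HOURS_PER_RUN * RUNS_PER_YEAR * PRICE_PER_NODE_HOUR

OWNED_CLUSTER_CAPEX = 3_000_000   # assumed hardware purchase, USD
AMORTIZATION_YEARS = 4            # assumed useful life
OWNED_OPEX_PER_YEAR = 400_000     # assumed power, cooling, staff, USD

owned_cost_per_year = OWNED_CLUSTER_CAPEX / AMORTIZATION_YEARS + OWNED_OPEX_PER_YEAR

print(f"cloud: ${cloud_cost_per_year:,.0f}/year")   # 38,400 with these numbers
print(f"owned: ${owned_cost_per_year:,.0f}/year")   # 1,150,000 with these numbers
```

With hardware sitting idle between runs, almost any reasonable set of numbers tilts the same way, which is the crux of the case for intermittent HPC workloads in the cloud.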
The overall conclusion the team from The Server Labs reached is that there is a bright future for HPC in the cloud, but that future is still somewhat out of reach for mainstream HPC. The benefits may be clear, but until there are more proof of concept projects like theirs, a future in which the cloud is ideal for HPC can still rightfully be considered distant. How distant depends, of course, like everything else in HPC and cloud, on further research, development, and efforts to prove what seems viable in theory: cost and resource savings without dramatic reductions in performance.