August 04, 2011
Ioan Raicu, an assistant professor in the Department of Computer Science at the Illinois Institute of Technology (IIT) and guest research faculty in the Math and Computer Science Division at Argonne National Laboratory, has a long-standing interest in the challenges of data-intensive computing and distributed systems.
As founder and director of the Data-Intensive Distributed Systems Laboratory (DataSys) at IIT, he has been tackling problems with common threads running across cloud computing, exascale computing, and the new programming and efficiency challenges of manycore processors.
Raicu compared current supercomputing capacity, and the capacity that will fuel the coming age of exascale, to that of major cloud computing providers like Amazon.
In doing so, he made the claim that Amazon in 2018 will look very similar to exascale supercomputers, with node counts in the many hundreds of thousands.
Currently, Amazon's data centers are spread across six locations, with an estimated 40,000 servers and 320,000 cores, consuming an estimated $12 million per year in energy. Raicu claims that this already parallels the systems at major institutions, and that by 2018 Amazon's revenue, currently a mere $250 million per year, could grow anywhere from 100 to 1000 times what it is now.
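The figures above can be sanity-checked with a back-of-the-envelope extrapolation. This is only a sketch, assuming the article's estimates are accurate and that server and core counts scale uniformly with the quoted 100x to 1000x growth range:

```python
# Back-of-the-envelope extrapolation from the estimates quoted above.
servers_2011 = 40_000
cores_2011 = 320_000

cores_per_server = cores_2011 // servers_2011  # implies ~8 cores per server

# Projected footprint under the article's 100x and 1000x growth scenarios,
# assuming the hardware footprint scales with revenue (an assumption).
for growth in (100, 1000):
    servers = servers_2011 * growth
    cores = cores_2011 * growth
    print(f"{growth}x growth: {servers:,} servers, {cores:,} cores")
```

Even at the low end of that range, the projected server count lands in the millions, which is why Raicu draws the parallel to exascale-class node counts.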
This growth comes at a cost, however. While Amazon spends an estimated $12 million per year on energy alone today, by the time it reaches the exascale level Raicu predicts, that figure could soar to as much as $20 million per year.
During his talk, Raicu pointed out a number of ways that the challenges of exascale map directly onto the problems that major IaaS vendors like Amazon will face. Among the expected hurdles is, perhaps not surprisingly, the energy efficiency issue underlying his yearly expenditure estimates. With talk that exascale systems will likely require their own dedicated power plants, what would a set of distributed data centers housing many hundreds of thousands of nodes require?
Raicu argues that we need to look to more power-efficient technologies that will not only aid progress toward exascale computing, but that can also be harnessed to power the growing mega-clouds. Even with the efficiency problem solved, there are other bottlenecks, including the usual suspects for major data centers and supercomputers: memory and storage.
Even with the efficiency and hardware problems solved, there need to be applications that can take advantage of the vast numbers of cores available. For exascale this is challenging enough, but for a distributed computing powerhouse like Amazon, operating at such scale, solving parallel programming challenges will be just as important, and in some ways more complex.
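To give a flavor of the kind of fine-grained parallelism such systems demand, here is a minimal sketch using only Python's standard library. It assumes nothing about Amazon's or any exascale system's actual software stack; the task function is purely illustrative:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_task(i):
    # Stand-in for one small unit of scientific work. At exascale,
    # millions to billions of such tasks must be dispatched across
    # nodes with minimal scheduling overhead.
    return i * i

if __name__ == "__main__":
    # Fan the tasks out across local cores and aggregate the results.
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(simulate_task, range(1000)))
    print(total)
```

The hard part at exascale is not expressing this pattern on one machine, but keeping it efficient when the pool spans hundreds of thousands of nodes, which is exactly the territory Raicu's DataSys work targets.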
To better understand the context of some of his statements, check out the video of the talk presented below.