February 21, 2011
HPC is currently making a transition to the cloud computing paradigm, and many HPC users are porting their applications to cloud platforms. The cloud provides several major benefits, including scalability, elasticity, the illusion of infinite resources, hardware virtualization, and a “pay-as-you-go” pricing model. These benefits seem very attractive not only for general business tasks, but also for HPC applications when compared with setting up and managing dedicated clusters. However, how well these benefits translate into performance for HPC applications is still an open question.
We recently had the experience of porting an HPC application, Numerical Generation of Synthetic Seismograms, onto Microsoft’s Windows Azure cloud, and we have some observations to share about the challenges ahead for HPC in the cloud.
Numerical generation of synthetic seismograms is an HPC application that simulates seismic waves in three-dimensional complex geological media by explicitly solving the seismic wave equation using numerical techniques such as finite-difference, finite-element, and spectral-element methods. The computation is loosely coupled, and the datasets require massive storage. Real-time processing is a critical requirement for synthetic seismogram generation.
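To make the numerical technique concrete, the following minimal sketch solves the 1D acoustic wave equation with a second-order finite-difference scheme and records a synthetic trace at one receiver. The grid size, time step, velocity, and source wavelet are illustrative choices only; the real application works on 3D models that are orders of magnitude larger.

```python
import numpy as np

# Illustrative 1D acoustic wave equation solver: u_tt = c^2 * u_xx.
# Parameters are hypothetical; real synthetic-seismogram runs use 3D grids.
nx, nt = 1000, 2000                 # grid points, time steps
dx, dt, c = 10.0, 0.001, 3000.0     # spacing (m), step (s), velocity (m/s)

u_prev = np.zeros(nx)               # wavefield at time n-1
u_curr = np.zeros(nx)               # wavefield at time n
u_next = np.zeros(nx)               # wavefield at time n+1
seismogram = np.zeros(nt)           # recording at a single receiver

src, rec = nx // 2, nx // 4         # source and receiver indices
coeff = (c * dt / dx) ** 2          # CFL-related coefficient (0.09 here, stable)

for n in range(nt):
    # Second-order centered finite differences in space and time.
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + coeff * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    # Inject a simple Ricker-like source pulse at the source location.
    t = n * dt
    u_next[src] += (1 - 2 * (np.pi * 10 * (t - 0.1)) ** 2) * \
                   np.exp(-(np.pi * 10 * (t - 0.1)) ** 2)
    seismogram[n] = u_next[rec]     # record the synthetic trace
    u_prev, u_curr = u_curr, u_next.copy()
```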
When executing such an application on traditional supercomputers, submitted jobs often wait for minutes or even hours to be scheduled. Although a dedicated computing cluster might be able to respond in nearly real time, it is not elastic, which means that the response time may vary significantly when the number of service requests changes dramatically.
Given these challenges, the elastic nature of cloud computing seems like an ideal fit for our application: it promises much faster response times and the ability to scale up and down with the volume of requests.
We have ported our synthetic seismogram application to Microsoft’s Windows Azure. As one of the top competing cloud service providers, Azure offers a Platform as a Service (PaaS) architecture, in which users manage their applications and execution environments but do not need to control the underlying infrastructure such as networks, servers, operating systems, and storage. This helps developers focus on their applications rather than on managing the cloud infrastructure.
Some useful features Windows Azure provides for HPC applications include automatic load balancing and checkpointing. Azure divides its storage abstractions into partitions and provides automatic load balancing of partitions across its servers. Azure monitors the usage patterns of the partitions and servers and adjusts the grouping or splitting of workload among the servers accordingly.
Checkpointing is implemented using progress tables, which support restarting previously failed jobs. These tables store the intermediate persistent state of a long-running job and record the progress of each step. When a failure occurs, we can consult the progress table and resume from the point of failure. The progress table is especially useful when a compute node fails and its job is taken over by another compute node, as sketched below.
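The following is a minimal sketch of how such a progress table can make a long-running job restartable. The real application keeps this table in Azure Table Storage; here sqlite3 stands in for it so the example is self-contained, and the job and step names are hypothetical.

```python
import sqlite3

# Minimal sketch of a "progress table" for restartable jobs.  sqlite3 is
# used here only to keep the example self-contained and persistent.
conn = sqlite3.connect("progress.db")
conn.execute("""CREATE TABLE IF NOT EXISTS progress
                (job_id TEXT, step INTEGER, status TEXT,
                 PRIMARY KEY (job_id, step))""")

def record_step(job_id, step, status):
    """Persist the state of one step so another node can resume later."""
    conn.execute("INSERT OR REPLACE INTO progress VALUES (?, ?, ?)",
                 (job_id, step, status))
    conn.commit()

def next_unfinished_step(job_id, total_steps):
    """Find the first step that has not completed; resume from there."""
    done = {row[0] for row in conn.execute(
        "SELECT step FROM progress WHERE job_id=? AND status='done'",
        (job_id,))}
    for step in range(total_steps):
        if step not in done:
            return step
    return None  # job already finished

def run_job(job_id, total_steps):
    start = next_unfinished_step(job_id, total_steps)
    if start is None:
        return
    for step in range(start, total_steps):
        record_step(job_id, step, "running")
        # ... do the actual work for this step (e.g. one simulation chunk) ...
        record_step(job_id, step, "done")

run_job("synthetic-seismogram-42", total_steps=10)
```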
Challenges Ahead for HPC in the Cloud
The overall performance of our application on the Azure cloud is good compared to clusters in terms of execution time and storage cost. However, many challenges remain for cloud computing, and specifically for Windows Azure.
Dynamic scalability - The first and foremost problem with Azure is that its scalability does not live up to expectations. Dynamic scalability is a major feature of our application: depending on the response time of user queries, the application scales compute nodes up and down dynamically. We set the threshold response time for queries to 2 milliseconds. If the response time of a query exceeds the threshold, the application requests an additional compute node to cope with the busy queries. But allocating a compute node may take more than 10 minutes, and because of such a delay, the newly allocated compute node cannot handle the busy queries in time.
These scheduling delays are a real concern and point to the need for an effective, dynamic load-management system that can react in time to changes in the HPC application’s requirements. In the other direction, the application scales down by releasing compute nodes that have no user queries. De-allocation of compute nodes on Azure is an asynchronous process: Azure randomly picks one of the compute nodes and de-allocates it. As a result, the application cannot process user queries until the de-allocation completes, which may slow down performance. A simple version of this scaling policy is sketched below.
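Below is a rough sketch of the scaling policy described above, using the 2 ms threshold from our application. The allocate_node() and deallocate_node() functions are placeholders for the Azure management operations, which, as noted, can take more than 10 minutes to take effect; the idle limit is an illustrative value.

```python
import time

# Illustrative autoscaling policy driven by query response time.
RESPONSE_THRESHOLD = 0.002   # 2 ms threshold used in our application
IDLE_LIMIT = 60              # seconds a node may stay idle before release (illustrative)

def allocate_node():
    # Placeholder for the Azure management call; may take >10 minutes to finish.
    print("requesting an additional compute node")

def deallocate_node(node):
    # Placeholder for the asynchronous de-allocation call.
    print(f"releasing idle compute node {node}")

def scale(response_times, idle_nodes):
    """Decide whether to grow or shrink the pool of compute nodes."""
    # Scale up when a recent query exceeds the response-time threshold.
    if response_times and max(response_times) > RESPONSE_THRESHOLD:
        allocate_node()
    # Scale down nodes that have had no queries for a while.
    now = time.time()
    for node, last_used in list(idle_nodes.items()):
        if now - last_used > IDLE_LIMIT:
            deallocate_node(node)
            del idle_nodes[node]

# Example: one slow query triggers a scale-up request; one stale node is released.
scale(response_times=[0.0015, 0.0031],
      idle_nodes={"node-3": time.time() - 120})
```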
Low-level control to optimize performance - We did not have good control over the compute nodes. Our application reads a user query and splits the job into sub-jobs distributed among compute nodes. Each compute node requests a set of data from storage depending on its sub-jobs. If the next user’s query is the same, will the previous set of data be reused? Or will the same process be executed all over again, i.e., requesting the same set of data from storage and redoing the computation? If each request to storage incurs a latency of 15 milliseconds, will that latency be paid again?
Even if we move the data from cloud storage to the local storage of a compute node to avoid this latency, there is no guarantee that the next user’s query will be serviced by the same compute node, as illustrated below. Because of this lack of low-level control, it is difficult to fully exploit the compute node capacity and maximize data locality.
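The sketch below shows the kind of node-local caching we would like to rely on: datasets fetched from cloud storage are memoized on the node’s local disk so that repeated queries avoid the roughly 15-millisecond per-request latency. The fetch_from_cloud_storage() function is a placeholder for the real blob-storage call, and the benefit only materializes if the follow-up query is routed to the same node, which Azure does not guarantee.

```python
import os
import pickle

# Sketch of a per-node local cache for datasets pulled from cloud storage.
CACHE_DIR = "/tmp/seis_cache"
os.makedirs(CACHE_DIR, exist_ok=True)

def fetch_from_cloud_storage(dataset_id):
    # Placeholder: in the real application this reads a blob from Azure
    # storage and costs roughly 15 ms of latency per request.
    return {"dataset": dataset_id, "samples": list(range(1000))}

def get_dataset(dataset_id):
    """Return a dataset, reusing the node-local copy when one exists."""
    path = os.path.join(CACHE_DIR, f"{dataset_id}.pkl")
    if os.path.exists(path):                      # cache hit: no storage latency
        with open(path, "rb") as f:
            return pickle.load(f)
    data = fetch_from_cloud_storage(dataset_id)   # cache miss: pay the latency
    with open(path, "wb") as f:
        pickle.dump(data, f)
    return data

# Two identical queries: the second is served from the local cache,
# but only if it happens to be routed to the same compute node.
get_dataset("velocity-model-block-7")
get_dataset("velocity-model-block-7")
```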
Multi-tenancy - We are not sure to what extent the compute nodes are dedicated to our application. Multi-tenancy, one of the defining features of the cloud, is an issue for the application: it means sharing compute nodes among multiple applications. As the number of applications running on the same compute node increases, the bandwidth available to each application shrinks, which may lead to performance degradation over time.
Reliability and fault-tolerance - Reliability is another concern. On Windows Azure, it is still unknown how long it takes to replace a failed compute node with a new one. It is also unclear how hardware failures impact the performance of the application. These impacts need to be studied and taken into consideration when developing the load-management system. One disadvantage of a PaaS architecture is that testing the application for fault tolerance and compute node failures is quite difficult.
Debugging and profiling - Although Windows Azure programs can be developed and debugged locally, Azure’s architecture does not support remote debugging. This can be a problem when developing and deploying complex applications on Azure. Parallel and remote debugging has always been difficult for HPC programs, and it becomes a new issue in cloud computing. Cloud computing vendors should provide efficient error-detection tools, including tracing and replaying. As on traditional HPC platforms, lightweight profiling tools would be very useful for analyzing and tuning performance, but they are still missing from most current cloud computing platforms.
Thus far, we have tried to pinpoint some of the challenges ahead for HPC in the cloud, specifically on Windows Azure. Windows Azure provides a black-box architecture that lacks the flexibility needed to optimize performance; HPC users need some low-level controls to improve the performance of their applications. Though these challenges are based on our experience with Azure, they also apply to cloud computing platforms in general.
About the Authors
Vedaprakash Subramanian is a Master’s student in the Department of Computer Science at the University of Wyoming. He received his Bachelor’s degree in Electrical and Electronics from PSG College of Technology, India, in 2009. His research focuses on utilizing cloud platforms for HPC applications, HPC program reliability, and performance optimization. He is currently working on porting computational seismology applications to cloud platforms such as Azure and Amazon EC2.
Hongyi Ma is a PhD student in the Department of Computer Science at the University of Wyoming. He received his Bachelor’s degree in Computer Science from the University of Science and Technology of China, Hefei, China, in 2010. His research interests include HPC and programming error detection.
Liqiang Wang is currently an Assistant Professor in the Department of Computer Science at the University of Wyoming. He received the BS degree in mathematics from Hebei Normal University, China, in 1995, the MS degree in computer science from Sichuan University, China, in 1998, and the PhD degree in computer science from Stony Brook University in 2006. His research interest is the design and analysis of parallel computing systems including cloud computing.
En-Jui Lee is currently a PhD student in the Department of Geology and Geophysics at the University of Wyoming. He received the BS degree in Earth Sciences from National Cheng Kung University, Taiwan, in 2003, and the MS degree in Geological Sciences from SUNY Binghamton University in 2009. His research interest is in computational seismology.
Po Chen is currently an Assistant Professor in the Department of Geology and Geophysics at the University of Wyoming. He received the BS degree in Geophysics from Peking University, China, in 2000, and the PhD degree in Geological Sciences from University of Southern California in 2005. His research interest is in computational seismology.