August 12, 2010
Cloud computing has taken internet companies by storm, allowing them to scale infrastructure quickly and cheaply with demand. HPC users, however, have been more wary of moving their computation into the cloud. Large scientific applications have different computational needs than those of web companies, and the concerns are well known. For example, while many large web companies serve very large request loads, the computation typically requires no large-scale coordination between components; the scaling challenge lies instead in sifting through large amounts of data.
In contrast, many scientific applications require significant coordination among large numbers of nodes. Massive effort is spent in the HPC arena to hide the latency of this coordination using expensive low-latency networks and fine-tuned communication libraries. Such efforts have not yet carried over to the commercial cloud, which typically offers systems with varying amounts of installed memory but rarely a choice of interconnect quality.
After experimenting with HPC applications in a commercial cloud environment for several years, we have encountered a major concern for HPC in the cloud: multitenancy, the practice of placing several customers on the same physical node. While virtualization itself has repeatedly been shown to have little effect on in-core HPC computation, network performance can suffer when traffic passes through the hypervisor. We have noticed a further complication: published virtualization experiments often assume no competition for resources from other tenants.
Commercial cloud providers do not yet offer any guarantees against sharing nodes. Indeed, multitenancy is a cornerstone of the cloud provider's revenue model. It can also have serious consequences for HPC performance: we have seen performance degrade over time as resources become oversubscribed, even on simple in-core computations.
Figure 1 above shows the execution time of a repeated in-core matrix-matrix multiply (DGEMM, Level 3 BLAS) using all 8 cores of a single cloud node over 6 hours. We use DGEMM because it is the building block for the rest of the BLAS library, the most widely used API for dense linear algebra. Many HPC applications have some form of DGEMM at the core of their computation, so the performance of this simple operation is indicative of the performance of those applications.
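For readers who want to reproduce this kind of measurement, the following is a minimal sketch of such a benchmark loop in C. The matrix size, repetition count, and choice of CBLAS library are illustrative assumptions, not the exact configuration behind Figure 1 (compile with, e.g., gcc bench.c -lopenblas).

/* Repeatedly time an in-core DGEMM on one node. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cblas.h>

#define N 4096      /* hypothetical matrix dimension */
#define REPS 100    /* hypothetical number of repeated multiplies */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    double *A = malloc(sizeof(double) * N * N);
    double *B = malloc(sizeof(double) * N * N);
    double *C = malloc(sizeof(double) * N * N);
    for (long i = 0; i < (long)N * N; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    for (int r = 0; r < REPS; r++) {
        double t0 = now();
        /* C = 1.0 * A * B + 0.0 * C  (Level 3 BLAS DGEMM) */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    N, N, N, 1.0, A, N, B, N, 0.0, C, N);
        printf("rep %3d: %.3f s\n", r, now() - t0);
    }
    free(A); free(B); free(C);
    return 0;
}

On a dedicated node the per-repetition times are nearly constant; on a multitenant node it is the spread of these times over hours that tells the story.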
The fastest execution time of the DGEMM over the 6 hours (which occurs between 20:00 and 20:30) is similar to that on a typical HPC cluster node. However, the average execution time on our cloud node is more than 8 times worse, with a standard deviation of 33%. The hardware itself is good, as the best execution time shows, but competition among tenants diminishes average performance and widens the range of possible outcomes. Thus, the expected performance of a simple in-memory matrix-matrix multiply on a multitenant cloud node is poor and fluctuates significantly. Even without touching the network, cloud nodes cannot be expected to perform like a typical HPC cluster node because of the competition from other tenants.
Since virtualization itself has been shown to have little effect on in-core computations, competition for resources is the likely culprit. But competition for which resources? Our DGEMM experiments were configured to fit all data in memory so that I/O would not affect the results. Although the data in the figures did not entirely fit within the last-level cache of the architecture, we have performed other experiments with smaller data sets that show similar behavior. We conjecture that competition for space in the shared last-level cache plays a significant role, although we have not ruled out all other possible causes.
To explore the effects of multitenancy, we reduced the contention on the node by underutilizing the available cores. In Fig 2 below we show the average and minimum execution times of the same repeated in-core DGEMM using different numbers of cores. While the minimum tracks the expected execution time on an HPC cluster node, the average remains poor. The best performance is achieved by using only a quarter of the node, that is, 2 of the 8 cores. In contrast to the benefits of multicore parallelism on a dedicated cluster, our shared node suffers when we attempt too much parallelism. Since using 2 threads still helps, we hazard a guess that at least 2-3 other tenants share the node with us.
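The experiment behind Fig 2 amounts to rerunning the benchmark above with different thread counts. Below is a hedged sketch that assumes an OpenMP-threaded BLAS, so that omp_set_num_threads() controls how many cores DGEMM uses; other BLAS builds expose the same knob through an environment variable such as OMP_NUM_THREADS. Sizes and repetition counts are again illustrative (compile with, e.g., gcc -fopenmp sweep.c -lopenblas).

/* Time the same DGEMM while underutilizing the node: 1, 2, 4, then 8 cores. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#include <cblas.h>

#define N 4096
#define REPS 10

int main(void) {
    double *A = malloc(sizeof(double) * N * N);
    double *B = malloc(sizeof(double) * N * N);
    double *C = malloc(sizeof(double) * N * N);
    for (long i = 0; i < (long)N * N; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    for (int threads = 1; threads <= 8; threads *= 2) {
        omp_set_num_threads(threads);   /* assumes an OpenMP-threaded BLAS */
        double best = 1e30, sum = 0.0;
        for (int r = 0; r < REPS; r++) {
            double t0 = omp_get_wtime();
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        N, N, N, 1.0, A, N, B, N, 0.0, C, N);
            double dt = omp_get_wtime() - t0;
            sum += dt;
            if (dt < best) best = dt;
        }
        printf("%d threads: min %.3f s, avg %.3f s\n", threads, best, sum / REPS);
    }
    free(A); free(B); free(C);
    return 0;
}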
Of course, HPC applications don't run on a single node, and the effects of sharing nodes extend to large jobs. We ran experiments with the High-Performance LINPACK benchmark (HPL) to see the effects across multi-node jobs. HPL computes the solution of a random, dense system of linear equations via LU factorization with partial pivoting and two triangular solves. HPL also provides a favorable ratio of computation to data movement, performing on the order of n^3 operations on n^2 data, which minimizes the disadvantages of a slow network. We wanted to see whether the run-of-the-mill network performance of a cloud provider, which falls far short of the high-performance interconnects of a supercomputer, overwhelmed the performance effects of node sharing. If so, cloud providers could focus on improving network performance and isolating different customer types to improve the HPC customer's experience. If not, we wanted to identify to what degree performance could be improved by underutilizing the cores on all of the nodes, just as we did with a single node.
This is a simple approach that customers can implement themselves; it requires no infrastructure changes from the cloud provider. Ironically, although you use less of each allocated node, you can still save money: if the computation completes faster, you pay for fewer node-hours even while leaving cores idle.
Our experience is that the benefits of underutilizing nodes do extend to cluster jobs with network coordination. The execution time of HPL can be reduced by two-thirds by using only half of the cores on each node, even on a 3- or 4-node cluster. Because the effects of slow network performance grow with the size of the cluster, we would expect even bigger gains on larger clusters. Conversely, we also conjecture that a faster network would increase the benefits of underutilization by reducing the delay due to synchronization between nodes. Underutilizing the nodes provides shorter and much more stable execution times with significantly less fluctuation, which should shorten synchronization delays even apart from network effects.
Although HPC customers have always been interested primarily in raw performance, HPC in the cloud introduces the significant dimension of cost. In Fig 3 above we show a scatter plot of performance versus cost for different underutilization rates and cluster sizes for a particular instance type on Amazon's EC2 (costs as of March 2010). Every point in the figure represents the same computation; the lines connect clusters of the same node count at different levels of underutilization. The bottom-left corner is both the fastest and the cheapest computation. Comparing lines shows the tradeoff between adding more cores per node (moving down a line) and adding extra nodes (jumping to a different line). Although using 4 nodes isn't strictly necessary, because the computation fits in the memory of just 3 nodes, the graph shows that adding the extra node helps the computation finish earlier at the same cost!
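To see why the cost can stay flat as nodes are added, consider a back-of-the-envelope example with illustrative numbers rather than our measured EC2 figures: at a price of r dollars per node-hour, a 3-node run taking t hours costs 3 x t x r. If a fourth node cuts the execution time to (3/4) x t, the bill is 4 x (3/4) x t x r = 3 x t x r, exactly the same, with the answer arriving 25% sooner. Whenever the speedup from an extra node is at least proportional, the added node is free in cost terms.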
Although performance and cost are tightly linked in the cloud, performance is affected by competition from unknown tenants sharing the nodes, so both performance and cost are difficult to estimate and change dynamically. The previous figures show significant fluctuation in execution time, with corresponding fluctuations in the cost of the computation. Average execution times and costs can serve as estimates, but the large standard deviation makes precise estimates difficult. We believe cloud providers must deliver more stable performance to HPC customers to allow for better cost modeling.
HPC has demanding computational needs, and in our experience current commercial offerings don't quite meet them. However, there are paths forward. Amazon has announced the immediate availability of Cluster Compute Instances designed for the HPC market. Our experience with these clusters is still limited, but they currently perform similarly to normal HPC clusters without supercomputer interconnects. It remains to be seen whether this performance will hold once Cluster Compute Instances become oversubscribed. In that multitenant case, you might even be better off sharing generic instances with unknown tenants than sharing HPC instances with other HPC users.
In addition to new instance types, we believe cloud providers targeting HPC users should offer QoS agreements that specify stable expected performance over time with small variability. Although exclusive access to nodes is unlikely in a commercial offering, online performance measurements could be used to enforce such contracts. This approach would implicitly require cloud providers to limit the degree of multitenancy, and possibly to develop new techniques for isolating the performance effects of VMs sharing a node. Stability matters because many HPC applications coordinate heavily across a large cluster, so one slow node delays them all.
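As one illustration of what online enforcement might look like, here is a minimal sketch of a performance probe. The threshold, probe size, and interval are hypothetical and tied to no existing provider API: the probe periodically times a small DGEMM and flags the node whenever throughput falls below a contracted floor.

/* Hypothetical QoS probe: flag the node when DGEMM throughput drops below
 * a contracted floor. Link against a CBLAS implementation (e.g. -lopenblas). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <time.h>
#include <cblas.h>

#define PROBE_N 1024              /* small probe matrix: cheap to run */
#define CONTRACT_GFLOPS 10.0      /* hypothetical contracted floor */

static double now(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void) {
    size_t n = (size_t)PROBE_N * PROBE_N;
    double *A = malloc(sizeof(double) * n);
    double *B = malloc(sizeof(double) * n);
    double *C = malloc(sizeof(double) * n);
    for (size_t i = 0; i < n; i++) { A[i] = B[i] = 1.0; C[i] = 0.0; }

    for (;;) {
        double t0 = now();
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    PROBE_N, PROBE_N, PROBE_N,
                    1.0, A, PROBE_N, B, PROBE_N, 0.0, C, PROBE_N);
        double secs = now() - t0;
        /* DGEMM performs ~2*N^3 floating-point operations */
        double gflops = 2.0 * PROBE_N * PROBE_N * (double)PROBE_N / secs * 1e-9;
        if (gflops < CONTRACT_GFLOPS)
            fprintf(stderr, "QoS violation: %.1f GF/s < %.1f GF/s contracted\n",
                    gflops, CONTRACT_GFLOPS);
        sleep(60);  /* probe once a minute */
    }
}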
About the Authors
Jeff Napper is a Systems Engineer at Hyves, a Dutch social networking website, and a software consultant on issues related to distributed systems. The work in this article was done while he was at VU University Amsterdam, working on the XtreemOS grid operating system project sponsored by the European Union.
Prof. Paolo Bientinesi is a Junior Professor in Computer Science at RWTH Aachen University, Germany. He leads a team at AICES, conducting research in the areas of Numerical Linear Algebra, High-Performance Computing, and Automation. In 2009 he received the Karl Arnold Prize from the North Rhine-Westphalian Academy of Sciences and Humanities for outstanding research work of a young scientist.
Roman is currently a Fellow at the Aachen Institute for Advanced Study in Computational Engineering Science (AICES) at RWTH Aachen University, Germany. He was previously a Software Engineer at SoftServe, Inc. and an Assistant Researcher at the Ivan Franko National University of Lviv, Ukraine.