November 06, 2012
Independent cloud computing analyst firm Cloud Spectator today released a report comparing the performance and stability of Amazon EC2 cloud servers with those of nine other IaaS providers.
Cloud Spectator's CloudSpecs Performance System uses industry-standard open-source benchmarks to measure virtual server performance. The Cloud Performance and Stability Report examined four aspects of the EC2 cloud: CPU, RAM, disk, and internal network (VPN connections).
Over a consecutive 30-day period, the CloudSpecs System performed a series of 19 tests on servers located within the Amazon EC2 East (1a) datacenter: 3 server system tests, 5 CPU tests, 2 disk tests, 2 RAM tests, and 7 VPN network tests.
Test Results (compared with industry average)

| Component | Performance | Stability |
|---|---|---|
| CPU | 2% above average | 15% more stable than average |
| RAM | 36% above average | 10% more stable than average |
| Disk | 68% below average | 91% more stable than average |
| VPN connection | 49% below average | 5% less stable than average |
Details from Cloud Spectator report
The performance of Amazon's virtual CPUs was on par with the IaaS industry standard, while RAM performance came in above average. VPN and disk performance, on the other hand, were below average.
As for stability, disk was the standout performer, with virtual CPU and RAM coming in slightly above average. VPN was again lacking.
Overall, CPU and RAM were winners with above average performance and decent stability. On the flip side is Amazon's VPN; performance and stability were below average. The report authors say that this could help "explain performance issues with more complex, multi-server applications."
"There has been a lot of talk among the industry about Amazon EC2 and its stable performance," noted Kenny Li, founder and CEO at Cloud Spectator. "Our analysis validates that indeed there are inconsistencies in service performance. But more than that, we provide detailed data that shows the highs and lows of the service. This level of research will help not only Amazon, but companies who are evaluating Amazon. Inconsistency doesn't have to be bad, it depends on what you are using the service to accomplish."
Cloud Spectator signed up for an Amazon account anonymously, following the same process as a typical AWS EC2 customer. The CloudSpecs system automatically runs tests four times a day, 365 days a year; this particular report was based on a consecutive 30-day period from August 18, 2012, to September 16, 2012.
All tests were run on Amazon's EC2 Extra Large server, with the VPN connection test also using the EC2 Medium instance as a dummy server. Server benchmarks include the byte-Unixbench (http://code.google.com/p/byte-unixbench/) and Phoronix Test Suite (http://www.phoronix-test-suite.com/). The complete list of benchmarks and the significance of each is detailed in the full report.
To compile the industry averages, Cloud Spectator measured the performance of nine cloud IaaS providers over 30 days, using the same benchmark tests. Stability metrics were obtained by "calculating the standard deviation of the performance results of the CloudSpecs tests over a period of 30 days, also compared to an industry average."
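The report's two headline metrics can be restated concretely: performance is a provider's mean benchmark score relative to the industry average, and stability is the standard deviation of that provider's scores over the 30-day window, again relative to the industry figure. The following sketch illustrates that calculation; all scores and baseline figures here are invented for illustration and are not Cloud Spectator's actual data or tooling.

```python
import statistics

# Hypothetical daily benchmark scores (arbitrary units) for one provider,
# plus an assumed industry-average baseline. Values are illustrative only.
provider_scores = [102, 98, 101, 99, 100, 97, 103]
industry_avg_score = 95.0
industry_avg_stddev = 4.0

# Performance: mean score relative to the industry average, as a percentage.
perf_vs_avg = (statistics.mean(provider_scores) / industry_avg_score - 1) * 100

# Stability: standard deviation of the provider's results over the period,
# compared with the industry-average deviation (lower deviation = more stable).
provider_stddev = statistics.stdev(provider_scores)
stability_vs_avg = (1 - provider_stddev / industry_avg_stddev) * 100

print(f"Performance: {perf_vs_avg:+.1f}% vs. industry average")
print(f"Stability: {stability_vs_avg:+.1f}% more stable than average")
```

Under this framing, a provider can score below average on raw performance yet well above average on stability (as EC2's disk did), because the two metrics measure the mean and the spread of the same benchmark series independently.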
In a separate report, also released today, Cloud Spectator compares the pricing structure of various IaaS offerings. The analysis is based on data from 20 cloud providers, including Amazon, Microsoft Azure, SoftLayer, PEER1 Zunicore, Rackspace, CloudSigma, GoGrid and others.
These reports come on the heels of another AWS outage. Although Amazon held up well against Hurricane Sandy, the service suffered downtime the week before; GitHub, Pinterest, Reddit, and Netflix were among the high-profile companies affected.