October 06, 2011
CloudSleuth, a resource from application performance vendor Compuware, has released the results of a year-long study of response times among a number of major cloud service providers.
The independent tests were carried out using 30 testing nodes around the world to monitor performance once every fifteen minutes. To save you the calculator-pull, that is 515,000 tests overall for the year from August 2010 to July 2011. Each of these tests involved loading a simulated retail shopping site of two pages: the first loading 40 item descriptions and small JPEG images, the second pulling open a sample image of 1.75 MB.
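The rankings that follow boil each provider down to a single average response time across those hundreds of thousands of page loads. A minimal sketch of that kind of summary, using the averages quoted later in this article (the per-node test structure is hypothetical; this is not CloudSleuth's actual pipeline):

```python
# Average response times (in milliseconds) as quoted from the study's
# results; ranking providers from fastest to slowest, as the study does.
avg_response_ms = {
    "Azure (Chicago)": 6072,
    "Google App Engine": 6450,
    "Rackspace (Texas)": 7190,
    "Amazon EC2 (Virginia)": 7200,
    "Amazon EC2 (California)": 8110,
}

# Sort by average time, lowest (fastest) first.
ranked = sorted(avg_response_ms.items(), key=lambda kv: kv[1])

for provider, ms in ranked:
    print(f"{provider}: {ms / 1000:.2f} s")
```

Note how close the Rackspace and Amazon Virginia figures are; a 10 ms gap over a year of samples is well within the noise of this kind of measurement.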
The winner, in a rather hands-down sort of way, was Microsoft Azure, which beat out competitors Amazon EC2, Google’s App Engine, Rackspace and a handful of others.
As ArsTechnica noted in a recent analysis of the test:
“The Windows Azure data center in Chicago completed the test in an average time of 6,072 milliseconds (a little over six seconds), compared to 6.45 seconds for second-place Google App Engine. Both improved steadily throughout the year, with Azure dipping to 5.52 seconds in July and Google to 5.97 seconds. Also scoring below 7 seconds for the whole year were the Virginia locations of OpSource and GoGrid along with BlueLock in Indiana. Rackspace in Texas posted an average time of 7.19 seconds, while Amazon EC2 in Virginia posted a nearly identical 7.20. Amazon’s California location scored 8.11 seconds on average.”
As Jon Brodkin stated, however, the tests have a number of potential weak points. For instance, as he wrote today, “Although Compuware tries to make the tests expansive by spreading nodes throughout the world, the results are still highly affected by location. For example, both Azure and Amazon posted poor scores in their Singapore data centers (16.10 seconds for Azure and 20.96 seconds for Amazon, the worst time in the survey) but the discrepancies between North America and Asia are due in large part to limitations in the Compuware testing network.”
Brodkin went on to quote CloudSleuth product manager Lloyd Bloom: “Within Asia, the performance is generally abysmal by North American standards,” Bloom said, but the measurements are skewed because “most of our measurement points are not in Asia.”
While the cloud needs of a retail shopping site and an HPC application are in many cases about as closely tied as night and day (especially given the relatively small compute and storage demands involved here), the study nonetheless offers a general gauge of speed for projects like testing and development, an area that many HPC users attempt to outsource to free up their clusters for bigger crunching, which depends far more heavily on high-performance, low-latency networks.
Full story at ArsTechnica
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to handle peak computational demands that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
Financial institutions are the private-sector industry least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services – and doing so at great cost.