March 17, 2011
Although there are a number of sites providing benchmarks for performance, pricing, and any number of other variables, sometimes nothing is more succinct and helpful than pure user experience. While the needs of users are highly dependent on their intent, there are still a few universal elements that should be addressed during comparisons.
While there’s nothing wrong with the summaries of comparison points found here (with Rackspace thrown into the mix), or with those at CloudHarmony, one of the best sources for helping users grasp how providers stack up, getting the story straight from use cases does add a layer of objectivity.
This week one of the richest user-experience comparisons of GoGrid and Amazon emerged from Postgres Online Journal, the blog of a small company with big computational needs that focuses on custom database and web application development as well as prototype hosting. While their own requirements might not look much like those of a user with HPC-type scenarios, they touch on every important item that should be on anyone’s checklist for cloud providers.
The group takes a close look (and charts observations for easier comparison) at the two providers with particular emphasis on the following points: number of public IPs, extent of support, ease of creating and handling images, instance configuration and pricing for Windows versus Linux, server shutdown policies, storage, build wizards, and the presence of trial plans.
The authors note that there is no one-size-fits-all nugget of advice, since so much depends on any number of factors. Nonetheless, while making it completely clear that there was no monetary or other incentive behind the statement, they do note that for their particular needs GoGrid was the winner for their projects most of the time.
According to this group of users, who spent over a year working with both IaaS providers, GoGrid was most often the better fit because they knew they needed a Windows server running all the time (the authors explain in depth why this matters and how the differentiation pans out) and because they preferred “the live email, phone, and personalized support they [GoGrid] offer free of charge.”
Another reason the users cited was that the group needed multiple public IPs per server, since they ran multiple SSL sites per server. “GoGrid starts you off with 16 public IPs you can distribute any way you like whereas Amazon is stingy with IPs and you basically only get one public per server unless we misunderstood.”
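The multiple-IP requirement stems from how HTTPS worked in this era: before Server Name Indication (SNI) was widely supported by browsers, each SSL certificate had to be bound to its own IP address, so two secure sites on one server meant two public IPs. As a hedged illustration (the hostnames, addresses, and certificate paths below are made up, not from the Postgres Online Journal setup), an nginx configuration serving two SSL sites from one box would look roughly like:

```nginx
# Illustrative only: two SSL virtual hosts, each bound to its own
# public IP, as required before SNI support was widespread.
server {
    listen 203.0.113.10:443 ssl;
    server_name shop.example.com;
    ssl_certificate     /etc/ssl/shop.crt;
    ssl_certificate_key /etc/ssl/shop.key;
}

server {
    listen 203.0.113.11:443 ssl;
    server_name portal.example.net;
    ssl_certificate     /etc/ssl/portal.crt;
    ssl_certificate_key /etc/ssl/portal.key;
}
```

With only one public IP per server, the second `listen` line has nowhere to bind, which is why a stingy IP allocation directly caps the number of SSL sites a server can host.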
While GoGrid suited their needs most of the time, they sometimes preferred to experiment with different speeds and OS variations. In such cases Amazon EC2 was the better option, since it is possible to simply turn off a server to avoid racking up more charges. With GoGrid, by contrast, users are forced to delete the server rather than just shutting it down.
Pricing is one of the trickiest issues for cloud users to understand and use to perform simple price comparisons. As one of the Postgres users noted, “Sometimes I think all cloud providers—and it’s probably true of most industries—are involved in a conspiracy scheme to confuse you with their pricing to get the most money out of you and ensure you can never exactly compare their pricing to any other cloud provider’s pricing.” This is in part because Amazon has its own extensive terminology and pricing methods: similar things (like instances, for instance) are approximately the same elsewhere, but they’re called something different and imply slightly different things.
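Much of the confusion the users describe comes from providers mixing hourly rates, flat fees, and prepaid bundles that resist direct comparison. A minimal sketch of the apples-to-apples approach, normalizing each plan to an estimated monthly bill (the rates below are illustrative placeholders, not actual GoGrid or Amazon prices), might look like:

```python
# Toy normalizer for comparing cloud pricing models.
# All rates are illustrative placeholders, NOT real provider prices.

def monthly_cost(hourly_rate, hours_running, flat_monthly_fee=0.0):
    """Estimate one month's bill: hourly usage charges plus any flat fee."""
    return hourly_rate * hours_running + flat_monthly_fee

# An always-on Windows server (the Postgres group's main scenario).
always_on = 24 * 30  # 720 hours in a month

# Provider A bills purely per hour; Provider B pairs a lower hourly
# rate with a flat monthly fee. Normalizing reveals the crossover.
a = monthly_cost(hourly_rate=0.12, hours_running=always_on)
b = monthly_cost(hourly_rate=0.08, hours_running=always_on,
                 flat_monthly_fee=40.0)

print(f"Provider A: ${a:.2f}")  # cheaper here despite the higher rate
print(f"Provider B: ${b:.2f}")
```

The point of the sketch is that neither a low hourly rate nor a low flat fee tells the whole story; only the normalized monthly number, computed for your actual usage pattern, is comparable across providers.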
To be fair, Amazon and GoGrid have both tried to make their pricing easier to understand by providing calculators (follow the links on their names to take them for a spin).
Again, this in-depth comparison is a must-read for anyone evaluating cloud providers in general, as approximately the same points of consideration apply.
Posted by Nicole Hemsoth - March 17, 2011 @ 11:58 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.