March 29, 2011
When one thinks of thousands of people donating their spare capacity to create a distributed supercomputer, a project like SETI@Home or other such grid-inspired movements usually comes to mind.
The same concept is taking shape in the commercial context as more companies find ways to sell unused compute capacity to users hungry for cycles but short on cash.
Many have predicted a proliferation of such services in the coming years, including Krishnan Subramanian, who sees 2011 as their breakout year. In Subramanian's view, “This takeoff can be attributed to many traditional webhosts wanting to gain foothold in a cloud based world and with the emergence of a federated cloud ecosystem, smaller cloud players will get a channel to monetize their unused capacity. Users will also gain more confidence in using this model to achieve cost savings. Expect to see government agencies coming out with a similar model for their own consumption.”
Enomaly, a Canadian virtualization and cloud software company, has been at the forefront of this new wave of resource sharing. The company is currently testing the market for users who want to buy and sell excess computing resources, allowing owners of spare capacity to put idle machines to work and giving those who need the power an inexpensive way to get it.
While the “clearinghouse” for cycles, called SpotCloud, is currently in public beta, Reuven Cohen, founder and CEO of Enomaly (the parent company for this service), claims that it is gaining some serious traction among both providers and users, some of whom are running data-intensive workloads.
Cohen claims that HPC users are far from the majority during the current phase of SpotCloud’s beta, but that rendering and transcoding (borderline HPC operations, depending on how you look at them) are some of the prime use cases for such a service. He notes significant traction there as well as, more predictably, with load testing and general testing and development. For both of these user types, Cohen claims, the costs would otherwise be prohibitively high using Amazon, hence the success in these two arenas.
During his interview with HPC in the Cloud, Cohen discussed at length how HPC users might make use of the SpotCloud resource despite a number of missing elements, the most obvious of which is the lack of visibility into the hardware one is getting and a sense of the performance, or at least an estimate of what one can expect. He noted that while applications would be written the same way as for any public cloud (which solves part of the hardware opacity issue), the matter of performance and predictability is still being addressed.
Cohen stated that in the very near future there will be some announcements surrounding independent audit benchmarks for providers. This is especially important as more resources are made available to users, some of whom will focus on price, while others weigh latency, bandwidth and related performance metrics.
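To get a sense of why audited numbers would matter here, consider how a buyer might weigh anonymous providers once such benchmarks exist. The Python sketch below is purely illustrative; the field names, weights and normalization are invented for this example, not drawn from SpotCloud:

```python
# Hypothetical provider scoring, assuming audited benchmark numbers exist.
# Field names, weights and normalization are illustrative inventions.

def score(offer: dict, w_price: float = 0.5, w_latency: float = 0.3,
          w_bandwidth: float = 0.2) -> float:
    """Composite score (lower is better): price in $/hr, latency in ms
    (normalized to 100 ms), bandwidth in Gbps (credited back)."""
    return (w_price * offer["price_per_hour"]
            + w_latency * offer["latency_ms"] / 100.0
            - w_bandwidth * offer["bandwidth_gbps"])

providers = [
    {"name": "A", "price_per_hour": 0.020, "latency_ms": 40,  "bandwidth_gbps": 1.0},
    {"name": "B", "price_per_hour": 0.010, "latency_ms": 120, "bandwidth_gbps": 0.5},
]
print(min(providers, key=score)["name"])  # picks the better overall trade-off
```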
Providing the platform, on both a technological and a logistical basis, isn’t much of a stretch for Enomaly’s founder, who gathered a number of valuable lessons along the road, particularly in terms of catering to the needs of users and providers alike.
Enomaly was one of the trailblazers in the early days of cloud computing, being among the first smaller outfits to provide Infrastructure as a Service (IaaS). As its customer base expanded, Cohen started hearing a number of specific requests: from end users, the desire for more fine-grained control over where their data is processed; from the provider side, the constant issue of increasing resource utilization.
Cohen claims that on this platform, where those with spare capacity set their own pricing and conditions (i.e., when and for how long they will be “open” to running SpotCloud user jobs), the costs are significantly lower than on a public cloud resource like Amazon. He estimates the cost of his service at a very small fraction of what Amazon users would pay, or as he put it, prices that break down to getting 100 machines through SpotCloud for every one machine you could get through Amazon. To make that ratio concrete (with illustrative figures, not quoted rates): if a comparable Amazon instance ran ten cents an hour, the equivalent SpotCloud capacity would work out to roughly a tenth of a cent per machine-hour.
While the pricing may not look like Amazon’s, the process of securing the resources quickly does, to some extent. Those with extra capacity to sell must install software to join the ranks of resource providers, but buyers need only fill out the requisite information and find a suitable match for their needs based on cost, location, and so on. In many ways, the process of selecting resources isn’t much different from Amazon’s, even if the names and “instance types” differ.
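In rough terms, that selection step boils down to filtering a list of provider offers by location and capacity, then sorting by price. The Python sketch below illustrates the idea; the Offer fields and the match_offers function are hypothetical stand-ins, not SpotCloud’s actual API:

```python
# Hypothetical sketch of marketplace matching; not SpotCloud's real data model.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Offer:
    provider_id: str        # providers stay anonymous in SpotCloud's model
    city: str
    country: str
    price_per_hour: float   # set by the provider, not by the marketplace
    cores: int

def match_offers(offers: List[Offer], country: Optional[str] = None,
                 city: Optional[str] = None, min_cores: int = 1,
                 max_price: Optional[float] = None) -> List[Offer]:
    """Filter offers by the buyer's constraints; return them cheapest-first."""
    hits = [o for o in offers
            if (country is None or o.country == country)
            and (city is None or o.city == city)
            and o.cores >= min_cores
            and (max_price is None or o.price_per_hour <= max_price)]
    return sorted(hits, key=lambda o: o.price_per_hour)

# A buyer with a German data-residency requirement:
offers = [Offer("p1", "Frankfurt", "DE", 0.012, 4),
          Offer("p2", "Toronto",   "CA", 0.008, 4),
          Offer("p3", "Berlin",    "DE", 0.015, 8)]
for o in match_offers(offers, country="DE"):
    print(f"{o.city}: {o.cores} cores at ${o.price_per_hour:.3f}/hr")
```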
The big differentiator here is certainly pricing. Cohen was careful to note that pricing varies since it is based on the provider’s requirements. For instance, the price for a given resource might be far lower at night than in the middle of the afternoon, since it tracks utilization in the hidden datacenter feeding users… Yes, hidden.
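One plausible way to picture such utilization-driven pricing is a simple function that scales a provider’s peak rate by how busy the facility is. The formula and figures below are assumptions for illustration only; SpotCloud providers set prices however they choose:

```python
# Illustrative only: one way a provider might derive a time-varying price
# from datacenter utilization. The formula and numbers are assumptions.

def spot_price(peak_price: float, utilization: float, floor: float = 0.1) -> float:
    """Scale the hourly price with utilization: cheap when the facility is
    idle, approaching the peak price as spare capacity dries up."""
    utilization = min(max(utilization, 0.0), 1.0)  # clamp to [0, 1]
    return peak_price * max(utilization, floor)

peak = 0.10  # hypothetical peak price per machine-hour, in dollars
print(spot_price(peak, utilization=0.90))  # busy afternoon:  $0.09/hr
print(spot_price(peak, utilization=0.15))  # quiet overnight: $0.015/hr
```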
For now there’s no way for users to see where their resources are coming from, a measure meant to protect providers, but Cohen plans on making disclosure an option for providers down the road, since some of the larger ones (think Terremark, AT&T and the like) would like the added benefit of brand recognition.
This privacy clause is something of a double-edged sword, of course. Users, especially those with HPC-flavored workloads, want to know more than just the environment; they want to know where and who is handling the workload. When renting infrastructure from a big provider like Amazon, GoGrid or Rackspace (to name a few), the user might not know exactly where the data resides, but they do know who is handling it, which provides a layer of accountability under an SLA.
While these supplier and consumer demands might seem disconnected at first sight, Cohen saw an opportunity that hadn’t been seized in a meaningful way to date: building on these two desires meant he could create a marketplace where datacenters could keep humming and still turn a profit (however minimal), and users could instantly tap remote resources at a cost lower than other options, with minute control over where their data went.
The benefits for those with excess capacity to sell are certainly clear; since poor resource utilization is among the top gripes of datacenter operators and others with a case of server sprawl, this might offer a way to make some use of hardware that would otherwise sit idle.
The benefits are also especially clear for a certain class of users: those who need fine-tuned control over where their data resides. Since the marketplace pulls in resources from around the world and allows users to sort by city or region, worries about regulatory or compliance issues tied to geography are largely addressed.
Coming from the world of IaaS has given SpotCloud a certain advantage over startups that might try to create their own marketplaces. Enomaly got its start working with providers in a number of countries that were among the first in their regions to offer cloud-based services, and it wanted to find a way to help them maximize utilization, which meant there was already worldwide distribution of Enomaly’s existing platform. As Cohen pulled more server farms into the SpotCloud/Enomaly fold, he was better able to present a rich geographical selection to end users. At this point, because of that early “foothold,” users can select not only the country or region but also the city their workloads go to.
This is something that only a “clearinghouse” for spare capacity could do, since it wouldn’t make economic sense for any large-scale provider to build out in several big cities just to offer users local resources. This could be one of the most compelling features for users hesitant about many cloud providers’ inability to tell them what country (let alone region) their data resides in.
In some ways SpotCloud may not be mature enough in its beta phase to suit the needs of some high-performance computing workloads, but it will be interesting to see, once the opacity issue is solved, how the idea takes off. Now that the cloud is gaining wider acceptance and generating a little less suspicion all around, perhaps the time is right to look at different (and cheaper) ways of delivering remote resources to users desperate for affordable cycles…. Something tells me they’re out there.