September 24, 2007
Although it has been criticized by many analysts and experts who question its overall business model -- often wondering whether mainstream users would ever be comfortable running mission-critical jobs or services externally -- utility computing has not gone away.
In fact, if one were to ask around, one would very likely hear about a wide variety of utility solutions being offered by an equally varied group of vendors. Digging a little deeper into these solutions reveals that utility computing has certainly matured since its early days, when vendor A simply offered customer B the ability to run jobs on A’s large collection of servers. Truth be told, “matured” might not be the right word; it probably would be more accurate to say that, like so many technologies before it, utility computing has “evolved.”
And just like looking at the evolutionary paths of species whose current members have branched off into incarnations that barely (if at all) resemble their ancient ancestors, utility computing today takes many, sometimes unrecognizable, forms. However, unlike those species, the various incarnations of utility computing seek to do more than just survive -- they seek to transform the way the world does enterprise computing.
In this two-part series, we will take a look at four distinct utility models, which, although very different aesthetically, all aim to give users on-demand access to needed resources while easing the increasingly cumbersome task of datacenter management. To start, we examine Sun’s Network.com and Amazon’s Elastic Compute Cloud (EC2), two services that tackle external utility computing in two distinct ways …
Grid Computing for the Masses
Perhaps the most familiar-looking utility model we’ll discuss here, Sun Microsystems’ Network.com offers users the ability to run their compute-intensive applications on the Sun Grid, essentially a Sun Grid Engine-powered datacenter, for the firm price of $1/CPU/hour.
When it launched in March 2006, Network.com users were required to write their own applications to fit the Solaris-based grid, which they could then get up and running via a Web interface. Since then, however, Sun has been tweaking Network.com to make it more user-friendly, most noticeably by adding an application catalog featuring a variety of applications across a range of industries. Customers using these pre-configured applications simply submit their data and the application runs -- there is no need to write or rewrite code to specifically fit the Network.com infrastructure.
While this model has its fans, particularly among traditional HPC users such as life sciences and modeling shops, Sun is looking for more users, and they just might come thanks to a couple of emerging use cases. According to Mark Herring, director of marketing for Network.com, Sun is seeing increased interest from ISVs looking to leverage the grid’s resources to provide software services to third-party customers.
In some cases, such as with financial services ISV CDO2 and sales performance management vendor Callidus, Network.com offers a relatively inexpensive and simple way to enter the software-as-a-service market without having to host applications on internal hardware. The formula is pretty simple: customers pay the software vendors to use their on-demand applications (generally via a Web portal), which actually run on Network.com. A similar model is being employed by data management company InfoSolve, which simply uses Network.com as its backend resource center. Right now, every data quality service InfoSolve runs for its customers is done on the Sun Grid. Although it is too early to tell, Herring believes this model could mean big business for Sun, as it offers a way to bring general computing ISVs and users on board in a highly transparent manner.
For Sun, though, its aspirations don’t end with a new usage model for Network.com; the company is searching for the elusive “killer app” that will do for on-demand computing what Google Maps did for Ajax. According to Herring, while the near-term goals for Network.com are to bring more applications into its catalog -- particularly in the life sciences area -- the company is hearing “murmurs and noise” suggesting there is a demand for the ability to run non-grid-enabled applications on the Network.com infrastructure, and Sun also is thinking about working development and office productivity tools into the fold.
The reason for this, said Herring, lies in the presumption that what we call “utility” today is “going to take more and more of the lion’s share of computing, period.” Sun doesn’t believe that a one-size-fits-all approach to the utility market will suffice, so now that Network.com has grid under its belt, it can start looking at other models, such as more general hosting, storage farms and Google-type software applications. Although Herring can’t elaborate on details, he noted that some of these potential services are currently being demoed internally.
“We definitely don’t look at Network.com and say, ‘Hey, we’re done here. We’ve solved utility computing,’” said Herring. “We’ve solved a piece of it, [but] there’s a lot more pieces and I think the only thing that creates a complete solution is to have each one of those use cases taken care of.”
Bare Metal, Web Services and ‘Elasticity’
Of all the utility services being offered today -- outsourced or in-house, virtualized or physical -- the one with the most buzz surrounding it has to be Amazon’s Elastic Compute Cloud (EC2). EC2 was developed initially to relieve Amazon’s many internal teams of the various “heavy lifting” tasks necessary to launch the company’s software services -- tasks that consumed time and money that could have been spent delivering actual business value. The company eventually realized that the utility infrastructure in which it had invested so much money could deliver real value outside of Amazon, as well.
What separates EC2 from its competitors, said Amazon CTO Werner Vogels, is that EC2, like its sister service S3 (Simple Storage Service), is designed for developers and relies on Web services. To get started, a developer: (1) selects an Amazon Machine Image (AMI) -- a Xen-enabled Linux image, ranging from standard Red Hat images to specialized images with Hadoop parallel computing or specific grid services built in -- or constructs their own; (2) communicates with EC2 via Web services calls to specify how many instances of the AMI to start, get those instances running, and retrieve their IP addresses and other virtual machine specs; and (3) configures security around the instances, deciding who can access which services. “You can do computation or you can offer a service to the outside world,” said Vogels. “Whatever you do inside these environments is all up to you.”
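The three steps above can be sketched in code. Note this is a minimal illustrative model, not the real EC2 Web services API: the class and method names (`ToyElasticCloud`, `run_instances`, `authorize_access`) are hypothetical stand-ins for the lifecycle of choosing an image, launching instances and opening up access.

```python
"""Toy in-memory model of the three-step EC2-style workflow.

Deliberately does NOT call any real Web service; everything here is
a hypothetical stand-in used only to illustrate the lifecycle.
"""
import itertools


class ToyElasticCloud:
    """Models an EC2-like service holding instances and firewall rules."""

    _ip_counter = itertools.count(1)  # hands out fake private IPs

    def __init__(self):
        self.instances = []        # running virtual machines
        self.security_rules = []   # (port, source) pairs

    def run_instances(self, image_id, count):
        """Step 2: start `count` copies of the chosen image."""
        started = []
        for _ in range(count):
            instance = {
                "image_id": image_id,
                "ip_address": f"10.0.0.{next(self._ip_counter)}",
                "state": "running",
            }
            self.instances.append(instance)
            started.append(instance)
        return started

    def authorize_access(self, port, source="0.0.0.0/0"):
        """Step 3: record a firewall rule deciding who may connect."""
        self.security_rules.append((port, source))


# Step 1 is simply choosing an image; here it is a string identifier.
cloud = ToyElasticCloud()
vms = cloud.run_instances("ami-hadoop-worker", count=3)
cloud.authorize_access(22)  # e.g., allow SSH into the instances

print(len(vms), vms[0]["ip_address"], cloud.security_rules)
```

The point of the sketch is the division of labor Vogels describes: the service handles provisioning, while what runs inside each instance is entirely up to the developer.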
EC2 also sets itself apart by providing low-level services, or what Vogels calls “infrastructure-level” services, that offer access as close to the bare metal as possible. Compared to Network.com, for example, a user’s AMIs in EC2 would be analogous to the physical infrastructure that comprises the Sun Grid. The big difference, however, is that EC2 users can run whatever services they want within that “infrastructure,” grid or not. According to Vogels, when combined with the dynamic nature of EC2, this freedom of services is one of the solution’s biggest draws.
As evidence of just how wide open the platform is when a little creativity is applied to it, Vogels can rattle off an expansive list of EC2 use cases, which includes, among others: grid or parallel computing; Web 2.0; testing and integration; third-party rendering; search engines; and Web crawlers. However, he said, some markets, such as those that have traditionally utilized grid or HPC technologies, move faster than others. In the case at hand, Vogels said users are finding that EC2 is not too big a step from running and managing their parallel and/or distributed datacenters.
Despite its fundamental differences from Network.com, though, the two solutions do have something in common: both are proving popular with “traditional software houses” that want to break into the software-as-a-service (SaaS) market. Although these companies see the potential of SaaS, said Vogels, they often have little operational experience beyond running their own Web sites, and they almost certainly have no experience running large-scale datacenters out of which they have to offer services. Just as for Amazon’s internal teams, EC2 allows these companies to minimize their datacenter management issues and focus on their core strength: software development.
In addition, as noted earlier, EC2 also is sharing in the burgeoning Web 2.0 market. Just as with SaaS customers, Vogels said EC2 offers Web 2.0 firms a prime opportunity to focus on the important issues. “… [EC2] allows them to focus their scarce resources -- in this case, finances -- on actually acquiring talent instead of acquiring computer servers,” commented Vogels. Because these companies often get only one shot at success, he added, it is crucial that they can prepare themselves for success without making huge upfront investments in infrastructure.
With such a breadth of uses, one probably shouldn’t be surprised to learn that EC2, currently in a limited beta, is experiencing “almost unlimited demand,” with presently available resources continuously in use and a long line of interested customers. From Vogels’ point of view, this demand should only continue to grow, as users really like on-demand resources and love paying only for actual usage. A low barrier to entry doesn’t hurt, either.
Citing Wall Street firms and government agencies as real-life examples, Vogels said many of EC2’s heaviest users came on board just to experiment and ended up getting hooked. “[A]s long as you’re a developer with a credit card,” Vogels summarized, “you can do this.”
Be sure to watch for next week’s issue, where we will present two solutions that turn utility computing on its head by bringing this traditionally external practice in-house.