October 01, 2007
In Part 2 of our look at what has become of utility computing, we focus on two utility solutions that have redefined what the term means.
Since the idea of utility computing first came to be, most assumed it had to be an outsourced service, just like the energy utility from which it takes its name. As is prone to happen, however, utility computing has evolved into forms -- including some located within a user’s own datacenter -- that bear little resemblance to their not-so-ancient ancestors. Companies like Cassatt and 3Tera are among a group of vendors out to show that as long as you have on-demand access to your resources -- when you need them and at the level you need them -- you have utility computing.
Turning Your Datacenter Into a Utility
No matter how appealing the prospect of running mission-critical applications and services externally, and thus saving on the associated management costs, an outsourced model of utility computing just isn’t an option for some customers, particularly those with ultra-sensitive customer data. Luckily for them, Cassatt has taken it upon itself to give businesses the utility attributes they crave in-house.
According to director of product management and marketing Ken Oestreich, Cassatt’s definition of utility computing centers on three capabilities: pooling all datacenter resources (compute and network) in a software- and hardware-agnostic manner; controlling that pool with a set of user-defined policies; and metering capacity, utilization and cost. (At Cassatt, said Oestreich, “It’s not a utility if it doesn’t have a meter.”) Also inherent -- and, in fact, necessary -- in a utility model is on-demand capacity. “The negative definition of utility computing,” he elaborated, “is: You don’t have a separate capacity manager, and you don’t have a separate high availability manager. They’re part of the fabric or part of the inherent infrastructure.”
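A minimal Python sketch can make those three capabilities concrete: a shared pool governed by a user-defined policy, with on-demand allocation and a meter for capacity, utilization and cost. The names here (ResourcePool, Policy, the per-node rate) are illustrative assumptions, not Cassatt’s actual interfaces, which were never public.

```python
# Hypothetical sketch only -- Cassatt never published Collage's APIs, so the names
# below (ResourcePool, Policy, allocate, meter) are illustrative, not the product's.
from dataclasses import dataclass, field

@dataclass
class Policy:
    service: str
    min_nodes: int              # capacity floor guaranteed by the user-defined policy
    max_nodes: int              # ceiling the service may grow to on demand
    cost_per_node_hour: float   # rate used by the meter

@dataclass
class ResourcePool:
    total_nodes: int                                   # hardware-agnostic pool size
    allocations: dict = field(default_factory=dict)    # service name -> nodes granted

    def allocate(self, policy: Policy, demand: int) -> int:
        """Grant capacity on demand, bounded by the policy and by what is free."""
        free = self.total_nodes - sum(self.allocations.values())
        grant = min(max(demand, policy.min_nodes), policy.max_nodes, free)
        self.allocations[policy.service] = grant
        return grant

    def meter(self, policy: Policy, hours: float) -> dict:
        """Meter capacity, utilization and cost -- 'not a utility without a meter.'"""
        nodes = self.allocations.get(policy.service, 0)
        return {
            "service": policy.service,
            "capacity_nodes": nodes,
            "utilization": nodes / self.total_nodes,
            "cost": nodes * hours * policy.cost_per_node_hour,
        }

pool = ResourcePool(total_nodes=64)
web = Policy("web-tier", min_nodes=4, max_nodes=16, cost_per_node_hour=0.12)
pool.allocate(web, demand=10)
print(pool.meter(web, hours=24))
```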
This vision has been realized in Collage, Cassatt’s “datacenter operating system,” which was designed from the ground up with these capabilities in mind. One of its more distinctive traits is that Collage, unlike comparable solutions offering a “cloud” or “pool” of resources, does not rely on virtualization. In fact, said Oestreich, although it supports virtualization in a platform-agnostic manner, Collage doesn’t require customers wishing to move workloads to use virtualization at all. “We’re saying, ‘Whatever you have in your datacenter today, virtualized or not, whatever platform you have, you should be able to make that a cloud.’”
This is accomplished, Oestreich explained, by capturing an image of the workload (from the operating system on up), stripping out the hardware-specific data and repopulating the image with data specific to the new machine. Even in this world of seemingly omnipresent virtualization, Oestreich said the folks behind Collage realized that users running certain data-sensitive applications, such as big scale-out databases, want to move workloads around as demand dictates without suffering the performance hits that come with running them on virtual machines.
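A rough sketch of that re-personalization flow follows, under the assumption that “hardware-specific data” means machine identifiers such as hostname and MAC address; the field names and steps are illustrative, not Collage internals.

```python
# Illustrative only: capture a workload image, strip machine-specific data, then
# re-personalize it for the target machine. Field names are assumptions, not
# Collage internals.

HARDWARE_SPECIFIC = {"hostname", "mac_address", "ip_address", "disk_serial"}

def capture_image(workload: dict) -> dict:
    """Snapshot the workload from the OS on up, minus hardware-specific data."""
    return {k: v for k, v in workload.items() if k not in HARDWARE_SPECIFIC}

def repersonalize(image: dict, target_machine: dict) -> dict:
    """Repopulate the image with data specific to the new machine, physical or virtual."""
    deployed = dict(image)
    deployed.update({k: v for k, v in target_machine.items() if k in HARDWARE_SPECIFIC})
    return deployed

image = capture_image({
    "os": "RHEL 4", "app": "scale-out database",
    "hostname": "db01", "mac_address": "00:11:22:33:44:55",
})
moved = repersonalize(image, {"hostname": "db07", "mac_address": "66:77:88:99:aa:bb",
                              "ip_address": "10.0.0.7"})
print(moved)   # same workload, new machine identity, no hypervisor involved
```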
Another big differentiator for Collage is the metering function, which takes many variables into account, enabling IT departments to bill other departments or lines of business on a per-service basis rather than a standard per-usage or per-machine basis. In addition, Collage’s meter differs from many others in terms of monitoring. Whereas many metering tools are designed simply to display status and alerts, Oestreich said Collage allows users to monitor capacity on a datacenter-wide basis (against user-defined service levels), opening up the ability to meter utilization and forecast capacity.
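To illustrate the difference between per-machine and per-service accounting, here is a small hypothetical example that rolls per-machine usage samples up into per-service bills and compares datacenter-wide utilization against a user-defined service level; the rates, samples and thresholds are invented for the example.

```python
# Hypothetical example of per-service billing plus capacity forecasting against a
# user-defined service level; all figures are invented.

usage_samples = [                       # (service, machine, cpu_hours)
    ("billing-app", "node03", 18.0),
    ("billing-app", "node11", 22.5),
    ("web-portal",  "node07", 40.0),
]

RATE_PER_CPU_HOUR = 0.10                # billed per service, not per machine
SERVICE_LEVEL_UTILIZATION = 0.80        # user-defined ceiling before new capacity is needed
TOTAL_CPU_HOURS_AVAILABLE = 96.0        # datacenter-wide capacity for the period

per_service = {}
for service, _machine, cpu_hours in usage_samples:
    per_service[service] = per_service.get(service, 0.0) + cpu_hours

for service, cpu_hours in per_service.items():
    print(f"{service}: {cpu_hours:.1f} CPU-hours -> ${cpu_hours * RATE_PER_CPU_HOUR:.2f}")

utilization = sum(per_service.values()) / TOTAL_CPU_HOURS_AVAILABLE
if utilization > SERVICE_LEVEL_UTILIZATION:
    print("Forecast: add capacity before the service level is breached")
```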
Cassatt currently has about a dozen customers, including some in the financial services and government sectors, and is running a good number of proofs-of-concept; response has been overwhelmingly positive thus far, said Oestreich. That said, he pointed out that while the technology generally wows potential customers -- to the point where the company is developing starting points that allow customers who can’t digest it all at once to start slower -- organizational issues do arise on occasion. “The ‘gotcha,’ if you will, is there’s one thing utility computing offers that customers aren’t ready for,” explained Oestreich, “… which is because you’re pooling resources, because you’re sharing infrastructure, customer organizations aren’t always ready for it.” To deal with this issue, Cassatt has partnered with technology consultant BearingPoint, which helps customers ease the pain (if you can call it that) of consolidating their numerous silos, as well as the resources required to manage them.
Not surprisingly, even Cassatt acknowledges that its cutting-edge technology and unique take on utility computing are still in the early adoption phase. However, said Oestreich, the more comfortable people become with virtualization, the more comfortable they become with concepts like automation and utility computing. While it might be a few years before technologies like Collage reach virtualization’s current level of adoption, Oestreich said pioneers, as well as just about everyone with a big datacenter, already are asking about Cassatt’s utility vision.
Bringing a Utility Hosting Star In-House
In yet another example of how the notion of utility computing has evolved to encompass in-house versions as well as outsourced ones, 3Tera has decided to sell its AppLogic software -- previously available only to hosting providers -- directly to end users.
According to Bert Armijo, 3Tera vice president of marketing and product management, the idea came about after the company demonstrated the Super Grid with Layered Technologies (see www.gridtoday.com/grid/1758320.html), when several customers -- including banks, insurance companies and very large social networks -- that don’t fit the hosting customer demographic began inquiring about getting AppLogic into their datacenters. Although outsourcing their computing isn’t a realistic option for these customers, Armijo said their underlying needs are the same: to eliminate the annoyance and expense of deploying dedicated resources for every application a company runs.
Considering that, by Armijo’s estimate, a decent-sized IT organization is likely running several thousand applications, the traditional model translates into tremendous labor and time spent maintaining those resources. AppLogic solves this problem by allowing users to define a set of infrastructure components -- CPU, firewall, load balancer, etc. -- for an application and then run it on a fairly generic set of resources. The big difference between AppLogic and other solutions, said Armijo, is that AppLogic users operate at the application level and can add or remove resources from an application as needed without dealing with individual servers. In fact, users can move applications from one datacenter to another with a single command.
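The application-level model Armijo describes can be sketched roughly as follows; the Application class, its components dictionary and the move operation are assumptions made for illustration, not AppLogic’s actual command set.

```python
# Hypothetical illustration of operating at the application level rather than the
# server level; the Application class and 'move' command are assumptions, not
# AppLogic's actual interface.
from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    components: dict = field(default_factory=dict)   # e.g. {"load_balancer": 1, "app_server": 4}
    datacenter: str = "dc-east"

    def scale(self, component: str, count: int) -> None:
        """Add or remove resources for the application, never for individual servers."""
        self.components[component] = count

    def move(self, target_datacenter: str) -> None:
        """Relocate the whole application -- conceptually a single command."""
        self.datacenter = target_datacenter

crm = Application("crm", {"firewall": 1, "load_balancer": 1, "app_server": 4, "database": 2})
crm.scale("app_server", 8)   # demand spike: grow the application, not a named server
crm.move("dc-west")          # one command moves the entire application
print(crm)
```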
In the name of being a true utility, AppLogic makes provisions for SLAs by increasing the number of instances or the amount of resources allocated to each instance, and the in-house version adds the ability to proactively provision resources to lines of business or individual operators, or to meter resources by application or account. Armijo sees AppLogic’s utility capabilities as far more relevant to the software’s business case than its grid computing capabilities, which also are touted in the product literature.
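A minimal sketch of the two SLA levers mentioned above, scaling out (more instances) versus scaling up (more resources per instance), together with a simple per-account meter log; the function and field names are hypothetical, not drawn from AppLogic.

```python
# Hypothetical sketch, not AppLogic's API: meet an SLA either by adding instances
# (scale out) or by giving each instance more resources (scale up), and record
# metered usage by account and application.

def enforce_sla(observed_latency_ms: float, target_latency_ms: float,
                instances: int, cpu_per_instance: float, max_instances: int = 16):
    """Return an adjusted (instances, cpu_per_instance) pair when the SLA is missed."""
    if observed_latency_ms <= target_latency_ms:
        return instances, cpu_per_instance           # SLA met: change nothing
    if instances < max_instances:
        return instances + 1, cpu_per_instance       # scale out: one more instance
    return instances, cpu_per_instance * 1.5         # scale up: more CPU per instance

meter_log = []   # (account, application, cpu_hours) entries for per-account billing

instances, cpu = enforce_sla(observed_latency_ms=240, target_latency_ms=200,
                             instances=4, cpu_per_instance=2.0)
meter_log.append(("marketing", "campaign-site", instances * cpu * 24))
print(instances, cpu, meter_log)
```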
At 3Tera, he said, “grid” refers strictly to the hardware infrastructure, because actual grid computing requires applications to be written in specific languages, using specific toolkits and operating systems, none of which is conducive to running normal, transactional applications. AppLogic was created by marrying grid computing’s hardware setup with virtualization. “The traditional grid has been around for quite a while and showed the world, quite frankly, how you could, in fact, get to utility computing,” added Armijo.
At the end of the day, though, AppLogic is a very forward-thinking solution (with a user interface that resembles a Microsoft Visio drawing), and while the clamor that led to the in-house release might have come a little earlier than anticipated, the company definitely expected demand for it at some point. “If you think about what any large IT operation is facing today in terms of dealing with the sprawl of servers, [or] datacenter upgrades to pull in more power,” said Armijo, “this makes absolute sense and doesn’t surprise me in any way.”
To read Part 1 of this article, which discusses Sun Microsystems’ Network.com and Amazon’s Elastic Compute Cloud, click here.