May 26, 2008
The problems: low server utilization and labor- and cost-intensive support of applications. The solution: virtualization ... or grid computing ... or maybe datacenter automation. Well, it's time to throw another approach into the mix -- one that could carve out a large following if it proves as effective as it is unique.
In the traditional application deployment approach, companies must exert lots of effort building a "golden image," making sure all the scripts, drivers, libraries, etc., are correct. The end result often is a multiple-gigabyte software stack that takes days to weeks to perfect and, after testing, additional hours to deploy (likely on its own server, which explains the 5-10 percent average utilization rate). "Once you do that, you sort of lock it in time," says Jerry McLeod, vice president of marketing and business development for FastScale Technology. "You don't really want to mess with it because you could stop the application from running."
FastScale has addressed these issues with its Composer suite of products, which McLeod claims allows users to build lightweight application environments in minutes and provision and deploy them in seconds -- without worrying about golden images and gigabytes' worth of software components. This is accomplished by putting all required software (including the CDs buried under papers in desk drawers) into a component repository, which breaks the included files down into the smallest possible components. Composer then examines each file, figures out how it works, and places it into a database sorted by function (kernels, drivers, etc.). According to McLeod, FastScale's repository turns Red Hat's roughly 1,500 files into about 330,000 individual components. Once the accompanying software is in, users upload their applications into the repository. Composer then discovers the hardware requirements of each machine in the datacenter and, through a process called application blueprinting, finds the links between the application and the OS.
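McLeod does not describe the mechanics of blueprinting, but the general idea can be pictured with a small sketch. The snippet below is purely illustrative -- not FastScale's code -- and uses the standard Linux ldd tool to list the shared libraries an application binary links against, which is one crude way to derive a minimal component list; the Apache binary path /usr/sbin/httpd is assumed for the example.

```python
#!/usr/bin/env python3
# Illustrative sketch only, not FastScale's implementation: derive a rough
# "blueprint" of an application by asking ldd which shared libraries the
# binary links against, so only those components need to be bundled.
import re
import subprocess

def blueprint(binary):
    """Return the set of resolved shared-library paths the binary depends on."""
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    # ldd lines look like: "libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x...)"
    return {m.group(1) for m in re.finditer(r"=>\s+(/\S+)", out)}

if __name__ == "__main__":
    deps = blueprint("/usr/sbin/httpd")  # hypothetical path to an Apache binary
    print(f"{len(deps)} shared objects required")
```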
However, McLeod is quick to point out that nothing is built yet. When users want to deploy an application on a specific machine, they simply launch a Dynamic Application Bundle (DAB), which builds the entire software stack on demand and specific to that machine. Thanks to the componentization process, each DAB uses only the necessary pieces, yielding an application image roughly 99 percent smaller than a golden image. For example, McLeod says, a traditional image of Apache running on Red Hat Enterprise Linux 4 is 3GB, whereas the same application in DAB form is about 30MB.
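To see why such a bundle ends up so small, here is another hedged sketch of the assembly step, again assumed rather than taken from the product: given a component list like the one produced above, stage only those files into a per-machine directory and measure the result. The bundle_root path and the build_bundle helper are invented for illustration.

```python
# Hypothetical sketch, not the product's implementation: stage only the files
# an application actually needs into a bundle directory, preserving their
# original paths, and report the total size of the resulting "bundle".
import os
import shutil

def build_bundle(components, bundle_root="/tmp/dab-staging"):
    total = 0
    for src in sorted(components):
        dst = os.path.join(bundle_root, src.lstrip("/"))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)          # copy the file along with its metadata
        total += os.path.getsize(src)
    return total

# Example usage (paths assumed): size = build_bundle(blueprint("/usr/sbin/httpd"))
# print(f"bundle size: {size / 2**20:.1f} MB")
```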
If a situation arises that requires other or additional software components, the application still has on-demand access to them via the repository. However, McLeod notes, this capability is best suited to the testing process, as many users prefer to harden the production version of the application by turning off the link back to the repository.
So how does this solve the problems of low utilization and difficult administration? McLeod says the smaller images enabled by FastScale Composer mean less maintenance, faster testing, and utilization rates of up to 90 percent. Reduced time to production also is a big draw, as applications and servers can be ready to go in minutes or hours. The longest part of the process, McLeod says, is uploading everything into the repository; a standard OS takes about 1.5 hours.
As for flexibility, McLeod says FastScale Composer allows for on-demand re-provisioning of servers and enables any application to run on any machine. He added that Composer works well with third-party load balancers, schedulers and policy engines, and that DABs are small enough to run non-persistently in memory, which means "all of my processing power is completely available for any application at any time."
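The in-memory claim can be illustrated -- again only as an assumed sketch, not FastScale's mechanism -- by unpacking a staged bundle under a RAM-backed tmpfs mount such as /dev/shm, where it consumes memory rather than disk and vanishes on reboot.

```python
# Hypothetical illustration: place a staged bundle on a tmpfs (RAM-backed)
# filesystem so it runs non-persistently and leaves nothing on disk.
import shutil
import tempfile

ramdisk = tempfile.mkdtemp(dir="/dev/shm", prefix="dab-")          # RAM-backed on most Linux systems
shutil.copytree("/tmp/dab-staging", ramdisk, dirs_exist_ok=True)   # staging path assumed from the sketch above
print("bundle unpacked in memory at", ramdisk)
```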
"The idea of dynamic provisioning is to be able to minimize the compute pool of resources that you need in a datacenter and allocate resources on the fly as particular resources are needed, and at the same point, de-allocate resources when they're no longer needed," said Chris Wolf, senior analyst with Burton Group. FastScale allows for this functionality, Wolf said, which is a core theme of any vendor talking about the "dynamic datacenter."
But FastScale's Composer Suite does not end with physical machines; its Virtual Manager product extends the FastScale approach to virtual environments as well. Essentially Composer for VMs, Virtual Manager, McLeod says, allows users to run significantly more VMs on a single machine than VMware alone would support. Tests have shown that a 4GB machine that normally could handle only 13 VMs was able to house 42 VMs by leveraging FastScale technology. All of this, says McLeod, can be accomplished without performance degradation, and because the DABs run in memory, all 42 can be rebooted faster than one traditional VM can be brought up. Users can dynamically deploy and switch between physical and virtual machines, and can build and deploy VMs within seconds.
Another key feature of Virtual Manager, says McLeod, is the ability to build VMDKs. They can be application-specific or uniform, depending on user-defined policies, and are ready for action upon reboot, he added. Because the resulting VMDK backs just another VMware virtual machine, it works with all other VMware tools. McLeod says that Virtual Manager was developed for VMware, but there are plans to make it work with Citrix XenServer and Microsoft Hyper-V as well.
Aside from being a technology partner, VMware also is FastScale's biggest customer, said McLeod. However, he added, the company is gaining a lot of traction in the financial services space, where firms constantly are undertaking next-generation datacenter initiatives to increase their competitive edge. One banking customer currently evaluating FastScale has more than 1,300 different applications, and it plans to get the most out of them by replicating component databases and DABs to datacenters around the world.
Adding his two cents, Burton Group's Wolf says that FastScale's Composer Suite currently is ideal for development, testing and training -- especially in situations calling for quick repurposing of servers that require bare-metal resources. Perhaps indirectly referencing the same financial customers mentioned by McLeod, Wolf added that "some of your more bleeding-edge IT shops that have been running virtualization in production for several years see products like FastScale as their next logical step" in furthering consolidation and automated server provisioning.
Wolf notes, though, that there are -- in some cases -- drawbacks to FastScale's "1.0"-level product. For one, such solutions must look at provisioning from the perspective of the entire data path, including networked storage allocation, network access, VLANs, switches, etc. "There's more to provisioning a server than just putting an image on a server and turning it on," he said. FastScale will need to improve in these areas eventually, he added, but the product as currently constituted should work fine in situations where networks and storage are configured loosely, and where users are swapping resources that already have been defined. In fact, he noted, this probably will be the typical FastScale deployment.
McLeod, however, is confident that FastScale's one-of-a-kind approach to solving datacenter woes eventually will win the day in the larger IT world. "We have a very unique and innovative approach to actually managing the software stack, which, really, nobody else deals with," says McLeod. "They either ... have a way of automating the way you do things today, which is moving these big golden images around, [and] people are doing consolidation based on putting virtual machines in, which we think is a great idea, but no one is actually dealing with how you manage, build and deploy your software stacks."