December 01, 2008
When you provide infrastructure as a service to companies providing software as a service, your infrastructure really must be able to respond to any demand. The 3 a.m. call will be for you.
BlueLock is in just that business, offering cloud computing through an infrastructure-as-a-service model. The company provides all the physical IT (servers, storage, networking, security), plus round-the-clock management, monitoring, support and disaster recovery. Clients pay a predictable monthly fee for the virtual resources they use, plus a retainer to keep some virtual machines in a holding tank. If demand increases, BlueLock taps into that reserve capacity and charges the user accordingly.
“We essentially put virtual machines into a suspended state, wake them up and drop them into the pool when the customer needs them, then put them back to sleep when demand backs down,” says BlueLock CTO Pat O’Day. “The cost has favored our model because people get a warm site at the cost of a cold site.”
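O'Day's suspend-and-wake cycle amounts to a small state machine over two pools of VMs. Here is a minimal sketch in Python; the class and method names are hypothetical illustrations, not BlueLock's actual tooling, and real orchestration would drive a hypervisor API rather than lists:

```python
class WarmPool:
    """Sketch of the 'warm site at the cost of a cold site' model:
    reserve VMs sit suspended until demand exceeds active capacity."""

    def __init__(self, active, reserve):
        self.active = list(active)    # VMs currently serving traffic
        self.reserve = list(reserve)  # suspended VMs in the holding tank

    def scale_to(self, demand):
        # Wake reserve VMs when demand outstrips the active pool...
        while demand > len(self.active) and self.reserve:
            self.active.append(self.reserve.pop())  # "wake them up and drop them into the pool"
        # ...and suspend them again when demand backs down.
        while demand < len(self.active) and len(self.active) > 1:
            self.reserve.append(self.active.pop())  # "put them back to sleep"
        return len(self.active)

pool = WarmPool(active=["vm1", "vm2"], reserve=["vm3", "vm4"])
print(pool.scale_to(4))  # seasonal surge: both reserve VMs wake -> 4
print(pool.scale_to(2))  # demand backs down: both suspend again -> 2
```

The customer pays full price only while a reserve VM is awake; while suspended it costs the retainer, which is what makes the warm site price out like a cold one.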
Most of BlueLock’s customers are SaaS providers, O’Day says, with Web-facing applications, classic three-tier applications, or client-server applications they want to behave like Web-based apps. About 80 percent of the business is production, and the rest disaster recovery, he says.
BlueLock describes itself as “a full-service cloud provider.” According to O’Day, that includes not only virtual clouds, but also backups, offsite storage, co-location, data escrow, managed services and SAN-to-SAN replication. (“We can take your app as it exists in one datacenter and within 30 minutes have it live in the other” is how he describes the latter.) The company also offers an SLA covering both uptime and resolution time. The resolution time is 15 minutes, says O’Day, meaning “problem resolved,” not “We’ll have a conversation about it in 15 minutes.” Beyond virtual infrastructure, BlueLock will also provide physical machines, and will let you mix your physical servers with its virtual ones.
The company’s two datacenters, in Indianapolis and Salt Lake City, are stocked like this: HP’s C7000 blade platform for servers (dual quad-core, 32GB RAM), running VMware Enterprise as the OS; Cisco wire-speed switching infrastructure for the core network; and LeftHand’s SAN/iQ running on HP DL320s hardware for the SAN. But the key to BlueLock being able to scale up and down seamlessly without disruption is “the architecture we’ve been able to build using VMware virtualization and the BIG-IP Local Traffic Manager from F5 [Networks],” O’Day says.
F5 designs products for accelerating application delivery. Its BIG-IP series of modular devices can be configured to provide everything from load balancing to Web acceleration. BlueLock uses BIG-IP LTM for load balancing and SSL offload for its entire virtual infrastructure, O’Day says. When F5’s load balancer detects a surge in demand coming across the network, it wakes up the needed VMs to deliver the required compute power. (BlueLock also is able to use BIG-IP LTM to gather usage and traffic data that the company can use in its provisioning algorithms.)
“F5 forms the centerpiece of our cloud deployments. It brings everything together to make the service happen. No other device can orchestrate the workflow like that,” he says. “We also knew F5 had the only technology in the management space that would allow us to implement new ideas, like our capacity-on-demand model. They have all the APIs and their iControl rules to let us do what we wanted.” (iControl is F5’s management API; iRules is its network-side scripting method for writing event-driven rules that customize how BIG-IP handles traffic coming and going.)
O’Day describes a typical BlueLock scenario: “You have three or four Web servers that can handle the average day of traffic, but then you have a seasonal or PR hit and get a large surge in traffic. Instead of having to build the infrastructure for peak traffic, even if it’s for only a month, we use our cloud technology and F5 to create one pool of servers, put them in the F5 load-balancing pool, then create another pool that fits on our physical servers. You’ve got the servers you need to run your application, plus reserve capacity, all parked in the same place. The rules would say, ‘Let this pool get up to 90 percent capacity, then start going outside into this other pool.’ You’ve basically got 200 percent capacity waiting but you’re only paying for 100 percent of it. Developers, in particular, like having this second pool of servers online for code testing but paying for only when they use it. We couldn’t make this happen without F5’s help.”
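The 90 percent rule O’Day quotes is threshold-based spillover between two load-balancing pools. The routing decision can be sketched roughly as follows; this is illustrative Python, not actual iRules syntax, and the pool names and capacity figures are hypothetical:

```python
def choose_pool(primary_load, primary_capacity, threshold=0.90):
    """Send traffic to the primary pool until it reaches the
    utilization threshold, then spill over to the reserve pool."""
    utilization = primary_load / primary_capacity
    if utilization < threshold:
        return "primary"
    return "reserve"  # "start going outside into this other pool"

# Three Web servers rated at 100 connections each:
print(choose_pool(primary_load=250, primary_capacity=300))  # 83% -> primary
print(choose_pool(primary_load=280, primary_capacity=300))  # 93% -> reserve
```

The reserve pool costs nothing until traffic actually crosses the threshold, which is what lets a customer keep 200 percent of capacity parked while paying for 100 percent.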
“We’re trying to help our clients avoid the capital expenses and the purchasing headaches and delays that are typically involved in adding to infrastructure to meet demand,” O’Day says. “The virtual cloud and F5’s technology lets us do that.”
At VMworld this past September, BlueLock’s F5-powered cloud was featured in a demo by VMware CEO Paul Maritz. As O’Day describes it, the scenario features an application that comes under sudden load and falls outside its SLA requirements, but is rescued when the virtual cloud in Las Vegas is merged with the BlueLock datacenter in Indianapolis to provision the resources needed to solve the performance problem.
“We think we fit very well into a cloud model,” says Lori MacVittie, F5’s technology marketing manager. “That’s the direction we’ve always been going -- to provide dynamic, flexible infrastructure and the capabilities to manage and speed up the application delivery process.”
Scalability is the first big cloud challenge F5 tackles. “You have to be able to scale out those applications, of course, and many people look to a load balancer of some sort, but a simple load balancer is not smart enough,” MacVittie says. “So you have to move up to an application delivery controller, where there’s more intelligence and integration with other functionality. Along with being able to scale apps, you need the application control mechanism to be able to scale itself. You have to be able to scale the infrastructure, as well.”
This is where F5’s Viprion comes in. It’s a high-performance application delivery controller that works with BIG-IP LTM and scales on demand. Each Viprion chassis can support four blades. “Each blade is like an application delivery controller in itself. You can start with one, and if you need more power, you add a blade and it scales transparently and automatically,” MacVittie says. “You can add more power, on demand, without having to reconfigure anything. If you need more blades, you can add more Viprions.”
The platform underlying F5’s products is its Traffic Management Operating System. “TMOS is our core platform, the basic kernel of our application delivery controller. It’s designed to make networks application-aware, and to give you intelligent control of your network. It’s highly optimized, very scalable, and architected to be customizable,” MacVittie says.
Jon Oltsik, senior analyst at Enterprise Strategy Group, says Viprion “combines state-of-the-art hardware with extremely good software chops. The result is a platform that can throw a lot of horsepower at a lot of application delivery tasks while remaining flexible to accommodate dynamically changing needs on a moment’s notice.”
Oltsik has high regard for what F5 has accomplished with its product and technology set. “In theory, the solution to the ADC [application delivery controller] bottleneck seems easy: simply virtualize ADC services like load balancing, caching, and SSL processing across a common compute platform, and then throw more processing power at the whole enchilada. … But this takes a heck of a lot of operating system, hardware, networking and applications expertise to pull off. Fortunately for large organizations, F5 Networks is one company rising to the challenge.”