August 28, 2008
Cloud computing -- a way to build and deliver always-on, pay-by-the-drink IT services -- has emerged as one of the hottest topics this year. Major players -- including Amazon, EMC, Google and IBM -- have promoted offerings that promise near-infinite-scale computing, storage, database and related Web services, all easily leveraged by talented developers with a browser.
This paper focuses on how Web-scale and commercial enterprises can benefit from current cloud computing technology trends to build a new class of datacenters that are more autonomous and dynamic than traditional implementations. It will also examine how new cloud computing models enable the rapid scaling and reallocation of resources across a wide variety of customers, delivering core cost and agility benefits to purveyors of cloud computing services. Finally, we will explore how changing application workloads are driving the need for accelerated file services to maintain optimized performance.
First, we’ll look at the most frequently mentioned companies and implementations around cloud computing, the latter of which generally fall into three cloud categories: applications, platforms and infrastructure.
In the applications category, Salesforce.com shines as the premier example of delivering a use-specific service over the Internet. On the platform side, Google’s App Engine gives developers access to a range of compute, database and storage functions within a specified framework. In the case of Google, that framework relies primarily on the Python programming language, though other options exist at other service providers. On the infrastructure side, Amazon sells everything from raw CPU horsepower (EC2) to chunks of data storage (S3) through its Web services unit.
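To make the platform category concrete, the sketch below shows a minimal App Engine request handler of the kind the service's Python webapp framework supports; the handler name and greeting text are illustrative only.

```python
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    """Answers GET / with a plain-text greeting."""
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from the cloud')

# Map the root URL to the handler; App Engine handles hosting and scaling.
application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()
```

The notable point is what is absent: no server provisioning, load balancing or storage configuration appears in the code, because the platform supplies all of it.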
Of course, the lines between these companies and categories can easily blur. Salesforce.com offers its version of a platform through its Force.com initiative. And Amazon offers more than just CPUs and storage with services like SimpleDB, essentially providing database functionality as a platform. But the basic categories work to define the functionality, even across companies.
Figure 1: Cloud Computing Segments
Common Characteristics of Cloud Purveyors
All of the major players offering cloud services (including applications, platforms, and infrastructure) share common architectural approaches that benefit any Web-scale or enterprise datacenter. These include the ability to rapidly scale and reallocate resources, support large numbers of concurrent users, and sustain well-defined service levels.
Adopting Cloud Computing Architectural Approaches
There are several approaches to building cloud computing datacenters. One approach is to architect everything from the ground up, including the file systems, clustering technology and application software, as in the case of Amazon or Google. These companies have made scaling their compute infrastructures a top business priority and invested heavily in technology research and development.
Other approaches make use of more commercially available offerings. For example, virtualization solutions are a key enabler of rapidly assigning resources, such as server instances, within a compute pool for flexibility and cost savings, and are available in commercial and open-source implementations. Hardware and software products providing similar functionality at the networking or storage layers complete end-to-end datacenter flexibility.
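As a loose illustration of what rapidly assigning resources within a compute pool means in practice, the toy allocator below hands virtual server instances to tenants and reclaims them on release; the class and names are hypothetical, not any vendor's API.

```python
class ComputePool(object):
    """Toy scheduler for a shared pool of virtualized server instances."""

    def __init__(self, instance_ids):
        self.free = list(instance_ids)   # instances awaiting work
        self.assigned = {}               # instance id -> current tenant

    def acquire(self, tenant):
        """Assign a free instance to a tenant, or report exhaustion."""
        if not self.free:
            raise RuntimeError('pool exhausted; add capacity')
        instance = self.free.pop()
        self.assigned[instance] = tenant
        return instance

    def release(self, instance):
        """Return an instance to the pool for reuse by another tenant."""
        del self.assigned[instance]
        self.free.append(instance)

pool = ComputePool(['vm-%02d' % i for i in range(8)])
web_box = pool.acquire('web-tier')   # grab capacity on demand
pool.release(web_box)                # hand it back when load drops
```

The flexibility and cost savings come from exactly this cycle: capacity is claimed for minutes or hours, then returned to the pool instead of sitting idle on dedicated hardware.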
Both the ground-up and commercial-offering approaches aim for self-healing architectures, support for well-defined service level agreements and the ability to handle a large number of concurrent users. Both also pursue energy efficiency by pooling compute resources into a smaller number of large datacenters, leveraging industry-standard hardware, software and networking infrastructure, and handling rapidly growing volumes of data.
File Acceleration for Cloud Computing Datacenters
Of particular interest to datacenter architects is how to optimize storage infrastructure to handle the cloud computing requirements outlined above. In essence, this means enabling a larger number of users to access a larger pool of storage while sustaining top service levels.
Change in workloads
Before examining the architectural detail, we need to take a quick look at changing workloads. Part of the impetus for cloud computing stems from these dramatic shifts, as shown in Figure 2. In the initial stages, users were connected to unique data on their own computers. As the Web evolved, we migrated to shared, consolidated content. Now we are in a stage of contributed, dynamic content that often pushes current infrastructures beyond what they were initially built to handle.
Figure 2: Changing Workloads Lead to More Interactivity
Challenges of Today’s Architectures
Because of the workload changes facing Web-scale and enterprise businesses, new challenges are emerging within current computing architectures. One of the most pressing is file access bottlenecks. Simply put, when dozens to hundreds of servers try to access the same data, storage or I/O bottlenecks arise because the underlying disk-based storage systems cannot keep up with the massive amount of compute power at the server layer, as shown in Figure 3. This often leads to excessive storage over-provisioning and, in turn, high costs.
But the main impact of this file access bottleneck is a poor end-user experience for Web-scale application users and poor productivity for enterprise application clients. In both cases, the delays caused by insufficient application performance hit the business directly.
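A rough, illustrative calculation shows why the bottleneck appears and why it breeds over-provisioning. Every figure below (IOPS per drive, server count, per-server demand) is an assumption chosen for the example, not a measurement.

```python
# Back-of-envelope sketch of the file access bottleneck (assumed figures).
DISK_IOPS = 150          # random IOPS one fast drive sustains (assumption)
SERVERS = 200            # application servers hitting shared storage
IOPS_PER_SERVER = 400    # per-server I/O demand under load (assumption)

demand = SERVERS * IOPS_PER_SERVER        # 80,000 IOPS of aggregate demand
disks_for_iops = -(-demand // DISK_IOPS)  # ceiling division: ~534 spindles

print('aggregate demand: %d IOPS' % demand)
print('disks required for IOPS alone: %d' % disks_for_iops)
# A handful of drives could hold the data; the remaining several hundred
# spindles exist only to supply IOPS -- that is the over-provisioning cost.
```

Under these assumptions the capacity requirement is trivial, yet hundreds of disks must be purchased, powered and cooled purely to deliver I/O operations.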
Figure 3: Challenges of Current Architectures
New Options to Deliver Accelerated File Services with Centralized Caching
Many enterprise and Web-scale datacenters are solving their file access bottlenecks with centralized caching, particularly with network-centric, memory-based approaches.
By placing a pool of shared high-speed memory in the network to act as a central caching namespace, datacenter managers can instantly increase the performance of I/O-constrained applications. For example, an application that operates across multiple servers requiring access to a consolidated file repository can now retrieve files 10-50x faster than if the servers were on a conventional disk-based storage system. The caching appliance keeps the frequently requested files -- or portions of files -- in memory to deliver such improvements. And since the caching appliance automatically keeps the content up-to-date based on usage patterns, no ongoing active management is required.
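The appliance itself is specialized hardware, but the core mechanism -- keeping the hottest files in memory and evicting the coldest as space runs out -- can be sketched in a few lines. The following is a toy LRU cache for illustration, not the appliance's actual algorithm.

```python
import collections

class FileCache(object):
    """Toy LRU file cache: hot files served from memory, cold ones evicted."""

    def __init__(self, capacity_bytes, read_from_disk):
        self.capacity = capacity_bytes
        self.used = 0
        self.read_from_disk = read_from_disk       # fallback to slow storage
        self.entries = collections.OrderedDict()   # path -> bytes, LRU order

    def get(self, path):
        if path in self.entries:             # hit: served at memory speed
            data = self.entries.pop(path)
            self.entries[path] = data        # re-mark as most recently used
            return data
        data = self.read_from_disk(path)     # miss: pay the disk penalty once
        self.entries[path] = data
        self.used += len(data)
        while self.used > self.capacity:     # evict least recently used files
            _, old = self.entries.popitem(last=False)
            self.used -= len(old)
        return data

# Usage: wrap whatever the real storage read path looks like.
cache = FileCache(64 * 1024 * 1024, lambda p: open(p, 'rb').read())
```

Because usage itself reorders the cache, the "keeps the content up-to-date based on usage patterns" behavior falls out of the data structure; nothing needs manual tuning.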
Centralized caching appliances integrate easily with the core concepts of cloud computing architectures, as shown in Figure 4.
Figure 4: Centralized Caching Fits Cloud Computing Architectures
Scale concurrent connections
By serving data from memory rather than mechanical disks, scalable caching appliances can support thousands of simultaneous connections to Web and application servers. This accommodates increased loads without having to over-provision disks.
Handle mixed workloads
Because memory-based caching appliances deliver low-latency response times, they can easily support mixed workloads in a single appliance. One caching appliance can support different Web, application and database servers, as well as multiple file systems, including clustered file systems.
Lower total cost and smaller footprints
Compared with architecting disk systems for high I/O operations per second (IOPS), a memory-based caching appliance dramatically reduces the cost and space required to meet such performance requirements. When deployed as a network resource, a memory-based caching appliance ensures high utilization and efficiency.
Reliable service delivery
Caching appliances offload I/O-intensive workloads from storage systems, essentially streamlining I/O traffic to guarantee reliable service delivery. Whereas in the past excessive I/O loads could take their toll on disk-based systems and bring them to a crawl, complementing those systems with memory-based caching appliances provides the horsepower to ensure continued service.
Data-readiness (large volumes, high-file-count repositories)
Caching appliances enable data-readiness for both large volumes of data and high-file-count repositories. For high-capacity installations, caching appliances perfectly complement high-capacity, low-cost disk storage such as SATA by dynamically delivering performance from that storage. For high-file-count installations, caching appliances cache index information that speeds up the search, location and delivery of files.
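For the high-file-count case, the win comes from answering "where is this file?" from memory rather than walking a huge directory tree on every request. The sketch below illustrates that idea only; the repository path and file name are hypothetical, and a real appliance's index is far more sophisticated.

```python
import os

class IndexCache(object):
    """Toy metadata index: one tree walk up front, O(1) lookups afterward."""

    def __init__(self, root):
        self.index = {}   # file name -> full path, held in memory
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                self.index.setdefault(name, os.path.join(dirpath, name))

    def locate(self, name):
        """Find a file by name without touching the disk again."""
        return self.index.get(name)

# One expensive walk at startup; every later lookup is a dictionary hit.
idx = IndexCache('/srv/media')          # hypothetical repository root
path = idx.locate('frame_000042.dpx')   # hypothetical file name
```

In a repository with millions of files, the difference between a memory lookup and a repeated directory scan is exactly the search-and-locate speedup described above.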
Rapid application deployment
Deploying new applications can be a burdensome process when storage has to be configured for performance. With a caching appliance, virtually any level of storage performance can be delivered instantly, providing enough I/O for the application and freeing architects from having to over-provision storage for IOPS.
Cloud computing datacenters represent a new wave of scalable computing architectures that deliver more data to more users more economically than ever before. Paying attention to the technology trends behind the “datacenter” aspects of cloud computing allows corporations to employ the same techniques to build high-scale, low-cost infrastructure. As application traffic patterns, particularly storage and I/O patterns, change, the benefits of replicating these cloud approaches will become increasingly important.