July 05, 2011
Cloud computing promises a new era of flexibility and control in providing HPC data center resources. While many of today’s top HPC installations employ dedicated server clusters, cloud computing with virtualized resources is making high performance computing available to a broader range of organizations.
Commercial enterprises can take advantage of HPC computations by deploying internal cloud capabilities or by sharing HPC resources in public clouds. In the cloud model, data center managers can mix and match computing, storage, and networking resources to provide an agile and highly flexible resource for customer applications.
Realizing this paradigm’s full potential requires open, standardized interfaces between the data center’s compute, network, and storage layers. While the industry has moved toward open computing and storage layers over the past few years, networking has remained largely proprietary.
Today, however, HPC data center owners can use open systems to unlock the network layer so they can get the most out of their data center architectures and in turn get the most out of their cloud deployments.
The Lagging Network Layer
The history of computing has been one of movement from proprietary to open standards. From the mainframe days, in which computing, network, and storage resources were all proprietary, the industry has moved inexorably toward open, standards-based layers in the data center. For example, server and software vendors have embraced Intel x86 architectures with standard operating systems and hypervisors such as Windows, Linux, and VMware. In the same way, storage vendors have coalesced around the standards of Fibre Channel, FCoE, and iSCSI.
At the same time, the pace of innovation has accelerated. Mainframe development cycles were measured in multiple years because all layers of the data center stack came from the same vendor. Now, with a whole ecosystem of standards-based products in development, the pace of change is measured in months. For example, a network switch vendor that relies on its own ASICs is tied to a three-year development cycle for product revisions, while a switch vendor using merchant silicon can introduce new products much more quickly.
In this environment, customers are less willing to be tied down to the pace of innovation of just one vendor. Industry leaders from the world’s largest social networking, search engine, and HPC environments are advancing the state of networking at a phenomenal pace by combining open products in their own way, rather than expecting a vendor to tell them how to build a data center.
Ultimately, all layers of the data center stack – computing, storage, and networking – should be open to deliver the maximum customer choice and flexibility. Architectures, automation, and ecosystems should all be open to support the widely varied needs of data center owners.
This move toward openness has made it easier to build dynamic, virtualized data centers in which compute resources can be created or decommissioned on the fly – the essence of cloud computing. Unfortunately, the network layer has not kept pace in the drive toward openness. Each of the largest networking vendors has developed proprietary software, control plane, or interconnect technology that limits user choices when it comes to compute servers and storage units. By limiting choice, the closed network layer forces users to build data centers in a specific way that may not meet their needs. In contrast, an open ecosystem enables architects to build data centers flexibly and quickly using the most advanced technologies.
An open systems framework unlocks the network to provide greater flexibility, performance, and manageability to cloud and conventional data centers. Based on an expanding portfolio of hardware, system and automation software, and partner/customer ecosystem services, an open systems framework enables data center architectures that are driven by customer needs rather than vendor needs.
Open systems networking is based on three elements: open architectures, open automation, and open ecosystems.
Open Architectures
An open systems infrastructure relies on open, standards-based technology for interfaces, interconnect, the control plane, and other aspects of network operations. The network thus supports any computing or storage solution that also supports open standards. This approach gives data center owners unrivaled flexibility in choosing solutions for their specific needs. No two data centers are exactly alike, so in providing this flexibility, an open systems approach meets varied data center needs as no other approach can.
Open systems architectures include switches for the core and the top-of-rack (ToR). In the core, data center owners should be able to choose either a traditional hierarchical architecture designed for conventional data centers or a next-generation distributed architecture optimized for fabric deployments. Both architectures must be standards-based and capable of advancing any existing environment with higher performance and lower cost structures.
At the top of rack, switches traditionally provide aggregation points for servers and storage, but it is now possible to combine more functionality in the switch. Next-generation ToR switches can combine networking, application hosting, and storage networking. This type of switch eliminates extra servers and appliances by combining flexible application bays with 10/40 GbE ports, bringing a new level of convergence to the ToR. Additionally, any of the Ethernet ports can be configured in software for native Fibre Channel or Fibre Channel over Ethernet (FCoE).
The application bays can include blade servers that enable users to run load balancing, firewall, security, management, or other applications directly on the switch to reduce complexity and streamline operations in the data center.
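To make the idea of software-configurable port personality concrete, here is a minimal Python sketch of how a converged ToR switch port might be switched between Ethernet, FCoE, and native Fibre Channel without recabling. The PortMode and SwitchPort names are hypothetical illustrations, not any vendor’s actual management API.

```python
# A minimal sketch of software-defined port personality on a converged
# top-of-rack switch. PortMode and SwitchPort are hypothetical
# illustrations, not any vendor's actual management API.
from enum import Enum


class PortMode(Enum):
    ETHERNET = "10/40GbE"
    FCOE = "Fibre Channel over Ethernet"
    NATIVE_FC = "native Fibre Channel"


class SwitchPort:
    def __init__(self, port_id: str, mode: PortMode = PortMode.ETHERNET):
        self.port_id = port_id
        self.mode = mode

    def set_mode(self, mode: PortMode) -> None:
        # On real converged hardware this would reprogram the port's
        # personality in firmware; here we simply record the new mode.
        self.mode = mode
        print(f"{self.port_id}: reconfigured as {self.mode.value}")


# Repurpose two ports for storage traffic without touching any cabling.
ports = [SwitchPort(f"eth1/{n}") for n in range(1, 5)]
ports[0].set_mode(PortMode.NATIVE_FC)
ports[1].set_mode(PortMode.FCOE)
```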
Open Automation
One of the traditional objections to open systems is the inability to manage them as a single entity. Open automation answers this objection with standards-based automation for data center operations such as bare-metal provisioning, configuration management, and monitoring. Data center managers can also automate other control or monitoring functions with standard scripting languages such as Perl or Python. Automation is critical for data centers of any size because it allows operators to dynamically stitch together network, compute, and storage resources. For maximum choice, automation shouldn’t force users down a single-vendor path; open automation lets users automate with their choice of solutions.
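As a simple illustration of this kind of scripted automation, the following Python sketch renders per-switch configurations from a shared template. The hostnames, VLAN numbers, and template contents are hypothetical; a production script would push the rendered configuration through the switch’s open scripting interface rather than printing it.

```python
# A minimal sketch of scripted, standards-based network automation:
# render a per-switch configuration from a shared template. All
# hostnames, VLAN numbers, and template contents are hypothetical.
from string import Template

CONFIG_TEMPLATE = Template(
    "hostname $hostname\n"
    "vlan $vlan\n"
    "  name $vlan_name\n"
)

# Inventory that, in practice, might come from a provisioning database.
switches = [
    {"hostname": "tor-rack01", "vlan": "110", "vlan_name": "hpc-compute"},
    {"hostname": "tor-rack02", "vlan": "120", "vlan_name": "hpc-storage"},
]

for switch in switches:
    config = CONFIG_TEMPLATE.substitute(switch)
    # A real deployment would push this through the switch's open
    # scripting interface; here we simply print the rendered config.
    print(config)
```

Because the sketch uses only standard language features, the same approach applies to any switch that exposes a standards-based scripting interface.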
With the adoption of virtualization, data centers have become more responsive and efficient but also more complex. IT managers must now manage hundreds to thousands of virtual machines and their associated storage and networks. Data center infrastructure must be more responsive, quickly adapting to changes in application requirements. Additionally, server, storage, and network infrastructure can no longer be managed as separate silos, but rather as a single, dynamic environment.
While large, dedicated HPC installations tend to use single-image servers to maximize computational throughput, virtualization is enabling HPC capabilities to be offered to broader audiences at lower total expense. Open automation addresses these management challenges using industry standards and common industry technology, allowing IT managers to deploy virtualized environments using best-of-breed technology. Standards such as Edge Virtual Bridging (IEEE 802.1Qbg) will be instrumental in giving data centers complete manageability of virtualized resources.
Open Ecosystems
Open ecosystems bring together leading providers of standards-based solutions and technologies to offer unrivaled flexibility and choice in selecting best-of-breed solutions for the data center. Open ecosystems are a critical and necessary element in unlocking the full potential of data center deployments. Simply put, there is safety in numbers – the more people trying to solve problems and innovate, the better. Ultimately, choice comes from having the broadest ecosystem.
Cloud computing is rapidly gaining acceptance as the primary model for delivering flexible compute resources with high agility and reduced management costs. However, the vision of cloud computing can’t be realized to its fullest potential unless the computing, networking, and storage layers of the data center are all built on open, standardized interfaces and software. An open systems networking framework delivers the openness required at the network layer, combined with flexible open automation software, giving HPC data center managers maximum choice without compromising price or performance.