August 24, 2010
Companies in competitive domains, such as financial services, digital media, text mining, and enterprise content management, accumulate large data repositories from their daily operations. Analyzing this archived data can yield knowledge that drives future business and provides significant market advantages over competitors. Although companies could use specialized supercomputers, the custom development time and hardware costs are prohibitive.
Another approach is to use proprietary and custom-built high-performance computing (HPC) software platforms atop emerging cloud computing environments. This approach, however, has the following drawbacks:
• Poor price-per-core performance. Adding cores does not yield linear gains in application speedup or compute throughput. Conventional HPC software and hardware platforms are cost-prohibitive because performance does not scale commensurately with the resources invested, and they do not adapt dynamically to changing workloads and resource availability.
• Custom development and integration. Conventional HPC software platforms require extensive manual development and integration of custom server and application code before they can be used, and many common and legacy applications cannot be modified without being redeveloped.
• Tied to modified apps. Once applications are customized, they are locked into a particular HPC platform and deployment configuration, and cannot take advantage of platform updates without redoing the intensive customization.
• Complex setup with no support for automated plug and play. Conventional HPC software platforms lack automatic adaptive load balancing, so administrators must manually adjust the load on every processor in the network through complex setup and customization.
What is needed, therefore, are solutions that can leverage hardware and software innovations in distributed and parallel computing, while simultaneously reducing the learning curve and effort needed to incorporate these innovations into mission-critical applications running in cloud environments. In particular, solutions are needed to map compute-intensive applications to high-performance cloud computing environments that provide the following capabilities.
Achieving Extreme—Yet Cost-Effective—HPC Cloud Performance
Cost-effective, HPC solutions for cloud environments should have the following features:
• Dynamic, adaptive, real-time load management and equalization that distributes the workload in real time across all available cloud computing and networking resources. This load equalization ensures every processor/core is near-optimally utilized to maximize computing performance.
• Transparent scalability. The platform should react dynamically and automatically to changes in the operating environment and to utilization variations across all processors, allocating new and existing workload to available under-utilized processors/cores without changing application software or disrupting ongoing operations.
• Optimized common data delivery and synchronization. When multiple invocations share the same input parameter, it should be possible to send that common input just once (before any invocation is made), which optimizes data delivery for applications with large common data. Application developers can then simply send a reference to the common input parameter, and the cached common data is added automatically to the invocation on each host, reducing communication latency on each invocation (see the sketch below this list).
• Ultra-fast data transfer. Unlike conventional middleware, which bottlenecks application data between clients and servers by using text-based protocols (e.g., HTML, XML, and SOAP), high-performance cloud computing software should automatically generate and use optimized binary protocols that transfer results much faster.
These features enable the fastest computing platform possible, substantially accelerating application performance in cloud environments relative to conventional HPC software platforms.
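To make the common-data and binary-protocol ideas concrete, the following sketch uses only the Python standard library; it is not the API of any particular HPC cloud product. A large read-only input is pickled and shipped to each worker once via a pool initializer, each invocation then passes only a small reference to it, and the final line contrasts the size of that binary payload with an equivalent text (JSON) encoding. The function and parameter names are illustrative assumptions.

```python
from concurrent.futures import ProcessPoolExecutor
import json
import pickle

_COMMON = {}  # per-worker cache of common input data

def _init_worker(common_blob):
    """Runs once per worker process: unpack the shared data into the cache."""
    _COMMON.update(pickle.loads(common_blob))

def simulate(task):
    """Per-invocation work: looks up the cached common data by reference."""
    rates = _COMMON["yield_curve"]  # a reference, not a copy sent per call
    return sum(r * task["notional"] for r in rates)

if __name__ == "__main__":
    yield_curve = [0.01 * i for i in range(10_000)]  # the "large" common input
    common_blob = pickle.dumps({"yield_curve": yield_curve})  # binary, not text

    tasks = [{"notional": n} for n in (1.0, 2.0, 5.0)]
    with ProcessPoolExecutor(max_workers=4,
                             initializer=_init_worker,
                             initargs=(common_blob,)) as pool:
        print(list(pool.map(simulate, tasks)))

    # For comparison: a text protocol such as JSON typically encodes the same
    # numeric payload less compactly than the binary pickle shipped above.
    print(len(common_blob), "bytes binary vs",
          len(json.dumps(yield_curve).encode()), "bytes JSON")
```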
Minimal Development Effort
The time needed to develop HPC applications in cloud environments should be minimized by supporting the following features:
• No server-side development. It should be possible to run functions and algorithms in parallel without requiring any server-side development, allowing quicker development and utilization of existing servers (see the sketch below this list). Conventional HPC software platforms cannot be used in cloud computing since external data centers will not allow tampering with their servers.
• Minimal application development. Cloud-based HPC software should be designed as an integrated set of component-based frameworks containing many “knobs” that can be extended and tuned transparently to support new user requirements and application feature enhancements easily and quickly.
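As a rough illustration of running an existing routine in parallel with no server-side development, the sketch below fans an unmodified, sequential pricing function out across worker processes purely from the client side, using only the Python standard library. The black_scholes function and run_parallel adapter are hypothetical stand-ins, not the API of any specific HPC platform.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial
import math

def black_scholes(spot, strike, rate, vol, t):
    """Existing sequential pricing routine, left completely untouched."""
    n = lambda x: (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0  # standard normal CDF
    d1 = (math.log(spot / strike) + (rate + vol ** 2 / 2.0) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * n(d1) - strike * math.exp(-rate * t) * n(d2)

def run_parallel(fn, arg_list, workers=4):
    """Client-side adapter: fan an unmodified function out across workers."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, arg_list))

if __name__ == "__main__":
    strikes = [80.0, 90.0, 100.0, 110.0, 120.0]
    price = partial(black_scholes, 100.0, rate=0.02, vol=0.25, t=1.0)
    print(run_parallel(price, strikes))
```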
Rapid Configuration and Deployment
The time needed to configure and deploy HPC applications should be minimized by supporting the following features:
• Automatic parallel configuration that quickly configures existing non-parallel application code for parallel execution, leveraging the processing power of the entire cloud. It should be possible to configure applications to run in collocated and/or distributed parallel deployments that maximize the use of available cloud computing resources (see the sketch below this list).
• Platform independence. Any distributed and/or collocated computation should be deployable on any popular operating system or platform with complete and automatic interoperability. This platform independence means a Windows application can leverage the processors on Linux, Solaris, AIX, Mac, and other operating systems without having to re-write or port existing applications.
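The following sketch illustrates configuration-driven deployment: a small declarative descriptor, rather than code changes, selects how an unmodified task runs in parallel. The JSON layout and mode names are assumptions made for illustration; a real distributed mode would plug in a cluster-aware executor where noted.

```python
import json
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

# Hypothetical deployment descriptor; not any product's actual config format.
DEPLOYMENT_JSON = '{"mode": "collocated-processes", "workers": 4}'

def build_pool(descriptor):
    """Map a declarative deployment descriptor onto an executor, with no
    changes to the application code that runs on it."""
    if descriptor["mode"] == "collocated-processes":
        return ProcessPoolExecutor(max_workers=descriptor["workers"])
    if descriptor["mode"] == "collocated-threads":
        return ThreadPoolExecutor(max_workers=descriptor["workers"])
    # A "distributed" mode would return a pool backed by remote hosts; that
    # backend is outside the scope of this sketch.
    raise ValueError(f"unknown deployment mode: {descriptor['mode']}")

def convert(doc_id):
    """Stand-in for an existing task such as document format conversion."""
    return f"converted-{doc_id}"

if __name__ == "__main__":
    cfg = json.loads(DEPLOYMENT_JSON)
    with build_pool(cfg) as pool:
        print(list(pool.map(convert, range(8))))
```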
Intuitive Use and Administration
HPC applications in cloud environments should be intuitive to use and administer by supporting the following features:
• Automatic load equalization. Unlike conventional HPC software platforms, HPC cloud software should automatically equalize the load between all of the available (heterogeneous and/or homogeneous) processors in the cloud adaptively, thereby eliminating the tedious trial and error needed to maximize performance.
• Automatic service discovery. HPC cloud software should dynamically discover and optimize all processors available at runtime. When new machines are added to or removed from a deployment, the software should automatically reconfigure accordingly, which ensures maximum performance at all times with little or no administrative input.
• Automatic real-time monitoring and auditing. HPC cloud software should provide powerful tools for automatically monitoring and transparently auditing huge volumes of application and system events. These tools can help minimize the total cost of ownership by enabling real-time decision making that is more accurate and relevant than is possible with manual monitoring and current auditing approaches.
• Persistence and recoverability. HPC cloud software should provide self-adaptive, fault-tolerant architectures that ensure applications automatically recover and transparently re-execute requests on different servers if existing servers disconnect or fail (see the sketch below this list).
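As a simple illustration of transparent re-execution, the sketch below resubmits any failed invocation so that another available worker retries it. It is a minimal, standard-library Python approximation of the idea; production HPC cloud middleware would also detect whole-host failures and recover in-flight state.

```python
import random
from concurrent.futures import ProcessPoolExecutor, as_completed

def flaky_task(x):
    """Simulated work that occasionally fails with a transient error."""
    if random.random() < 0.3:
        raise RuntimeError(f"transient failure on input {x}")
    return x * x

def run_with_retries(fn, inputs, retries=3, workers=4):
    """Resubmit failed invocations so another available worker re-executes them."""
    results = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        pending = {pool.submit(fn, x): (x, retries) for x in inputs}
        while pending:
            fut = next(as_completed(pending))   # wait for any task to finish
            x, retries_left = pending.pop(fut)
            try:
                results[x] = fut.result()
            except Exception:
                if retries_left == 0:
                    raise                       # give up after repeated failures
                pending[pool.submit(fn, x)] = (x, retries_left - 1)
    return results

if __name__ == "__main__":
    print(run_with_retries(flaky_task, range(10)))
```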
This article has presented the drawbacks of conventional approaches and potential solutions for delivering ultra-high performance in the cloud. Examples of domains that can benefit from distributing workload across an ultra-high-performance cloud include: financial risk assessment and modeling (e.g., Value-at-Risk and historical calculations); real-time decision-making based on algorithmic feedback (e.g., market making, electronic strategy arbitrage, and high-frequency trading); processing large graphical images (e.g., medical MRI and video animation) and document sets (e.g., format conversion); and processing, archiving, storing, and searching individual documents and content repositories for enterprise content management systems (e.g., news websites and web encyclopedias).
About the Authors
Dr. Douglas C. Schmidt is a Professor of Computer Science at Vanderbilt University. He leads technology strategy and solutions, community development, and external strategic partnerships at Zircon Computing. He has published 9 books and over 400 technical papers that cover a range of research topics, including patterns, optimization techniques, and empirical analyses of software frameworks and domain-specific modeling environments that facilitate the development of distributed real-time and embedded (DRE) middleware and applications running over high-speed networks and embedded system interconnects. Dr. Schmidt also has served as a Deputy Office Director and a Program Manager at DARPA, where he led the national R&D effort on middleware for DRE systems. In addition to his academic research and government service, Dr. Schmidt has two decades of experience leading the development of ACE, TAO, CIAO, and CoSMIC, which are widely used, open-source DRE middleware frameworks and model-driven tools that contain a rich set of components and domain-specific languages that implement patterns and product-line architectures for high-performance DRE systems.
Ron Guida joined Zircon Computing in 2007 as Director of Worldwide Sales and Marketing. Since 1981, Ron has helped launch and grow a number of technology companies, such as VENUSA, Covalent Systems, Bluestone, u1.net, ETSEC, and Optaros. Over his career, Ron has helped penetrate key accounts such as GE, The Vanguard Group, CNN, JPMorgan Chase, Verizon, the Walt Disney Company, Keystone Foods, and The NYSE. Ron graduated from Drexel University in 1981 with a Bachelor of Science in Business.