April 03, 2012
The age of "mainstream supercomputing" has been forecast for some years. There has even arisen something of a debate as to whether such a concept is even possible – does "supercomputing," by definition, cease being "super" the moment it becomes "mainstream?"
Whether mainstream supercomputing is here or ever literally can be, however, it is indisputable that more and more powerful capabilities are becoming available to more and more diverse users. The power of today's typical workstations exceeds that which constituted supercomputing not very long ago.
The question now is where all of this processing power – increasingly "democratized" – might eventually take the world. There are clues today of the mind-blowing benefits this rapidly evolving technology might yield tomorrow.
Better Products Faster – and Beyond
Supercomputing already undergirds some of the world's most powerful state-of-the-art applications.
Computational fluid dynamics (CFD) is a prime example. In CFD, the flow and interaction of liquids and gases can be simulated and analyzed, enabling predictions and planning in a host of activities, such as developing better drug-delivery systems, assisting manufacturers in achieving compliance with environmental regulations and improving building comfort, safety and energy efficiency.
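At its core, a CFD solver steps a discretized physical field forward in time on a grid. As a rough illustration of that idea (not of any production solver mentioned here), the sketch below advances the 1D diffusion equation with an explicit finite-difference scheme; all parameters are illustrative choices.

```python
# Minimal sketch of the finite-difference time-stepping at the heart of many
# CFD-style solvers: the 1D diffusion equation u_t = alpha * u_xx.
# Grid size, alpha, and dt below are illustrative, not from any real solver.

def diffuse_1d(u, alpha, dx, dt, steps):
    """Advance a 1D temperature/concentration profile `steps` time steps."""
    u = list(u)
    r = alpha * dt / dx**2            # explicit scheme is stable for r <= 0.5
    for _ in range(steps):
        new = u[:]                    # endpoints act as fixed boundary values
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
        u = new
    return u

# Example: a hot spot in the middle of a cold rod gradually spreads out.
profile = [0.0]*4 + [1.0] + [0.0]*4
smoothed = diffuse_1d(profile, alpha=1.0, dx=1.0, dt=0.2, steps=50)
print(max(smoothed))  # the peak has dropped well below the initial 1.0
```

Real CFD codes solve far richer equations (Navier–Stokes, turbulence models) in 3D, which is precisely why they consume supercomputer-scale resources.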
Supercomputing has also enabled more rapid and accurate finite element analysis (FEA), which players in the aerospace, automotive and other industries use in defining design parameters, prototyping products and analyzing the impact of different stresses on a design before manufacturing begins. As in CFD, the benefits include slashed product-development cycles and costs and more reliable products – in short, better products faster.
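FEA works by assembling element stiffness contributions into a global system of equations and solving for displacements. The toy example below, a clamped elastic bar pulled at its free end and split into two linear elements, shows the assemble-and-solve pattern in miniature; the material numbers are made up for illustration.

```python
# A toy 1D finite element analysis: an elastic bar fixed at one end, pulled
# at the other, discretized into linear elements of stiffness k = E*A/L_e.
# All numbers are illustrative, not from the article.

def solve_bar(E, A, L, F, n_elem=2):
    """Return nodal displacements of a clamped bar under end load F."""
    k = E * A / (L / n_elem)                      # stiffness of each element
    n = n_elem + 1
    K = [[0.0] * n for _ in range(n)]             # assemble global stiffness
    for e in range(n_elem):
        K[e][e]     += k
        K[e][e+1]   -= k
        K[e+1][e]   -= k
        K[e+1][e+1] += k
    f = [0.0] * n
    f[-1] = F                                     # axial load at the free end
    # Apply the clamped boundary condition u0 = 0, then solve the reduced
    # system by Gaussian elimination.
    idx = list(range(1, n))
    Kr = [[K[i][j] for j in idx] for i in idx]
    fr = [f[i] for i in idx]
    m = len(idx)
    for col in range(m):                          # forward elimination
        for row in range(col + 1, m):
            factor = Kr[row][col] / Kr[col][col]
            for j in range(col, m):
                Kr[row][j] -= factor * Kr[col][j]
            fr[row] -= factor * fr[col]
    u = [0.0] * m
    for row in range(m - 1, -1, -1):              # back substitution
        s = fr[row] - sum(Kr[row][j] * u[j] for j in range(row + 1, m))
        u[row] = s / Kr[row][row]
    return [0.0] + u

# End displacement matches the closed form F*L/(E*A):
disp = solve_bar(E=200e9, A=1e-4, L=1.0, F=1000.0)
print(disp[-1])  # 1000*1.0/(200e9*1e-4) = 5e-05 m
```

Industrial FEA models have millions of elements rather than two, which turns this same linear algebra into a supercomputing workload.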
Weather forecasting and algorithmic trading are other applications that today rely heavily on supercomputing. Indeed, supercomputing is emerging as a differentiating factor in global competition across industries.
More Power to More People
As supercomputing's enabling technologies – datacenter interconnection via fiber-optic networks and protocol-agnostic, low-latency Dense Wavelength Division Multiplexing (DWDM) techniques, processors, storage, memory, etc. – have grown ever more powerful, access to the capability has grown steadily more democratized. The introduction of tools such as Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL) has simplified the process of creating programs that run across a heterogeneous gamut of compute cores. And offerings of high-performance computing (HPC) as a service have emerged.
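CUDA and OpenCL both express data parallelism as a "kernel" applied independently to every element of a dataset, with the runtime fanning the work out across cores. The sketch below mimics that map-a-kernel-over-data pattern with ordinary Python threads purely to illustrate the programming model; the function names and workload are invented for this example, and a real GPU kernel would of course be written in CUDA C or OpenCL C.

```python
# Illustration of the data-parallel "kernel" model popularized by CUDA and
# OpenCL, mimicked here on CPU threads. Names and workload are illustrative.

from concurrent.futures import ThreadPoolExecutor

def saxpy_kernel(args):
    """One 'work item': compute a*x + y for a single element."""
    a, x, y = args
    return a * x + y

def saxpy(a, xs, ys, workers=4):
    """Apply the kernel to every element pair, in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(saxpy_kernel, ((a, x, y) for x, y in zip(xs, ys))))

print(saxpy(2.0, [1, 2, 3], [10, 20, 30]))  # [12.0, 24.0, 36.0]
```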
Amazon Web Services (AWS), for example, has garnered significant attention with the rollout of an HPC offering that allows customers to select from a menu of elastic resources and pricing models. "Customers can choose from Cluster Compute or Cluster GPU instances within a full-bisection high bandwidth network for tightly-coupled and IO-intensive workloads or scale out across thousands of cores for throughput-oriented applications," the company says. "Today, AWS customers run a variety of HPC applications on these instances including Computer Aided Engineering, molecular modeling, genome analysis, and numerical modeling across many industries including Biopharma, Oil and Gas, Financial Services and Manufacturing. In addition, academic researchers are leveraging Amazon EC2 Cluster instances to perform research in physics, chemistry, biology, computer science, and materials science."
These technological and business developments within supercomputing have met with a gathering external enthusiasm to harness "Big Data." More organizations of more types are seeking to process and base decision-making on more data from more sources than ever before.
The result of the convergence of these trends is that supercomputing – once strictly the domain of the world's largest government agencies, research-and-education institutions, pharmaceutical companies and the few other giant enterprises with the resources to build (and power) clusters at tremendous cost – is gaining an increasingly mainstream base of users.
The Political Push
Political leaders in nations around the world see in supercomputing an opportunity to better protect their citizens and/or to enhance or at least maintain their economies' standing in the global marketplace.
India, for example, is investing in a plan to indigenously develop by 2017 a supercomputer that it believes will be the fastest in the world – one delivering a performance of 132 quintillion operations per second. Today's speed leader, per the November 2011 TOP500 List of the world's fastest supercomputers, is a Japanese model that checks in at a mere 10 quadrillion calculations per second. India's goals for its investments are said to include enhancing its space-exploration program, monsoon forecasting and agricultural outputs.
Similar news has come out of the European Union. The European Commission's motivation for doubling its HPC ante was reportedly to strengthen its presence on the TOP500 List and to protect and create jobs in the EU. Part of the plan is to encourage supercomputing usage especially among small and medium-sized enterprises (SMEs).
SMEs are the focus of a pilot U.S. program, too.
"For SMEs who are looking to advance their use of existing MS&A (modeling, simulation and analysis), access to HPC platforms is critical in order to increase the accuracy of their calculations (toward predictive capability), and decrease the time to solution so the design and production cycle can be reduced, thus improving productivity and time to market," reads the overview for the National Digital Engineering and Manufacturing Consortium (NDEMC).
The motivation here is not simply to level the playing field for smaller businesses that are struggling to compete with larger ones. Big OEMs, in fact, help identify the SMEs that might be candidates for participating in the NDEMC effort, which launched with funding from the U.S. Department of Commerce, state governments and private companies. One of the goals is to extend the product-development efficiencies and quality enhancements that HPC has already brought to the big OEMs to the smaller partners throughout their manufacturing supply chains.
Reasons the NDEMC: "The network of OEMs, SMEs, solution providers, and collaborators that make up the NDEMC will result in accelerated innovation through the use of advanced technology, and an ecosystem of like-minded companies. The goal is greater productivity and profits for all players through an increase of manufacturing jobs remaining in and coming back to the U.S. (i.e. onshoring/reshoring) and increases in U.S. exports."
Frontiers of Innovation
Where might this democratization of supercomputing's benefits take the world? How might the extension of this type of processing power to mass audiences ultimately impact our society and shared future? Some of today's most provocative applications offer a peek into the revolutionary potential of supercomputing.
For example, Harvard Medical School's Laboratory of Personalized Medicine is leveraging Amazon's Elastic Compute Cloud service in developing "whole genome analysis testing models in record time," according to an Amazon Web Services case study. By creating and provisioning scalable computing capacity in the cloud within minutes, the Harvard Medical School lab is able to more quickly execute its work in helping craft revolutionary preventive healthcare strategies that are tailored to individuals' genetic characteristics.
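One reason genome analyses map so well onto elastic cloud capacity is that many of their early stages are embarrassingly parallel: the same simple counting or matching operation runs independently over millions of short sequence fragments. As a deliberately tiny, generic illustration (the sequence and parameters are invented, and this reflects no specific pipeline at the Harvard lab), here is the classic k-mer counting step:

```python
# k-mer counting, a common embarrassingly parallel first step in genome
# analysis pipelines. The sequence and k below are invented for illustration.

from collections import Counter

def count_kmers(seq, k):
    """Count every overlapping substring of length k in a DNA sequence."""
    return Counter(seq[i:i+k] for i in range(len(seq) - k + 1))

counts = count_kmers("ACGTACGTAC", k=4)
print(counts["ACGT"])  # occurs at positions 0 and 4 -> 2
```

Because each chunk of a genome can be counted independently and the results merged, workloads like this scale out naturally across rented cloud cores.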
Other organizations are leveraging Amazon's high-performance computing services for optimizing wind-power installations, processing high-resolution satellite images and enabling innovations in the methods of reporting and consuming news.
Similarly, an association of R&E institutions in Italy's Trieste territory, "LightNet," has launched a network that allows its users to dynamically configure state-of-the-art services. Leveraging a carrier-class, 40Gbit/s DWDM solution for high-speed connectivity and dynamic bandwidth allocation, LightNet supports multi-site computation and data mining – as well as operation of virtual laboratories and digital libraries, high-definition broadcasts of surgical operations, remote control of microscopes, etc. – across a topology of interconnected, redundant fiber rings spanning 320 kilometers.
Already we are seeing proof that supercomputing enables new questions to be both asked and answered. That trend will only intensify as more of the world's most creative and keenest thinkers gain access to this breakthrough capability.