September 10, 2010
Andrew Keane, General Manager of Tesla High Performance Computing at NVIDIA, published a rousing call to arms in All Things Digital with his treatise on the “Crisis in Computing” and its effects on the American economy. Echoing the Council on Competitiveness and its statement that HPC is a cornerstone of economic well-being and leadership, Keane suggests that America is lagging behind in the key areas it once dominated, and that once we have fallen back, climbing to the top again is a laborious, if not nearly impossible, task.
On a more refined level (and harkening back to NVIDIA’s aims as well), Keane argues, “the traditional CPU-based technology that once put America in the lead is now the anchor holding us back. Our legacy computing is no longer scaling cost-effectively and power-efficiently enough. The effects of this lost leadership will soon be severely felt in every aspect of American business and economic life unless we decide to do something about it.”
This is a common sentiment, but the problem he identifies in terms of “legacy” computing does not have a simple solution, in part because HPC is so broadly encompassing and scattered across academia and industry. Keane states that it’s now “past time for private industry and the public sector to get our HPC act together, before other nations steal the show,” but this is certainly easier said than done.
It’s difficult to counter the conjecture that America is, indeed, quickly falling behind in terms of HPC-enabled development. The most commonly proffered example of the dwindling competitive edge is China, with its swift developments, all made on our processors. In Keane’s view, coming as he does from the world of GPUs, China (and Europe, for that matter) is “jumping straight into next-generation, hybrid HPC by adding graphics processing units (GPUs) to drive far better price, efficiency and performance.” The result is that competitors are able to deploy tremendous capability at a lower price.
Keane is stating in no uncertain terms that GPU computing is key to improving, refining, and building upon the systems and developments of the past, but that those past systems are now the only thing lurching us forward. This is a bold statement, but it is certainly not without significant merit. Supercomputing capacity matched with bleeding-edge visualization is the key to emerging research and development that relies on visualization, simulation, and modeling. Yet if we rely only on what we already have (and there are good reasons for that, of course, since HPC is, after all, incredibly expensive) and do not expand our horizons, the knowledge economy that sustains us now more than at any other time in American history will crumble. Will, not might. It’s down to the wire.
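For readers unfamiliar with what “hybrid HPC” looks like in practice, the sketch below shows the division of labor Keane is describing: the CPU stages data and orchestrates, while the GPU executes the data-parallel arithmetic across thousands of threads. This is a minimal, illustrative CUDA example, not anything from Keane’s piece; the SAXPY kernel, array size, and launch configuration are arbitrary choices made here for illustration.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Illustrative data-parallel kernel: y = a*x + y across a million elements.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> x(n, 1.0f), y(n, 2.0f);    // host (CPU) data

        float *d_x, *d_y;                             // device (GPU) buffers
        cudaMalloc(&d_x, n * sizeof(float));
        cudaMalloc(&d_y, n * sizeof(float));
        cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // The CPU launches the kernel; the GPU runs it in parallel.
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

        cudaMemcpy(y.data(), d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", y[0]);                  // expect 4.0

        cudaFree(d_x);
        cudaFree(d_y);
        return 0;
    }

The point is the split itself: the host CPU remains the orchestrator while the accelerator absorbs the throughput-heavy inner loop, which is where the price, power, and performance arguments for hybrid HPC come from.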
As Keane notes…
“To sustain and extend our lead in high performance computing, we don’t have to revive the decades-old debate about industrial policy and the government picking winners through massive bets on industry sectors. We just need to spend smarter to get cost-effective hybrid HPC on the national agenda, and equip our best minds with the computing capacity they need to innovate and create jobs.”
Full story at All Things Digital
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.