GPU Tech Conference Wrap-Up

By Michael Feldman

September 23, 2010

If there’s one takeaway from this week’s NVIDIA GPU Technology Conference (GTC), it’s that GPU computing has grown up. Having attended last year’s event, I was struck by how many more academic researchers and companies are taking the technology seriously in 2010. The exhibition hall was twice the size of GTC 2009’s, enough to accommodate the 100 or so vendors plying their GPGPU wares. As NVIDIA CEO Jen-Hsun Huang said in Thursday morning’s fireside chat session, “This is the year when applications developed on GPU computing go into production.”

There was so much activity centered on technical computing at this year’s event that at times it seemed like a CPU-less version of November’s Supercomputing Conference. That was also reflected in the exhibitor list, which included HPC stalwarts like IBM, HP, SGI, Dell, Appro, Supermicro, Microsoft, The Portland Group, Platform Computing, Mellanox, T-Platforms and at least a dozen others.

Application areas like seismic exploration, weather modeling, computer vision, and medical imaging are latching onto this technology quickly. Just slightly further behind are domains like biomolecular modeling, which appears to be ripe for the GPU. The Wednesday keynote by Dr. Klaus Schulten, a computational chemist at the University of Illinois at Urbana-Champaign, highlighted some early benefits in this area.

Schulten and his team at UI have started applying GPU acceleration to a range of molecular simulations. In his work, Schulten is employing GPGPU technology to develop the concept of a “computational microscope,” which is designed for nanoscale examination of biomolecules and cells. This virtual microscope consists of basic chemistry and physics algorithms, NAMD software (which will soon offer a GPU port), and supercomputing hardware.
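Molecular simulations of the kind Schulten described spend most of their time evaluating pairwise nonbonded interactions between atoms, a computation that maps naturally onto thousands of GPU threads. As a rough illustration only (this is not NAMD's actual code, and the function and parameter names here are hypothetical), the serial hot spot that a GPU port parallelizes looks something like this:

```python
# Toy nonbonded interaction kernel: total Lennard-Jones pair energy.
# Each pair's contribution is independent of the others, which is exactly
# the data parallelism a GPU port exploits -- one thread per particle
# (or per pair) instead of this serial double loop.

def lj_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy for a list of (x, y, z) coordinates."""
    n = len(positions)
    total = 0.0
    for i in range(n):
        xi, yi, zi = positions[i]
        for j in range(i + 1, n):          # each unique pair once
            xj, yj, zj = positions[j]
            r2 = (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2
            sr6 = (sigma * sigma / r2) ** 3
            total += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return total
```

Because the loop body touches only a pair's own coordinates, a CUDA version can assign the outer iterations to independent threads, which is why this class of code sees such large speedups on GPUs.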

One application Schulten talked about was modeling the flu drug Tamiflu to determine how the H1N1 (“swine flu”) virus developed resistance to it. He’s also using the technology to study such phenomena as virus infections, protein synthesis, the mechanism of photosynthesis, epigenetics, and quantum chemistry. Some of the work is being accomplished on GPU workstations, but the larger models use NCSA’s Lincoln supercomputer, a heterogeneous cluster built from Dell PowerEdge servers and Tesla S1070 servers. Speedups varied by application, the best being the quantum chemistry code: a simulation run that took a day on a CPU took just a minute on the GPU platform.

There were a couple of sessions on the military applications of GPU computing, which looks to be a lucrative area for this technology. One presentation, hosted by EM Photonics, illustrated how GPGPU technology is being employed to accelerate compute-intensive applications in this domain. For example, an advanced image processing application was able to enhance long-distance photographs blurred by atmospheric distortion. GPU acceleration made it possible to perform this digital enhancement in real time, opening up new applications for warfare and security operations. Other apps include electromagnetics simulations and CFD — the latter being used to simulate aircraft landings on carriers. Depending on the military scenario, the GPU platform could be a desktop machine, an embedded system, or a cluster.

Other GPU computing applications that got some exposure at GTC this year are business intelligence, complex event processing, and speech recognition — three areas that until now would not have been associated with graphics processors. And of course there was a plethora of esoteric research applications, for example, “Using GPUs for Real-Time Brain-Computer Interfaces” — something that would have come in handy at GTC this week, given the overload of sessions, posters, exhibits, and after-hours partying.

This also looks to be a breakout year for ISV support of GPGPU in HPC. At the event, ANSYS announced it would be incorporating GPU acceleration into its popular engineering modeling and analysis solution, ANSYS Mechanical. That product is slated for release later in the year. And although SIMULIA and Livermore Software Technology Corp. (LSTC) made no formal announcements this week, two GTC presentations on Thursday suggest they also will be bringing out GPGPU support for their flagship products (Abaqus FEA and LS-Dyna, respectively) within the next few months.

Even though GTC was more about developers and applications, there were a few sessions highlighting some of the larger GPU supercomputers deployed, or about to be deployed. In the latter category is TSUBAME 2.0, Tokyo Tech’s next-generation 2.4-petaflop super, which will be stuffed to the gills with 4,244 Tesla M2050 GPUs. In Tuesday’s presentation, Satoshi Matsuoka spotlighted some of the cutting-edge apps that will be running on the new machine. Among them is ASUCA, Japan’s next-generation weather forecasting code, which has been completely ported to the GPU (and reportedly took a year to do so). The result is a weather modeling application that runs faster than real time at resolutions of 0.5 km. According to Matsuoka, TSUBAME 2.0 is installed and undergoing stress tests, and will be formally announced in early October — so expect more coverage to follow.

If 2.4 petaflop supers don’t impress you, you’ll just have to wait a bit. Thanks to a brief peek at NVIDIA’s roadmap on Tuesday, the next generation of NVIDIA GPUs, Kepler, is slated to arrive in 2011. As Jen-Hsun Huang noted, “GPU computing is just starting. It’s nothing compared to what you’re going to have in a couple of years.”
