Planting CUDA Seeds
NVIDIA is continuing to push hard on CUDA, the company’s C-based software environment for GPU computing. With last month’s announcement of the first CUDA Center of Excellence at the University of Illinois at Urbana-Champaign, NVIDIA said it donated half a million dollars to the school.
The announcement also noted that “[u]niversities wishing to become CUDA Centers of Excellence must teach a CUDA class and use CUDA technology in their research, usually across several labs. In return, NVIDIA supports the school through funding and equipment donations, including help to set up a GPU computing cluster.” This is the same general idea behind the Sony, Toshiba and IBM (STI) Center of Competence program for Cell technology. The STI Center was established at Georgia Tech last year. Can a Larrabee Center of Distinction be far behind?
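For those who haven't looked at it, CUDA code really is just C with a handful of extensions: you write a "kernel" function that executes across thousands of lightweight GPU threads and launch it from an ordinary host program. The sketch below is a generic illustration of that model, not anything from NVIDIA's coursework or the Illinois curriculum; the kernel name, array size and scale factor are all arbitrary.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Kernel: each GPU thread scales one element of the array. */
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                 /* 1M floats, an arbitrary size */
    size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        h_data[i] = 1.0f;

    float *d_data;
    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_data, 2.0f, n);

    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
    printf("data[0] = %f\n", h_data[0]);   /* prints 2.000000 */

    cudaFree(d_data);
    free(h_data);
    return 0;
}

The <<<blocks, threads>>> launch syntax is the main visible departure from plain C; the rest is ordinary memory management and a function that happens to run on the GPU.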
Big Time Protein Folding
On the hardware side, even NVIDIA's non-Tesla gear is starting to show up in HPC applications. On Thursday, the company announced the big impact its GeForce GPUs are having on Stanford's Folding@Home project. The project aggregates donated compute cycles on desktop systems to calculate how different types of proteins fold. The idea is to help understand protein behavior as it relates to cancer, cystic fibrosis, Parkinson's and other diseases.
Powered by CUDA-enabled software, the GeForce GPUs are doing yeoman’s work for the folders. According to a recent ExtremeTech report, there are currently over 7,000 GPUs running the Folding@Home program, yielding around 840 teraflops of application performance. The article goes on to say:
That’s somewhere around 110 gigaflops per GPU, on average. To put that in perspective, the regular Windows CPU client is about one gigaflop per client (it’s a mix of the single-threaded client and the multi-core SMP version). The PS3 looks like it leads the pack with a total of 1,358 teraflops, but that’s from over 48,000 active PS3s. Each PS3 is actually delivering about 28 gigaflops apiece.
Those are all 32-bit floating-point performance numbers, but if you can cure cancer with single precision, that’s fine with me.
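As a sanity check, the per-device numbers in that quote are just the aggregates divided by the client counts. Dividing 840 teraflops by exactly 7,000 GPUs gives about 120 gigaflops apiece, so the quoted 110 presumably reflects an active-GPU count somewhat above 7,000. Here is the back-of-the-envelope arithmetic, using the totals and client counts reported above:

#include <stdio.h>

int main(void)
{
    /* Aggregate throughput and client counts as reported by ExtremeTech. */
    double gpu_total_gflops = 840000.0;   /* ~840 teraflops from GeForce clients */
    double gpu_clients      = 7000.0;     /* "over 7,000" active GPUs            */
    double ps3_total_gflops = 1358000.0;  /* ~1,358 teraflops from PS3 clients   */
    double ps3_clients      = 48000.0;    /* "over 48,000" active PS3s           */

    printf("per GPU: ~%.0f gigaflops\n", gpu_total_gflops / gpu_clients); /* ~120 */
    printf("per PS3: ~%.0f gigaflops\n", ps3_total_gflops / ps3_clients); /* ~28  */
    return 0;
}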
Argonne’s Blue Gene/P Gets a Visual Buddy
On Wednesday, Argonne National Laboratory announced its plans to add an NVIDIA Quadro-based data analytics/visualization system to pair up with its Blue Gene/P supercomputer. The visualization system, named Eureka, will turn the torrents of data produced by applications running on Blue Gene into pretty pictures that make sense to mere mortals.
Apparently, 208 NVIDIA Quadro GPUs will be used to construct the system, which is being built by GraphStream, Inc. The hardware will consist of four racks of 1U boxes, with each box containing four Quadro graphics cards. According to a Dr. Dobb’s article, the Eureka server building block is the SuperMicro 6015-UR. Each GPU box is hooked to two SuperMicro servers, so each compute server drives two GPUs. Visualization is driven by data that comes from a very large storage array, which is also hooked up to the Blue Gene machine.
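For what it's worth, those figures imply a fairly straightforward topology: 208 GPUs at four per box works out to 52 GPU boxes, or 13 per rack across the four racks, and with two servers wired to each box that is roughly 104 compute servers driving two GPUs apiece. The derived counts in the sketch below are my own arithmetic from the numbers above, not figures Argonne or GraphStream has published directly.

#include <stdio.h>

int main(void)
{
    /* Figures as reported for Eureka; derived counts are simple division. */
    int gpus            = 208;  /* NVIDIA Quadro GPUs                     */
    int gpus_per_box    = 4;    /* Quadro cards per 1U GPU box            */
    int servers_per_box = 2;    /* SuperMicro servers wired to each box   */
    int racks           = 4;

    int boxes   = gpus / gpus_per_box;      /* 52 GPU boxes   */
    int servers = boxes * servers_per_box;  /* 104 servers    */

    printf("%d GPU boxes (%d per rack), %d servers, %d GPUs per server\n",
           boxes, boxes / racks, servers, gpus / servers);
    return 0;
}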
The Manycore War of Words
Meanwhile, NVIDIA and Intel continued to spar over the GPU-versus-Larrabee matchup. An article in Custom PC recorded Andy Keane’s reaction to Pat Gelsinger’s recent comments disparaging CUDA. Keane, general manager of NVIDIA’s GPU computing group, took offense at some offhand remarks made by the Intel exec earlier this month. Gelsinger, a senior vice president and co-general manager of Intel’s Digital Enterprise Group, started the brouhaha by claiming that GPGPU languages like NVIDIA’s CUDA will one day be nothing more than “interesting footnotes in the history of computing annals,” adding that Larrabee, Intel’s upcoming manycore processor, will be the solution that succeeds in the long term.
If you’re a self-respecting Nvidian, those are fighting words. From the article, here is the gist of Keane’s reaction:
[T]he high level of interest in CUDA “is causing Larrabee. Larrabee’s the reaction.” He then added that “these comments from Gelsinger; if we were not making a lot of headway do you think he’d even give us a moment’s notice? No. It’s because he sees a lot of this activity. The strategy is to try to position it [CUDA] as something scary and unique, and it’s really not; it’s something that’s very accessible.”
Next month, Intel may release a lot more details about Larrabee at SIGGRAPH and the Intel Developer Forum. An article published Tuesday in The Inquirer says Larrabee developer boards will ship in November. If true, NVIDIA is going to have a much better idea of what it’s up against real soon.
NVIDIA for Sale?
Finally, Simon Brew in the UK’s IT PRO speculates about whether NVIDIA could be bought out. The reasoning behind the speculation is the AMD-ATI merger, which was designed to synergize two successful technologies into a greater whole (or into a greater hole, as the case may be). Even Brew admits, “things haven’t quite gone to plan.”
His real argument is that in the long term, NVIDIA’s toughest competition will be Intel, not AMD. The implication is that the GPU vendor will have to be a lot bigger and brawnier to go up against the chip giant, especially in the expanding mobile graphics market, where Intel dominates. That’s a valid point. But so far, NVIDIA has been more nimble than Intel in the graphics arena, and it’s got a big head start at the high end of the market. I’m guessing NVIDIA figures it can still outrun the competition. We’ll see…