September 28, 2010
Last week during a chat at the GPU Technology Conference (GTC) in San Jose, Sumit Gupta, product manager for NVIDIA’s Tesla GPU Computing Group, suggested that GPUs in the cloud are just a natural evolution of HPC in the cloud.
It's hard to argue with Gupta's point; once an application is already running in the cloud, a GPU can accelerate it, lending researchers added capability, particularly in climate modeling, computational fluid dynamics, and a range of other application areas that are cloud-ready.
A broader look at the basic relationship between HPC and clouds, in terms of both development and use, is a reminder that the development push is coming from different sides of the spectrum. Furthermore, as GPUs become more widely used in HPC, it makes sense that they would become more widely available in the cloud, although one should note that this “cloud” is really closer to “on demand,” since no virtualization is involved.
From the Bottom Up
In the beginning, HPC was rooted in science and discovery, with weather modeling as the first “killer app,” before it began to trickle into the enterprise. What we're seeing now with cloud represents a major shift as well, but this time the roots of innovation run from the enterprise up to the world of scientific computing. Instead of coming from climate research centers, the movement is being driven by the customer side of the computing equation: the virtualization of the office is now driving the virtualization and on-demand era for the scientific and technical computing world.
NVIDIA's Tesla product manager stated that in his view, “the true promise of the cloud is being able to handle bursts,” and these bursts, not to mention the capacity itself, can be delivered by clouds, whether you define them as virtualized servers or simply as rented infrastructure. It is this capability, now available at lower cost on both the opex and capex levels, that is driving growth in the enterprise markets, not to mention the broader market for GPUs.
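To make the capex/opex tradeoff concrete, consider a back-of-the-envelope break-even calculation. The sketch below is purely illustrative; every dollar figure is an assumption for the sake of the arithmetic, not actual vendor pricing:

    # Illustrative capex-vs-opex break-even sketch. All figures are
    # hypothetical assumptions, not real GPU or cloud pricing.

    OWNED_GPU_SERVER_COST = 10_000.0   # assumed upfront capex per GPU node ($)
    OWNED_HOURLY_OVERHEAD = 0.50       # assumed power/cooling/admin per node-hour ($)
    RENTED_GPU_HOURLY_RATE = 2.50      # assumed on-demand rate per GPU node-hour ($)

    def break_even_hours() -> float:
        """Node-hours of use at which owning becomes cheaper than renting."""
        return OWNED_GPU_SERVER_COST / (RENTED_GPU_HOURLY_RATE - OWNED_HOURLY_OVERHEAD)

    if __name__ == "__main__":
        print(f"Owning pays off after ~{break_even_hours():,.0f} node-hours")
        # A bursty job needing 100 GPUs one day a month uses
        # 100 * 24 * 12 = 28,800 node-hours a year: about $72,000 rented
        # under these assumptions, versus $1,000,000 upfront to own 100
        # nodes that sit idle most of the month.

Under these assumed numbers, a node has to stay busy for roughly 5,000 hours before ownership wins, which is exactly why bursty workloads tilt toward the rental model.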
Gupta is seeing dramatic interest in GPU technology in a number of areas that are already primed for virtualized environments, including remote transcoding of video and big data analytics. While he admits that these two areas are not a “slam dunk for GPUs, they are definitely accelerated by GPUs,” customers are repeatedly asking about GPU acceleration. Oftentimes the applications in question do not fall neatly into the traditional category of HPC, but they are, without argument, high-performance computing applications that require extreme computing capability.
Transcoding and similar enterprise applications already have needs that are well met by cloud or on-demand computing, because such needs are often “bursty” in nature and do not justify owning the massive machines required to crunch that level of data. Take Netflix, for example, a company whose massive transcoding needs must be met in a relatively short time frame, sometimes within 24 hours. When demand for a title suddenly surges, the company needs to transcode that same video into over a hundred different formats to suit the many device types and resolution requirements, but that same vast need might not be present the next day, or, for that matter, the next week.
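The shape of such a burst workload is easy to express in code. The following is a minimal Python sketch of fanning one title out to many target formats in parallel; the function and profile names are hypothetical, and a real pipeline would invoke an encoder such as ffmpeg on rented nodes rather than the placeholder below:

    # Minimal sketch of bursty fan-out transcoding: one source title is
    # converted into many device formats in parallel. Names are
    # hypothetical; the encode step is stubbed out.

    from concurrent.futures import ProcessPoolExecutor

    # 100+ device/resolution profiles, per the Netflix example above.
    TARGET_FORMATS = [f"profile_{i:03d}" for i in range(120)]

    def transcode(source: str, target_format: str) -> str:
        # Placeholder for the actual encode step (e.g., calling ffmpeg).
        return f"{source}.{target_format}.out"

    def burst_transcode(source: str) -> list[str]:
        # Fan the single title out across all available workers; on an
        # on-demand cluster, pool size would track the rented node count.
        with ProcessPoolExecutor() as pool:
            sources = [source] * len(TARGET_FORMATS)
            return list(pool.map(transcode, sources, TARGET_FORMATS))

    if __name__ == "__main__":
        outputs = burst_transcode("new_release.mezzanine")
        print(f"produced {len(outputs)} renditions")

The point of the pattern is that the pool can be a hundred rented GPU nodes today and zero tomorrow, which is precisely the elasticity the article describes.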
The convergence of GPU acceleration with on-demand access to vast computational resources has significant value for the same types of customers who have already been able to benefit, even if only in theory at this early cloud stage, from on-demand access to resources. Those with “bursty” needs are numerous, but only recently has this need been matched with the kinds of HPC resources required to handle their application- and data-specific demands.
Among other parallels, enterprise and scientific computing are both producing ever-larger sets of data to be analyzed and combed through, but again, a great deal of the innovation on this front is being propelled by the enterprise, since acceleration, meaning real-time results on such large volumes, yields immediate monetary benefit. For instance, Milabra, a company present at the Emerging Companies Summit at GTC, powers its photo-recognition software with GPUs to make real-time connections between web-based images and advertising. The company's application recognizes, for example, the shape of a toddler's head and features and immediately passes this to an ad-serving platform, which can then serve an ad for toddler toys in microseconds.
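The pattern here is a tight classify-then-serve loop. The sketch below shows its shape in Python; all names are hypothetical, and the classifier is a stub standing in for the GPU-accelerated recognition step:

    # Sketch of a real-time classify-then-serve loop of the kind
    # described above. The classifier is stubbed; in practice the
    # image-recognition step is what the GPU accelerates.

    import time

    AD_INVENTORY = {"toddler": "toddler_toys_ad", "unknown": "generic_ad"}

    def classify_image(image_bytes: bytes) -> str:
        # Stub for GPU-accelerated recognition (e.g., detecting a
        # toddler's head shape and features, per the example above).
        return "toddler"

    def serve_ad(image_bytes: bytes) -> tuple[str, float]:
        start = time.perf_counter()
        label = classify_image(image_bytes)
        ad = AD_INVENTORY.get(label, AD_INVENTORY["unknown"])
        elapsed_us = (time.perf_counter() - start) * 1e6
        return ad, elapsed_us  # latency in microseconds: the revenue clock

    if __name__ == "__main__":
        ad, latency = serve_ad(b"...page image bytes...")
        print(f"served {ad} in {latency:.0f} microseconds (stubbed classifier)")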
The incentive powering real-time results on huge datasets is clear here: shortening the time it takes the application to achieve its results has a perfect match in real-time revenue. The sooner the application can recognize the target and hand it back to the platform, the sooner funds flow. It's a beautiful thing, and while it is certainly not rooted in scientific or academia-driven HPC, this blend of on-demand or cloud resources matched with accelerated computation has its benefits for science and technical computing as well. The needs here are bursty, are reliant on real-time results, and, for a startup like Milabra, do not require NSF funding to get off the ground. Seeing a pattern here?
Many scientific users and computer scientists are invested in data analytics in the same way that companies with real-time concerns are, just for different purposes. They may not be as reliant on the near-instant photorealistic rendering of complex models that Autodesk delivers via the cloud, and they may not share the concerns of Netflix or a large e-commerce site, but the level of computation is extreme, and benefits are being derived not only from the GPGPU movement itself but from its availability to a new class of users.
A Natural Evolution?
While GPUs cannot be virtualized, some companies, including PEER1 Hosting and Penguin Computing, are nonetheless calling their on-demand GPU services “cloud.” Arguing over whether this qualifies as cloud seems a waste of time at this point (let's just agree, once and for all, that the on-demand element is the essence here), and these companies are poised for growth given the high cost of the hardware.
While GPU clusters are generally less expensive, in an era when scientists and enterprises can accelerate their applications and take advantage of this on demand, it's hard to find fault with the prediction that over the next few years GPGPU will find its way into mainstream arenas in a far bigger way than we could have imagined a couple of years ago. Gupta suggests that finance and oil and gas companies are two of the biggest potential customer bases expressing interest in “cloud” GPU capabilities, though it does take them time to evaluate their options.
When asked whether more companies were hoping to offer GPU-as-a-service, Gupta stated that NVIDIA has been talking to several cloud providers and that as more mainstream applications become available, applications that used to run on workstations and carried major operational costs, there will be a greater move in this direction. Already, MATLAB's and Autodesk's forays into the on-demand era have proven rather successful, at least from this early vantage point, so the future is wide open for other applications and vendors to step in and offer users the ability to tap into their clouds. There is nothing preventing this from happening now, after all.