July 26, 2010
This morning, synchronized with the opening of the 37th Annual SIGGRAPH International Conference, NVIDIA announced that it had partnered with PEER 1 to provide the industry’s first large-scale hosted GPU cloud. According to the announcement, the system will run the RealityServer 3D web application service platform, further enabling animators, product designers, and others who rely on advanced 3D applications to propel their business or research forward without the need for an in-house GPU cluster.
Although this is not the first hosted GPU cloud in history, it is certainly the first of its kind in terms of scale. NVIDIA evidently saw enough market demand for its RealityServer platform to form a partnership that makes it available to the masses, and this comes as big news to those who were otherwise barred from entry by high upfront GPU cluster costs.
According to Sumit Gupta, Product Manager at NVIDIA’s Tesla GPU Computing (CUDA) Group, the market need for such a cloud was clear for a wide range of HPC applications and uses. “There is widespread demand for hosted GPU clouds for markets such as financial services, 3D application rendering (with RealityServer and iray), scientific computing, and pharmaceutical and bioinformatics applications.”
Gupta notes that such a cloud is not a new concept for the company. He stated, “Similar GPU clusters have been built and used for quite some time. Several supercomputing centers and government labs have GPU clusters that researchers log into remotely in a similar fashion. In other words, the use of GPUs in such clouds is well understood and established.”
Prior to NVIDIA’s GPU-as-a-Service announcement, most of its scientific and industrial users deployed their GPU applications on their own in-house GPU clusters. With on-demand access to comparable services, however, the announcement not only extends the platform’s reach to new users, but also opens the possibility of a new competitive landscape for businesses that rely on GPU clusters for their core operations.
Gupta stated that, in addition to these possibilities, customers can test the scaling of their application on a large GPU system before investing in their own, and can avoid the purchase altogether if the investment looks uncertain. Furthermore, those who already have an in-house GPU cluster can offload peak demand to the cloud instead of waiting or postponing workloads.
So, what would users be sacrificing if they chose to run their GPU applications in the cloud versus on site? According to NVIDIA’s Gupta, “there are no performance limitations for compute-intensive applications. For graphics-intensive applications, this cloud is useful for remote rendering, but not for interactive graphics rendering.”
Even with these limitations, the announcement still extends the reach of the NVIDIA RealityServer platform, and the restrictions do not generally affect its target audience. When asked who this cloud news is aimed at, Gupta replied that their GPU cloud “has been primarily built for customers who use the compute capability of GPUs. NVIDIA’s GPUs are based on the massively parallel CUDA architecture, which enables the GPU to be used both for graphics and general purpose computing. NVIDIA’s GPU is programmable using C, C++, and Fortran as well as driver APIs like OpenCL, DirectCompute, and OpenGL.”
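The “massively parallel” model Gupta describes assigns one lightweight GPU thread to each data element. As a rough conceptual sketch only (plain Python standing in for CUDA C, not actual NVIDIA code), the classic SAXPY operation can be written so that each index is computed independently, exactly the way one CUDA thread per element would:

```python
# Toy illustration of the CUDA data-parallel model (not NVIDIA code):
# each array index is an independent unit of work.

def saxpy_element(i, a, x, y):
    # On a GPU, this body would execute as one CUDA thread with index i.
    return a * x[i] + y[i]

def saxpy(a, x, y):
    # A GPU would launch len(x) threads at once; here a loop stands in.
    return [saxpy_element(i, a, x, y) for i in range(len(x))]

if __name__ == "__main__":
    print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```

Because no element depends on any other, the same computation scales from one core to thousands of GPU threads without restructuring, which is the property that makes a hosted GPU cloud attractive for compute-bound workloads.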
Larger Servings of Reality for More Users
The RealityServer platform for cloud computing, in effect a combination of GPUs and software that delivers highly realistic 3D applications across the web, was announced in October 2009. It allowed for the development of more complex 3D web-based applications and opened the door to opportunity for developers and enterprises alike, thanks to capabilities that went beyond other 3D application development and deployment options.
For instance, the RealityServer software utilizes iray technology, which is “the world’s first physically correct ray-tracing renderer that employs the massively parallel CUDA architecture of NVIDIA’s GPUs to create stunningly accurate photorealistic images by simulating the physics of light in its interaction with matter.” Because ray tracing is one of the most demanding computational problems, iray technology is designed to take advantage of the parallel computing power of NVIDIA Tesla.
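Ray tracing suits GPUs so well because every pixel’s ray can be tested against the scene independently. As a toy sketch only (a bare ray-sphere hit test, nothing like the physically based simulation iray performs), the per-pixel structure looks like this:

```python
# Toy illustration (not iray): one independent ray-sphere test per pixel,
# which is why ray tracing maps naturally onto thousands of GPU threads.

def ray_sphere_hit(ox, oy, oz, dx, dy, dz, cx, cy, cz, r):
    # Solve |o + t*d - c|^2 = r^2 for t; a hit exists iff the
    # discriminant of the resulting quadratic is non-negative.
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - r * r
    return b * b - 4.0 * a * c >= 0.0

def render(width, height):
    # One intersection test per pixel: on a GPU, one thread per pixel;
    # here, a serial double loop. Sphere of radius 1 at z = -3.
    sphere = (0.0, 0.0, -3.0, 1.0)
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # Map the pixel to a ray direction through a unit image plane.
            dx = (px + 0.5) / width - 0.5
            dy = (py + 0.5) / height - 0.5
            row.append(ray_sphere_hit(0.0, 0.0, 0.0, dx, dy, -1.0, *sphere))
        image.append(row)
    return image
```

A real renderer adds recursive bounces, materials, and light transport on top, but the pixels remain independent, so the workload parallelizes across as many GPU cores as are available.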
In last year’s RealityServer announcement, Fernando Toledo of the Virtual Reality Center at the National Institute for Aviation Research at Wichita State University discussed how the biggest problems prior to RealityServer involved “managing and visualizing massive datasets while keeping that data secure…using RealityServer for virtual prototyping, design reviews and remote visualization solves those issues.” While this is, of course, a glowing review, given that it appeared in the 2009 press release, it does represent the scope of possibilities for researchers in distinct areas. Making this available without a significant investment in hardware is where the beauty of this announcement lies.
As with cloud and SaaS models in general, especially as security and privacy continue to mature, the bigger piece of news in this announcement is that it is increasingly possible for smaller companies to get off the ground without significant upfront investment. Treating a large-scale GPU cloud as an operating expense leaves companies free to concentrate on product and business development rather than the expense and maintenance of a GPU cluster.
The related story is how this increase in accessibility will spur greater development of 3D applications in general. Is it possible that there is a generation of developers waiting to jump into RealityServer who have been unable to in the past due to high barriers to entry? If this plays out, a host of new 3D applications could flood the market, a win for the developers, the users, and of course NVIDIA and its hosting partner PEER 1.