December 02, 2010
An increasing emphasis on mobile applications is emerging among some users of high-performance computing, as new, innovative ways appear to access user interfaces to sophisticated platforms and to input or extract cloud-hosted information in real time.
GPU giant NVIDIA’s CEO sees these (and a much wider range of general consumer applications) as the way forward, driven by the parallel processing thrust that is already pushing leading supercomputers and desktop machines.
According to NVIDIA, the company is seeking to branch out with CUDA, extending it to power mobile devices of all sizes and enable a richer assortment of multimedia-heavy applications.
For example, NVIDIA’s CEO asked us to imagine browsing a wine shop for the right bottle for the night’s event. We snap a photo of a bottle, and an app recognizes it, interfaces with a cloud-based service, and returns information about that wine, including pricing and drinking information.
As CEO Jen-Hsun Huang stated during this video interview with IDG, “All of a sudden this mobile device gives you the capabilities of Iron Man’s helmet. You’re looking through this mobile camera and information about your world is popping up…so computer vision and your mobile device connected to a supercomputer in the cloud allows us to provide that experience that I think is going to be utterly shocking.”
This is not something that we’ll see overnight; Huang estimates it’s still a couple of years away but he admits to being excited about building the infrastructure to support this movement toward more robust capabilities delivered via the cloud.
As CIO Magazine reported, NVIDIA’s focus in the mobile sphere is on its Arm-based Tegra processor, which is already being used in a number of tablets. Chances are good that number will grow following the Consumer Electronics Show in January.
Full story at YouTube (Huang Interview)
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.