HPC in the Cloud's white paper database contains reports from leading thinkers and innovators in the cloud industry.
Exploring the Potential of Heterogeneous Computing
Release Date: April 2, 2012
Developers today are just beginning to explore heterogeneous computing, but the promise of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
The IT Data Explosion Is Game Changing for Storage Requirements
Release Date: June 4, 2012
Data-intensive computing has been an integral part of high-performance computing (HPC) and other large datacenter workloads for decades, but recent developments have dramatically raised the stakes for system requirements — including storage resiliency. The storage systems of today's largest HPC systems often reach capacities of 15–30PB, not counting scratch disk, and feature thousands or tens of thousands of disk drives. Even in more mainstream HPC and enterprise datacenters, storage systems today may include hundreds of drives, with capacities often doubling every two to three years. With this many drives, normal failure rates mean that a disk is failing somewhere in the system often enough to make mean time to failure (MTTF) a serious concern at the system level.
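To see why system-level MTTF matters at these scales, consider a back-of-envelope sketch (the drive count and per-drive rating below are illustrative assumptions, not figures from the paper): with N independent drives, the expected time between drive failures anywhere in the system is roughly the per-drive MTTF divided by N.

```python
def system_mtbf_hours(drive_mttf_hours: float, num_drives: int) -> float:
    """Approximate system-level mean time between drive failures,
    assuming independent, identically rated drives."""
    return drive_mttf_hours / num_drives

# Hypothetical large HPC storage system: 10,000 drives, each rated
# at 1.2 million hours MTTF (a common vendor specification range).
mtbf = system_mtbf_hours(1_200_000, 10_000)
print(f"Roughly one drive failure every {mtbf:.0f} hours ({mtbf / 24:.0f} days)")
```

Under these assumed numbers a drive fails somewhere in the system about every five days, which is why resiliency features such as RAID rebuild times become a system-level design constraint rather than a per-drive afterthought.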
Solving Agencies' Big Data Challenges: PED for On-the-Fly Decisions
Release Date: March 12, 2012
With the growing volumes of rich sensor data and imagery used today to derive meaningful intelligence, government agencies need to address the challenges posed by these “big” datasets. NetApp provides a scalable, unified single pool of storage to better handle your processing and analysis of data to drive actionable intelligence in the most demanding environments on earth.
Introducing LRDIMM – A New Class of Memory Modules
Release Date: January 17, 2012
This paper introduces the LRDIMM, a new type of memory module for high-capacity servers and high-performance computing platforms. LRDIMM is an abbreviation for Load Reduced Dual Inline Memory Module, the newest type of DIMM supporting DDR3 SDRAM main memory. The LRDIMM is fully pin-compatible with existing JEDEC-standard DDR3 DIMM sockets, and supports higher system memory capacities when enabled in the system BIOS.
Debugging CUDA-Accelerated Parallel Applications with TotalView
Source: Rogue Wave Software
Release Date: November 21, 2011
CUDA introduces developers to a number of new concepts (such as kernels, streams, warps and explicitly multi-level memory) that are not encountered in serial or other parallel programming paradigms. In addition, CUDA is frequently used alongside MPI parallelism and host-side multi-core and multi-thread parallelism. The TotalView parallel debugger provides developers with methods to handle these CUDA-specific constructs, as well as an integrated view of all three levels of parallelism within a single debugging session.
Effectively Applying High-Performance Computing (HPC) to Imaging
Release Date: January 9, 2012
Many applications still have image resolutions, data rates, and analysis requirements that exceed the capabilities of a typical workstation. Moreover, developers must decide how to select and best use the processing technologies – multi-core CPU, GPU, and FPGA – at their disposal. This paper examines the suitability of a scalable heterogeneous computing platform for such demanding applications by way of a representative scenario.