HPC in the Cloud's white paper database contains reports from leading thinkers and innovators in the cloud industry.
Leveraging Windows HPC Server for Cluster Computing with Abaqus FEA
Release Date: October 17, 2012
The trends toward higher-fidelity modeling and increasing product complexity are lengthening execution times for many simulation users. Multicore chips and inexpensive computing clusters have made parallel computing more affordable than in the past. Using Abaqus on a Windows-based multicore workstation provides an integrated, efficient, scalable platform to solve the most complex simulation and multiphysics analyses in less time. This white paper helps FEA users determine whether cluster computing makes sense for them and which factors to consider when implementing an HPC platform and configuration.
8 Steps: Optimizing Cache Memory Access & Application Performance
Source: Rogue Wave Software
Release Date: October 1, 2012
The workflow in this whitepaper outlines how to optimize an application for good cache memory performance, create faster code, find more optimization opportunities and avoid unnecessary work. It provides a solid framework within which developers can optimize the performance of key parts of an application. The steps described, ranging from general access-pattern optimization through advanced techniques to multithreading concerns, cover all of the areas where cache memory bottlenecks can occur.
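To make the access-pattern idea concrete, here is a minimal, hypothetical C++ sketch (not taken from the Rogue Wave paper) contrasting a traversal that walks memory in storage order with one that strides across it; the first makes full use of each cache line it fetches, while the second is a classic source of cache misses.

```cpp
#include <cstddef>
#include <vector>

// Sum a rows x cols matrix stored in row-major order.
// Traversing in storage order keeps accesses sequential, so each cache line
// that is fetched is fully used before it is evicted.
double sum_row_major(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
    double total = 0.0;
    for (std::size_t r = 0; r < rows; ++r)       // outer loop over rows
        for (std::size_t c = 0; c < cols; ++c)   // inner loop walks contiguous memory
            total += m[r * cols + c];
    return total;
}

// The same computation with the loops swapped strides through memory by
// 'cols' elements per access, touching a new cache line on almost every
// iteration for large matrices, which is a typical cache bottleneck.
double sum_column_major(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
    double total = 0.0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            total += m[r * cols + c];
    return total;
}
```

Loop interchange of this kind is usually one of the first and cheapest wins in a cache-oriented optimization pass, before moving on to blocking, prefetching or multithreading concerns.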
Data in Motion: A New Paradigm in Research Data Lifecycle Management
Source: University of Miami
Release Date: September 17, 2012
In today’s world of scientific discovery, whether it involves climate change, genomics or economic trends, finding ways to manage layers of voluminous data in various formats presents new and continuing challenges. Our center developed a novel yet simply designed four-tiered data storage and management approach. This paper presents the complex set of interrelated challenges of managing ever-growing data and describes an integrated system that considers data accessibility, interoperability, expansion and flexibility.
Tackling the Data Deluge: File Systems and Storage Technologies
Release Date: June 25, 2012
A single hour of data collection can result in 7+ million files from just one camera. Collection opportunities are limited and must be successful every time. As defense and intelligence agencies seek to use the data collected to make mission-critical battlefield decisions, there’s greater emphasis on smart data and imagery collection, capture, storage and analysis to drive real-time intelligence. The data gathered must be accurately and systematically analyzed, integrated and disseminated to those who need it – troops on the ground. This reality leads to an inevitable challenge – warfighters swimming in sensors, drowning in data. With the millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp’s scalable, modular, and flexible storage solution for handling the demanding requirements of sophisticated ISR environments.
Exploring the Potential of Heterogeneous Computing
Release Date: April 2, 2012
Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
The IT Data Explosion Is Game Changing for Storage Requirements
Release Date: June 4, 2012
Data-intensive computing has been an integral part of high-performance computing (HPC) and other large datacenter workloads for decades, but recent developments have dramatically raised the stakes for system requirements — including storage resiliency. The storage systems of today's largest HPC systems often reach capacities of 15–30PB, not counting scratch disk, and feature thousands or tens of thousands of disk drives. Even in more mainstream HPC and enterprise datacenters, storage systems today may include hundreds of drives, with capacities often doubling every two to three years. With this many drives, normal failure rates can mean that a disk is failing somewhere in the system often enough to make mean time to failure (MTTF) a serious concern at the system level.
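As a rough illustration of why per-drive failure rates become a system-level concern (hypothetical numbers, not figures from the paper): if each drive has a nominal MTTF of one million hours and failures are roughly independent, a pool of tens of thousands of drives can expect a failure every few days or even every few hours. A short sketch of the arithmetic:

```cpp
#include <cstdio>

// Back-of-the-envelope illustration with assumed numbers: the expected time
// between drive failures in a system shrinks in proportion to the drive count.
int main() {
    const double drive_mttf_hours = 1.0e6;   // assumed per-drive MTTF
    const int drive_counts[] = {100, 1000, 10000, 50000};

    for (int n : drive_counts) {
        double system_mtbf_hours = drive_mttf_hours / n;   // expected hours between failures
        double failures_per_year = 8760.0 / system_mtbf_hours;
        std::printf("%6d drives: a failure roughly every %8.1f hours (~%.1f per year)\n",
                    n, system_mtbf_hours, failures_per_year);
    }
    return 0;
}
```

At 10,000 drives this works out to a failure roughly every 100 hours, which is why rebuild times and storage resiliency dominate the design of large HPC storage systems.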
Solving Agencies’ Big Data Challenges: PED for On-the-Fly Decisions
Release Date: March 12, 2012
With the growing volumes of rich sensor data and imagery used today to derive meaningful intelligence, government agencies need to address the challenges posed by these “big” datasets. NetApp provides a scalable, unified, single pool of storage to better handle the processing and analysis of data and to drive actionable intelligence in the most demanding environments on earth.
Introducing LRDIMM – A New Class of Memory Modules
Release Date: January 17, 2012
This paper introduces the LRDIMM, a new type of memory module for high capacity servers and high-performance computing platforms. LRDIMM is an abbreviation for Load Reduced Dual Inline Memory Module, the newest type of DIMM supporting DDR3 SDRAM main memory. The LRDIMM is fully pin compatible with existing JEDEC-standard DDR3 DIMM sockets, and supports higher system memory capacities when enabled in the system BIOS.
Debugging CUDA-Accelerated Parallel Applications with TotalView
Source: Rogue Wave Software
Release Date: November 21, 2011
CUDA introduces developers to a number of new concepts (such as kernels, streams, warps and explicitly multi-level memory) that are not encountered in serial or other parallel programming paradigms. In addition, CUDA is frequently used alongside MPI parallelism and host-side multi-core and multi-thread parallelism. The TotalView parallel debugger provides developers with methods to handle these CUDA-specific constructs, as well as an integrated view of all three levels of parallelism within a single debugging session.
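For readers new to these constructs, the following minimal CUDA sketch (illustrative only; neither code from the paper nor anything TotalView-specific) shows a kernel launch, explicit host-to-device memory management, and a stream: the kinds of objects a CUDA-aware debugger must surface alongside host threads and MPI ranks.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread scales one array element. On the device, threads are grouped
// into warps and blocks, which a CUDA-aware debugger exposes in addition to
// host-side threads and MPI processes.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    // Explicit multi-level memory: data lives separately on the host and the
    // device and must be copied between them.
    float* device = nullptr;
    cudaMalloc((void**)&device, n * sizeof(float));
    cudaMemcpy(device, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Kernel launches are queued on a stream and run asynchronously with
    // respect to the host until explicitly synchronized.
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock, 0, stream>>>(device, 2.0f, n);
    cudaStreamSynchronize(stream);

    cudaMemcpy(host, device, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("host[0] = %f\n", host[0]);   // expected value: 2.0

    cudaStreamDestroy(stream);
    cudaFree(device);
    delete[] host;
    return 0;
}
```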
Effectively Applying High-Performance Computing (HPC) to Imaging
Release Date: January 9, 2012
Many applications have image resolutions, data rates and analysis requirements that exceed the capabilities of a typical workstation computer. Moreover, developers must decide how to select and best use the processing technologies – multi-core CPU, GPU and FPGA – at their disposal. This paper examines the suitability of a scalable heterogeneous computing platform for such demanding applications by way of a representative scenario.