HPC in the Cloud's white paper database contains reports from leading thought leaders and idea generators in the Cloud industry.
SFA12K High Performance Solutions for Big Data: Setting the Bar in Both Bandwidth & IOPS
Source: DataDirect Networks
Release Date: December 10, 2012
Fusing unprecedented IOPS and bandwidth performance with highly efficient capacity management, the SFA12K is a high-performance storage platform for parallel file serving and deep data archival for data-intensive HPC applications.
Counting on Performance: NetApp’s Expanded High-Throughput Storage Portfolio
Release Date: December 3, 2012
Find out from Addison Snell how the use of HPC technologies to gain competitive advantage in non-scientific business applications is growing rapidly, compounded by new trends in Big Data. Big Data does not represent one particular application, but rather a set of trends that is fueling the need for higher-performing storage infrastructures in analytics-related areas in many enterprise vertical markets.
Solving Sparse Convex Quadratic Programming Problems with IMSL
Source: Rogue Wave Software
Release Date: November 26, 2012
Quadratic programming has a variety of applications, such as resource planning, portfolio optimization, and structural analysis. Download this technical whitepaper on the sparse convex quadratic programming solver in the IMSL C Numerical Library.
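As a brief illustration of the kind of problem such a solver handles, a small equality-constrained convex quadratic program can be solved directly from its KKT conditions. This sketch uses NumPy with made-up example data; it does not show the IMSL C Numerical Library API, and the matrices `Q`, `c`, `A`, and `b` are illustrative, not taken from the whitepaper.

```python
import numpy as np

# Minimize 0.5 * x^T Q x + c^T x  subject to  A x = b.
# All values here are hypothetical, chosen only to keep the example small.
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # positive definite, so the problem is convex
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])   # one equality constraint: x1 + x2 = 1
b = np.array([1.0])

# For an equality-constrained QP, the optimum solves the KKT system:
# [ Q  A^T ] [x]   [-c]
# [ A   0  ] [y] = [ b ]
kkt = np.block([[Q, A.T],
                [A, np.zeros((1, 1))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(kkt, rhs)

x = sol[:2]          # primal solution
print(x)             # -> [0. 1.]
```

A production sparse solver such as the one the paper describes exploits sparsity in `Q` and `A` rather than factoring a dense KKT matrix, which is what makes large resource-planning and portfolio problems tractable.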
Accelerating and Simplifying Apache™ Hadoop® with Panasas® ActiveStor™
Release Date: November 19, 2012
Hadoop is an essential platform to support big data analytics applications. It was designed to be highly scalable, utilizing commodity hardware and parallel processing to achieve performance and data protection. Many of the design aspects of the Hadoop Distributed File System (HDFS) are fundamentally very similar to Panasas® PanFS™. This means companies that have already invested in compute clusters for other big data workloads can now run Hadoop on existing compute infrastructure in conjunction with Panasas ActiveStor.
High Performance and High Throughput, Scalable Cluster Solutions for Next Generation Sequencing
Release Date: November 12, 2012
High-throughput genome sequencing, also known as next-generation sequencing (NGS), is being driven by high demand for low-cost sequencing. NGS parallelizes the sequencing process, producing thousands or millions of sequences at once.
High Performance Scalable, Unified Storage
Release Date: November 5, 2012
This paper presents Raid Inc.'s unified, scalable structured storage solution. It incorporates and illustrates features of the unified storage model, including replication for disaster recovery, high availability, and failover. In addition to delivering sustained performance and scalability, the solution provides the building blocks for the HPC industry's lowest price-to-performance offerings.
Leveraging Windows HPC Server for Cluster Computing with Abaqus FEA
Release Date: October 17, 2012
The trends toward higher-fidelity modeling and increasing product complexity are lengthening execution times for many simulation users. Multicore chips and inexpensive computing clusters have made parallel computing more affordable than in the past. Using Abaqus on a Windows-based multicore workstation provides an integrated, efficient, scalable platform to solve the most complex simulation and multiphysics analysis equations in less time. This white paper helps FEA users determine whether cluster computing makes sense for them and which factors to consider when implementing an HPC platform/configuration.
8 Steps: Optimizing Cache Memory Access & Application Performance
Source: Rogue Wave Software
Release Date: October 1, 2012
The workflow in this whitepaper outlines how to optimize an application for good cache memory performance, create faster applications, find more optimization opportunities and avoid unnecessary work. It provides a solid framework in which developers can work to optimize the performance of key parts of an application. The steps described, from optimizing general access patterns to advanced techniques to multithreading concerns, cover all of the areas where cache memory bottlenecks can occur.
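The first of those areas, general access patterns, can be sketched with a classic loop-interchange example (hypothetical code, not taken from the whitepaper): a row-major matrix read row by row touches memory sequentially, while reading it column by column strides across rows and defeats the cache.

```python
# A matrix stored as a list of rows, i.e. row-major layout.
# Values are illustrative; n is kept modest so the example runs quickly.
n = 512
matrix = [[float(i * n + j) for j in range(n)] for i in range(n)]

def sum_column_major(m):
    """Strided access: each read jumps to a different row.
    In compiled code this pattern causes frequent cache misses."""
    total = 0.0
    for j in range(len(m[0])):
        for i in range(len(m)):
            total += m[i][j]
    return total

def sum_row_major(m):
    """Sequential access: consecutive reads touch adjacent elements,
    the cache-friendly pattern the whitepaper's workflow aims for."""
    total = 0.0
    for i in range(len(m)):
        for j in range(len(m[0])):
            total += m[i][j]
    return total

# Both traversals compute the same result; only the access pattern differs.
assert sum_row_major(matrix) == sum_column_major(matrix)
```

In CPython the interpreter overhead largely masks the cache effect, but in C, C++, or Fortran the same loop interchange routinely yields severalfold speedups on large arrays, which is why it is an early step in any cache-optimization workflow.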
Data in Motion: A New Paradigm in Research Data Lifecycle Management
Source: University of Miami
Release Date: September 17, 2012
In today’s world of scientific discovery – whether it involves climate change, genomics, or economic trends – finding ways to manage layers of voluminous data in various formats presents new and continuing challenges. Our center developed a novel, yet simply designed, four-tiered data storage and management approach. This paper presents the complex set of interrelated challenges of managing ever-growing data with an integrated system that considers data accessibility, interoperability, expansion, and flexibility.
Tackling the Data Deluge: File Systems and Storage Technologies
Release Date: June 25, 2012
A single hour of data collection can result in 7+ million files from just one camera. Collection opportunities are limited and must be successful every time. As defense and intelligence agencies seek to use the data collected to make mission-critical battlefield decisions, there’s greater emphasis on smart data and imagery collection, capture, storage, and analysis to drive real-time intelligence. The data gathered must be accurately and systematically analyzed, integrated, and disseminated to those who need it – troops on the ground. This reality leads to an inevitable challenge – warfighters swimming in sensors, drowning in data. With the millions, if not billions, of sensors providing all-seeing reports of the combat environment, managing the overload demands a file system and storage infrastructure that scales and performs while protecting the data collected. Part II of our whitepaper series highlights NetApp’s scalable, modular, and flexible storage solution to handle the demanding requirements of sophisticated ISR environments.