HPC in the Cloud's white paper database contains reports from leading thinkers and innovators in the Cloud industry.
Map/Reduce on Lustre®
Release Date: August 30, 2011
This paper compares HDFS and Lustre architectural drivers and resulting system performance of Map/Reduce computations on HPC hardware. We evaluate theoretical and actual performance of Lustre and HDFS for a variety of workloads in both traditional and Map/Reduce-based applications. Further, we examine the additional benefits (cluster efficiency, flexibility, and cost) of using a general-purpose distributed file system, such as Lustre, on a Hadoop compute cluster.
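The paper evaluates Map/Reduce workloads, so a brief illustration of the programming model may help. The following is a minimal sketch of Map/Reduce word counting, assuming nothing about HDFS, Lustre, or Hadoop itself; it shows only the map (emit key/value pairs) and reduce (group by key and aggregate) phases that the compared file systems must feed with data.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle/reduce: group pairs by key and sum the counts per word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["Lustre scales", "HDFS scales", "Lustre is a parallel file system"]
print(reduce_phase(map_phase(docs)))
```

In a real Hadoop deployment the map and reduce tasks run on separate cluster nodes, and the file system (HDFS or Lustre) determines how input splits and shuffle data move between them.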
Xyratex Lustre® Priorities Architectural Overview
Release Date:August 30, 2011
This paper summarizes a technical approach for the Lustre® community to consider implementing new features that will contribute to dramatically increasing Lustre’s stability, scalability, and performance. Xyratex is committed to community collaboration to move Lustre forward. The purpose of this document is to provide some transparency and insight into Xyratex’s architectural thoughts to extend Lustre capabilities and to stimulate discussion within the community.
Cracking the Parallelism Puzzle - An innovative solution for C/C++ and Fortran developers
Release Date: June 27, 2011
Read "Cracking the Parallelism Puzzle" to learn about a comprehensive solution for task parallelism, data parallelism, and vectorization, with interfaces implemented through both language extensions and libraries. The article is free to download.
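The paper's specific C/C++ and Fortran solution is not detailed in this abstract, but the two parallelism styles it names can be sketched generically. The following hedged example uses Python's standard `concurrent.futures` pool purely to illustrate the distinction: `map` expresses data parallelism (the same function over a collection), while `submit` expresses task parallelism (independent tasks running concurrently).

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # Data parallelism: apply one function across all elements of a collection.
    squares = list(pool.map(square, range(8)))
    # Task parallelism: launch independent tasks that run concurrently.
    total = pool.submit(sum, squares)
    biggest = pool.submit(max, squares)
    print(squares, total.result(), biggest.result())
```

Vectorization, the third technique the paper covers, happens below this level: the compiler maps a data-parallel loop body onto SIMD instructions within a single core.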
Clustered File Systems: Not just a "traditional" HPC problem
Release Date: June 23, 2011
The exponential growth of file data generated by newer workloads is forcing increased storage innovation. This white paper explores use cases where enterprise-grade file systems are needed to drive real business value.
Cache Optimization: Does It Matter? Should I Worry? Why?
Source: Rogue Wave
Release Date: June 13, 2011
How much faster could your program run if it were written so that cache access is considered and optimized? Caching happens automatically, regardless of how you program. This white paper is a must-read if you have not considered how memory caches affect application performance.
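A classic instance of the effect the paper describes is traversal order: visiting a 2-D array in the order it is laid out in memory exploits spatial locality, while jumping across rows defeats the cache. The sketch below times both orders; Python is used only for brevity (the gap is far larger in C or Fortran, where per-element interpreter overhead does not mask memory behavior), and both functions compute the same result.

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]  # rows are contiguous lists

def row_major_sum(m):
    # Visits elements in storage order: good spatial locality.
    return sum(m[i][j] for i in range(N) for j in range(N))

def column_major_sum(m):
    # Jumps to a different row on every access: poor locality.
    return sum(m[i][j] for j in range(N) for i in range(N))

for fn in (row_major_sum, column_major_sum):
    start = time.perf_counter()
    assert fn(matrix) == N * N
    print(fn.__name__, time.perf_counter() - start)
```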
IBM System x iDataPlex and eX5 Servers with Accelrys' Discovery Studio 3.0
Release Date: May 23, 2011
In life sciences, the ability to analyze vast amounts of data in a scalable, reliable, and low-cost computing environment is critical to developing better drug candidates, reducing attrition in clinical trials, and taking drugs to market faster.
Finding Hard-to-Reproduce Bugs with Reverse Debugging
Source: Rogue Wave Software
Release Date: May 11, 2011
A look at the challenges of parallel debugging and the value of a reverse-debugging capability that lets a developer not only examine the current state of a program but also follow its logic backward in execution time from the point of failure.
Optimize Performance with IBM GPFS Parallel Data Manager
Release Date: March 14, 2011
Explosions of data, transactions, and digitally aware devices are straining IT infrastructure and operations, while storage costs and user expectations keep rising. The IBM General Parallel File System™ (GPFS™), a high-performance enterprise file management platform, can help you move beyond simply adding storage to optimizing data management.
How to Make Immediate Profits on Service & Support, Instead of it Costing You Money
Source: Source Support
Release Date: April 1, 2011
This executive summary contains important information that will help IT manufacturers and resellers compete with, and beat, the Tier I companies. Inside: how to reduce or eliminate reverse logistics, depot repair, and RMA costs while simultaneously growing warranty revenue by 25-30%. End-users receive first-class service. Details provided on how to execute the program with no upfront investment.
Meeting Unstructured Data Storage Requirements with Scale-Out Storage
Release Date: February 25, 2011
Read this ESG white paper to learn how scale-out storage can create a storage backbone capable of handling the tremendous challenges of unstructured data growth. These systems can scale to multiple petabytes under a single system image, making them an ideal consolidation platform.