HPC in the Cloud's white paper database contains reports from leading thinkers and innovators in the cloud industry.
Progress in Parallel: the Bull Parallel Programming Center
Release Date: April 15, 2013
“50% of HPC users say their largest jobs scale to 120 cores or less.” How about yours? Are your codes ready to take advantage of today’s and tomorrow’s ultra-parallel HPC systems? Download this white paper by analyst firm Intersect360 Research to see what Bull and Intel’s Center for Excellence in Parallel Programming can do for your codes.
Case Study: Allinea DDT Resolves an Unsolved Mystery at Argonne National Laboratory
Release Date: June 11, 2013
Allinea DDT helps scientists find an “impossible” bug that showed up at over 16,000 cores at the Argonne Leadership Computing Facility.
Cutting-edge Test Bed Cluster Architecture Based on Intel® Xeon Phi™ Coprocessor
Release Date: May 22, 2013
As part of the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing program, Sandia National Laboratories is addressing a critical need for experimental architecture test beds to support path-finding explorations of alternative programming models. The transition from single-core to multicore processors and the advent of heterogeneous compute node architectures and accelerators, coupled with continually increasing demand for computing cycles, have led Sandia to explore cutting-edge technology changes to its high performance computing (HPC) systems. The Cray CS300-AC™ cluster supercomputer was used for these test bed configurations. Based on industry-standard, optimized and flexible server platforms, the Cray CS300-AC system offers cutting-edge technologies designed to increase performance while reducing power consumption.
Improving HPC Cluster Efficiency with 480V Power Supplies
Release Date: May 22, 2013
Many of the servers in systems on the HPC TOP500® list use 208V power supplies and very inefficient air cooling solutions. Some of these systems have tens of thousands of servers, which represents millions of dollars in wasted electric power and added infrastructure costs. In today’s energy-sensitive environment, these practices need to change. The switch to 480/277V power supplies for the next generation of high performance clusters is a small change with large rewards. Cray offers 480V power supplies with the Cray CS300™ cluster supercomputer series, a feature driven by requirements from customers such as U.S. national labs and the U.S. Department of Defense for more efficient power supplies and power distribution systems.
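A rough back-of-the-envelope sketch of why higher distribution voltage reduces waste (an independent illustration, not taken from the white paper; the 20 kW load and 0.05 Ω path resistance are hypothetical values): for a fixed power draw, current falls in proportion to voltage, and resistive loss in the distribution path falls with the square of current.

```python
# Illustrative sketch: compare distribution current and resistive (I^2 * R)
# loss for the same load served at 208V vs 480V.
# The load and path resistance below are hypothetical, chosen for illustration.

def distribution_loss(power_w, voltage_v, path_resistance_ohm):
    """Return (current_a, i2r_loss_w) for a load drawing power_w at voltage_v."""
    current = power_w / voltage_v                  # I = P / V
    loss = current ** 2 * path_resistance_ohm     # P_loss = I^2 * R
    return current, loss

load_w = 20_000   # hypothetical 20 kW of rack load
r_ohm = 0.05      # hypothetical cabling/distribution resistance

i_208, loss_208 = distribution_loss(load_w, 208, r_ohm)
i_480, loss_480 = distribution_loss(load_w, 480, r_ohm)

print(f"208V: {i_208:.1f} A, {loss_208:.0f} W lost in distribution")
print(f"480V: {i_480:.1f} A, {loss_480:.0f} W lost in distribution")
# The loss ratio is (480/208)^2, roughly 5.3x lower at the higher voltage.
```

The same square-law argument also motivates thinner conductors and smaller power distribution units at 480V, independent of any particular vendor's hardware.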
HPC Software Requirements to Support an HPC Cluster Supercomputer
Release Date: May 22, 2013
This paper provides a complete overview of the essential cluster software and management tools required to build a powerful, flexible, reliable, and highly available Linux supercomputer. It describes how Cray combines the required HPC software tools, including operating systems, provisioning, remote console/power management, cluster monitoring, a parallel file system, scheduling, development tools, and performance monitoring, with the compatibility and additional features of the Advanced Cluster Engine™ (ACE) management software to deliver a best-of-breed HPC cluster software stack for its Cray CS300 cluster supercomputer product line. Cray also takes a customer-centric, technology-agnostic approach, offering a wide range of hardware and software configurations based on the latest open standards, innovative cluster tools, and management software, packaged with HPC professional services and support expertise.
Affordable Big Data Computing
Release Date: May 6, 2013
The mainstreaming of Big Data is an important transformational moment in computing. Traditional clusters based on distributed memory cannot adequately handle the growing crush of data. Shared memory approaches are required. Learn how Numascale has developed a technology, NumaConnect, which turns a collection of standard servers with separate memories and IO into a unified system that delivers the functionality of high-end enterprise servers and mainframes at a fraction of the cost.
Case Study: Developer gets performance boost with Allinea MAP and frees time for PhD
Release Date: May 8, 2013
Computer scientists have the skills and the passion to dig into code in search of problems. Yet even they turn to Allinea MAP to save time and to communicate results to domain scientists and code engineers.
Building a Data-Intensive Supercomputer Architecture for the National Research Community
Release Date: May 3, 2013
The Gordon supercomputer project began in 2009 as a proposal from the San Diego Supercomputer Center (SDSC) to the National Science Foundation (NSF) to build a data-intensive supercomputer. Based on the Cray CS300-AC™ cluster supercomputer, the proposed system was innovative in several respects: use of high performance solid-state drives (SSDs), very large memory nodes, a very high performance parallel file system, and a dual-rail 3D torus interconnect.
Case Study: Allinea DDT Helps Drive the Evolution of Geoscientific Model Development
Release Date: April 1, 2013
Creating a holistic geoscientific model is complicated enough. So when scientists have to debug their computer code, they turn to Allinea DDT, a tool easy enough for undergraduates to use.
High-Performance Computing in Action
Source: HP and Intel
Release Date: March 7, 2013
Businesses that want to be on the cutting edge of their industries are increasingly turning to high-performance computing (HPC) solutions to handle complex compute processes and speed up their rate of innovation. Download this Executive Brief to see how businesses in energy, life sciences and entertainment put HPC solutions to work in their operations.