November 10, 2011
Moab technology plays integral role in NOAA's plan to create an organization-wide grid
PROVO, Utah, Nov. 10 — Adaptive Computing, manager of the world's largest supercomputing workloads and an expert in HPC workload management, today announced that the National Oceanic and Atmospheric Administration (NOAA), in conjunction with Oak Ridge National Laboratory (ORNL) and Computer Sciences Corporation (CSC), has selected Moab HPC Suite as the intelligent grid resource management solution for existing and future NOAA HPC sites. During Supercomputing '11 in Seattle, Washington, NOAA will present its Moab deployment at a Birds of a Feather session on November 17 at 12:15 p.m. in room TCC 305. The Moab decision engine is the workload management software for Gaea, NOAA's new leadership-class supercomputer, and will serve as the standard for providing HPC grid functionality to all NOAA supercomputers. With Moab, NOAA gains a robust management infrastructure for compute jobs that unifies HPC resources across large geographic divides and maximizes job throughput and CPU utilization, delivering on the project's overall goal of developing better models for predicting climate variability and change.
In choosing a workload manager, one of NOAA's primary considerations was location-aware scheduling. NOAA's Geophysical Fluid Dynamics Laboratory (GFDL), located in Princeton, New Jersey, supports its local researchers as well as other NOAA researchers across the country, while Gaea is physically located at ORNL in Tennessee. The disparate locations of users and systems, current and future, create challenges in networking, data transfer, and job submission. Moab solves the job submission problem by allowing a local instance of Moab to be installed in New Jersey, where users can interact with the system, manipulate their data sets, and analyze their results. That local instance then communicates with the instance of Moab running on Gaea in Tennessee, migrating jobs and data between the two sites. This model can grow organically as new users and compute resources come online.
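In practice, a researcher at GFDL interacts only with the local instance, and data-staging directives attached to a job tell the system which files to move before and after execution. The following is a minimal sketch of what such a batch job script could look like, using the standard TORQUE/PBS stagein and stageout options; all hostnames, paths, and resource requests here are hypothetical, not taken from NOAA's actual configuration:

```shell
#!/bin/bash
# Hypothetical climate-model job script. Hostnames and paths are invented
# for illustration only.
#PBS -N climate_model_run
#PBS -l nodes=64:ppn=16,walltime=08:00:00
# Stage the input data set from the local (GFDL-side) file server to the
# remote compute site before the job starts, and copy results back after
# the job completes.
#PBS -W stagein=/scratch/$USER/input.nc@gfdl-fs.example.gov:/archive/$USER/input.nc
#PBS -W stageout=/scratch/$USER/output.nc@gfdl-fs.example.gov:/archive/$USER/results/output.nc

cd $PBS_O_WORKDIR
mpirun ./climate_model input.nc output.nc
```

Because the staging is expressed in the job itself, the script is the same whether the job ultimately runs on a local cluster or is migrated to Gaea.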
Moab is unique among workload managers in that it can run on top of multiple resource managers, a capability that is crucial to NOAA's goal of delivering a unified grid. On Gaea, NOAA plans to pair Moab with TORQUE Resource Manager, a PBS-based open-source resource manager maintained and supported by Adaptive Computing.
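From the user's side, layering Moab over TORQUE changes little: Moab's `msub` accepts the same PBS-style scripts and hands the job to the underlying resource manager, while Moab's own query commands report scheduler-level state. A brief illustrative session (the script name and job ID shown are made up):

```shell
# Submit a PBS-style script through Moab rather than directly to TORQUE;
# msub prints a Moab job identifier on success.
msub climate_model_run.sh

# List this user's active, eligible, and blocked jobs as Moab sees them.
showq -u $USER

# Show detailed scheduling state for a single job (hypothetical ID).
checkjob Moab.1234
```

`msub`, `showq`, and `checkjob` are standard Moab commands; the exact output format depends on the Moab version and site configuration.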
"NOAA's mission is to understand and predict changes in the Earth's environment and we rely on supercomputing technologies like Moab to support the data-intensive research of our scientists," said Joseph Klimavicz, chief information officer and director of high performance computing and communications at NOAA. "We look forward to working with a well-established HPC software provider such as Adaptive Computing and are confident in the product's capabilities."
"We selected Adaptive Computing for NOAA's mission-critical deployment based on the company's proven Moab technology and its unique, location-aware functionality," said Steven Baxter, program manager at CSC.
NOAA is currently licensed to run Moab at three other HPC sites, including Boulder, Colorado, and the $27.6 million supercomputing center in Fairmont, West Virginia. NOAA's long-term plan is to link the sites under a single HPC grid for global job submission and a single point of reporting.
"We are honored to play a critical role in supporting NOAA's ground-breaking climate research," said Robert Clyde, CEO of Adaptive Computing. "As HPC systems grow more complex, flexibility is a key component for any resource management solution. The latest upgrades to the Moab and Viewpoint technology enable the type of flexibility required for next-generation supercomputers."
Funded through the American Recovery and Reinvestment Act of 2009, Gaea will serve as a dedicated high performance computing resource for NOAA and its extensive network of research partners. The system will enable scientists to leverage a significant increase in computing capacity to address some of the most pressing global climate change questions. Moab manages more TOP500 CPUs than any other solution and has a proven record of supporting large numbers of users in complex research environments while optimizing the utilization of petaflop-scale supercomputers.
About Adaptive Computing
Adaptive Computing manages the world's largest supercomputing environments with its self-optimizing dynamic cloud management solutions and HPC workload management systems driven by Moab, a patented multi-dimensional decision engine. Moab delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels and reduce operational costs. Adaptive Computing is the preferred dynamic cloud and workload management solution for the leading global HPC and datacenter vendors. For more information, call 801-717-3700 or visit www.adaptivecomputing.com.
Source: Adaptive Computing