August 14, 2006
The Advanced Simulation and Computing Program (ASC) unites the high-performance computing expertise and capabilities of the national laboratories responsible for ensuring the safety, security and reliability of the nation's stockpile of nuclear weapons without testing. ASC, also known as Tri-Labs, consists of Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL) and Sandia National Laboratories. ASC currently operates about 25 percent of the world's fastest computers.
"Cluster Resources is honored to be selected by ASC," said David Jackson, CEO of Cluster Resources Inc. "There is no organization in the world which matches the technical expertise and scope of compute systems found at ASC in terms of scalability and architectural complexity."
This agreement brings two industry leaders together. ASC is widely acknowledged for its leadership in successfully deploying next-generation massive architectures, networks and storage solutions, as well as for its research and expertise in scalable middleware. Cluster Resources provides leadership in intelligent workload and resource management that orchestrates compute, network and storage resources in order to maximize utilization, availability and responsiveness. The ASC/Cluster Resources partnership will push innovation boundaries for the supercomputing/High-Performance Computing (HPC) industry on both current and future leadership-class systems.
ASC initiated the search for a common resource and workload management solution to improve the usability and manageability of its diverse resources and to attain an improved return on its significant computing investment. The program also sought enhanced reporting for managed resources and optimized resource utilization, while maintaining the flexibility required to meet the individual needs of each site and project. ASC has a highly heterogeneous environment, with systems that range from large-scale Intel- and AMD Opteron-based systems provided by IBM, HP, Dell and others, to more exotic and powerful systems such as Cray's XT3 and IBM's Blue Gene. Going into the assessment, ASC brought a high degree of knowledge in the resource management space, owing to its development of advanced resource management and scheduling tools such as BProc, SLURM (www.llnl.gov/linux/slurm/) and LCRM.
"ASC's expertise, from their own extensive research and development work and from managing the world's largest array of leadership class systems, makes this review and selection a great honor," Jackson said. "What makes this selection so meaningful is that this organization knows supercomputing, knows the real world and is able to see through the marketing fluff that can be so prevalent. Not only does this speak well of Cluster Resources' Moab product line and our service capabilities, but it also provides significant value to us as we collaborate with these thought leaders to develop capabilities for the next generation of systems and enhance our ability to meet their current and future needs."
The awarded contract grants ASC use of Moab software, which provides workload management, system accounting, capacity planning, automated failure recovery, virtualization and a host of other capabilities in cluster, Grid and utility computing environments. The contract also includes collaborative research and development, consulting, 24x7 support and other professional services.
The Moab solution adds significant manageability and optimization to HPC resources, while providing deployment methods that effectively minimize the risk and cost of adoption. Unique Moab capabilities allow it to be transparently deployed with little or no impact on the end-user; these capabilities include system workload, resource, and policy simulation, batch language translation, capacity planning diagnostics, non-intrusive test facilities and infrastructure stress testing.
At the core of this solution are Moab Cluster Suite and Moab Grid Suite, professional cluster management solutions that include Moab Workload Manager, a policy-based workload management and scheduling tool, as well as a graphical cluster administration interface and a Web-based end-user job submission and management portal.
Moab simplifies and unifies management across heterogeneous environments, increasing the ROI of HPC investments, and acts as a flexible policy engine that guarantees service levels and speeds job processing.
A second key aspect of the delivered solution is service and personnel engagement. Cluster Resources will actively collaborate with ASC on training, consulting, migration and the creation of development roadmaps in order to ensure the highest degree of capability and scalability is provided. This relationship includes direct access to development resources and executive level engagement. Cluster Resources will actively work with hardware vendors to ensure Moab cleanly deploys on selected current and newly purchased systems. Cluster Resources will also fully support ASC throughout the usage lifecycle, providing on-site and online training, best-practices consulting and other enabling services.
"Partnerships such as this one are a key element of the ASC Program's success in pushing the frontiers of high-performance scientific computing," said Brian Carnes, service and development division leader at LLNL. "Only by working with leading innovators in HPC can we develop and maintain the large scale systems and increasingly complex simulation environments vital to our national security missions."
The relationship between ASC and Cluster Resources will not only directly impact the three government laboratories that make up ASC/Tri-Labs, but will also help shape the future of large and small HPC sites.
"In many regards, what ASC is doing now reflects the future state of the data center and HPC industry," Jackson said. "However, the fundamental needs of ASC are not all that different from today's centers. They need total optimization of compute, network and storage resources, automated failure detection and recovery, more flexible policies, true visualization of cluster activity, detailed accounting and reduced costs. It's just when you are dealing with over 100,000 processors, the approaches used to deliver this must become more efficient and manageable. We are fortunate that our collaboration with industry visionaries over the years has prepared us to address these needs in a way that works extremely well both at 100 and 100,000 processors. In our partnership with ASC, we hope to extend these capabilities further in environments that push the edges of scalability and capability."