November 13, 2012
PROVO, Utah, Nov. 13 – Adaptive Computing, the largest provider of private cloud management and High-Performance Computing (HPC) workload management software, today announced the release of Moab HPC Suite 7.2. The new version brings together enhancements that accelerate productivity as workload demands increase and introduces scheduling capabilities for Intel Xeon Phi coprocessors. Additional enhancements include dual-domain job scheduling for heterogeneous Cray systems, automated periodic usage-budget resets, and improved RPM-based deployments.
Optimized Scheduling and Allocation for Intel Xeon Phi – The latest version of Moab was designed to recognize and work with the new Intel Xeon Phi coprocessors, based on the Intel Many Integrated Core (MIC) architecture. Moab automatically detects Intel Xeon Phi coprocessors – and determines their location and availability – so it can schedule jobs more intelligently, improving utilization and removing the need for extensive reprogramming to integrate the coprocessors into existing systems. It also allows for policy-based scheduling, optimizing the choice of accelerators and coprocessors. As Intel Xeon Phi coprocessors are introduced into existing systems, this keeps costs and management effort to a minimum while maximizing utilization for the most efficient job processing, drawing on metrics that include the number of cores and hardware threads, physical memory (total and free), maximum frequency, architecture, and load.
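The release does not show Moab's actual policy syntax, but the kind of metric-driven coprocessor selection described above can be sketched in a few lines of Python. Everything here – the class, field names, and ranking rule – is a hypothetical illustration, not Moab's real policy engine:

```python
from dataclasses import dataclass

@dataclass
class MicDevice:
    """Hypothetical record of the metrics the release mentions."""
    node: str
    cores: int
    hw_threads: int
    mem_total_gb: float
    mem_free_gb: float
    max_freq_ghz: float
    load: float  # 0.0 (idle) .. 1.0 (saturated)

def pick_coprocessor(devices, min_free_gb):
    """Choose an available coprocessor by a simple policy:
    filter out devices without enough free memory, then prefer
    the lowest load, breaking ties on the most free memory."""
    eligible = [d for d in devices if d.mem_free_gb >= min_free_gb]
    if not eligible:
        return None  # job waits until a device frees up
    return min(eligible, key=lambda d: (d.load, -d.mem_free_gb))

devices = [
    MicDevice("node01", 61, 244, 8.0, 2.0, 1.1, 0.9),
    MicDevice("node02", 61, 244, 8.0, 6.5, 1.1, 0.3),
]
best = pick_coprocessor(devices, min_free_gb=4.0)
print(best.node)  # node02: enough free memory and the lowest load
```

A real scheduler weighs many more factors (topology, reservations, fairshare), but the core idea – rank eligible devices by observed metrics rather than requiring users to hand-pick nodes – is the same.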
Dual-Domain Job Scheduling – Another new feature in Moab is dual-domain job scheduling for Cray systems. This allows a single job to run simultaneously on both Cray and non-Cray nodes, so users no longer have to submit two separate jobs. This is especially useful in research applications that require different types of analysis, such as multi-physics simulations whose results must interact.
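The essential property of dual-domain scheduling is that the two halves of the job are co-allocated: either both domains can satisfy their share of the request, or neither is reserved. A minimal Python sketch of that all-or-nothing co-allocation (the names and data shapes are illustrative assumptions, not Moab's API):

```python
def co_allocate(job, pools):
    """Atomically reserve nodes from multiple domains for one job.

    `job` maps domain name -> node count requested;
    `pools` maps domain name -> list of currently free node names.
    Returns the allocation, or None (leaving pools untouched) if any
    domain cannot satisfy its share.
    """
    # First pass: verify every domain can meet its request.
    for domain, count in job.items():
        if len(pools.get(domain, [])) < count:
            return None  # one domain short -> the whole job waits
    # Second pass: claim the nodes from every domain together.
    allocation = {}
    for domain, count in job.items():
        allocation[domain] = [pools[domain].pop() for _ in range(count)]
    return allocation

pools = {"cray": ["c0", "c1", "c2"], "cluster": ["n0", "n1"]}
alloc = co_allocate({"cray": 2, "cluster": 1}, pools)
```

The two-pass structure is what spares users from submitting two jobs and hoping both start at the same time: a shortfall in either domain blocks the whole request rather than stranding a half-started job.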
Periodic Allocation Reset Capability – Resource allocations often need to be reset at regular intervals, without rollover of unused time. This feature automates the process, resetting allocations at each predetermined period to give users better transparency into per-period usage, help them make intelligent budget decisions, and eliminate the need to wait for reports.
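"Without rollover" means unused hours are discarded at the period boundary: each account's balance is set back to its budget, not to budget plus leftover. A small Python sketch of that reset (account names, budgets, and the report shape are hypothetical):

```python
def reset_allocations(accounts, budgets):
    """Reset each account's remaining allocation to its per-period budget.

    `accounts` maps account name -> hours remaining this period;
    `budgets` maps account name -> per-period budget in hours.
    Returns a per-account report of what went unused, so users can see
    their consumption for the period without waiting for a report run.
    """
    report = {}
    for acct, remaining in accounts.items():
        report[acct] = {"unused": remaining, "new_balance": budgets[acct]}
        accounts[acct] = budgets[acct]  # no rollover: leftover is dropped
    return report

accounts = {"chem": 120.0, "physics": 0.0}   # hours left at period end
budgets = {"chem": 1000.0, "physics": 500.0}  # per-period budgets
usage_report = reset_allocations(accounts, budgets)
```

Running this at every period boundary is what keeps per-period usage comparable: an account that hoarded hours starts the new period with exactly the same balance as one that spent them.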
Faster Installation – The new, streamlined RPM-based installation of the Moab HPC Suite means administrators will spend less time on deployment. The new RPM install packages automate the installation process for all components, including software dependencies.
“Adaptive Computing’s early support for systems incorporating Intel Xeon Phi coprocessors with the new Moab HPC Suite 7.2 demonstrates the industry’s anticipation for this type of solution,” said Bill Magro, director of Technical Computing Software Solutions at Intel. “The combination of Intel Xeon Phi coprocessors’ efficient performance and Moab’s workload management optimization policies enables customers to accelerate job performance and significantly shorten time to solution for key product and scientific discoveries.”
About Adaptive Computing
Adaptive Computing is the largest provider of High-Performance Computing (HPC) workload management software and manages the world’s largest cloud computing environment with Moab, a self-optimizing dynamic cloud management solution and HPC workload management system. Moab, a patented multidimensional intelligence engine, delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels and reduce operational costs. Adaptive Computing offers a portfolio of Moab cloud management and Moab HPC workload management products and services that accelerate, automate, and self-optimize IT workloads, resources, and services in large, complex heterogeneous computing environments such as HPC, data centers and cloud.
Source: Adaptive Computing