November 13, 2012
PROVO, Utah, Nov. 13 – Adaptive Computing, the largest provider of private cloud management and High-Performance Computing (HPC) workload management software, today announced that the COSMOS Supercomputer Consortium, founded by Stephen Hawking and part of the Science and Technology Facilities Council DiRAC High Performance Computing facility, has chosen Moab HPC Suite 7.2 to manage its groundbreaking scientific computing workloads. Moab will coordinate jobs and allocate computing resources for research in cosmology and astrophysics, including simulations of the origins of the Universe and scientific exploitation of satellite experiments. This research will utilize a new SGI UV 2000 supercomputer with 1,856 Intel Xeon E5 cores and 1,891 Intel Xeon Phi cores. Adaptive Computing has worked closely with Intel and SGI to enable Moab to manage and schedule this cutting-edge system.
DiRAC, the Distributed Research utilizing Advanced Computing facility, is the leading provider of high-performance computing in the UK. With its newly upgraded systems, the COSMOS@DiRAC supercomputer will provide supercomputing services not only to DiRAC’s consortium of educational institutions, but also to other organizations throughout the UK. Serving this more diverse group of customers makes effective scheduling and accounting especially critical, and with more than 4,500 cores in total once the new system is fully operational, scheduling and management are top priorities. Moab will give the consortium the ability to specify which cores are used for each job, in order to meet SLAs for flagship projects.
“We had scientists using custom-built tools to manage jobs, but as we expand and support more sophisticated workloads and detailed accounting, this has become too complex and time-consuming a task. Moab HPC Suite’s ease of use has streamlined our scheduling requirements, allowing us to accommodate our expanding user group. Moab enables us to maintain flexibility, enjoy more rigorous accounting capabilities with Moab Accounting Manager, and fine-tune policies easily in real time,” said Andrei Kaliazin, COSMOS System Manager, University of Cambridge. “Research in fundamental cosmology is fast moving and internationally competitive. We have to adapt our flexible operating model rapidly, and we need a company breaking new ground to support the very latest HPC technologies, so we selected Adaptive Computing for our workload management software,” added Professor Paul Shellard, COSMOS Director.
“With the introduction of the Intel Xeon Phi technology, we’re seeing a new generation of supercomputers that are faster and more agile than ever,” noted Robert Clyde, CEO of Adaptive Computing. “Adaptive is proud to offer Intel Xeon Phi capability in its latest version of Moab HPC Suite, to allow today’s HPC centers to take full advantage of Intel Xeon Phi cores without the need for extensive reprogramming of their systems.”
The COSMOS@DiRAC upgrade is made possible through funding from the Science and Technology Facilities Council, a public body of the Department for Business, Innovation and Skills. The SGI UV 2000 system will be the first of its kind in the world to operate with integrated Intel Xeon Phi coprocessors.
About Adaptive Computing
Adaptive Computing is the largest provider of High-Performance Computing (HPC) workload management software and manages the world’s largest cloud computing environment with Moab, a self-optimizing dynamic cloud management solution and HPC workload management system. Moab, a patented multidimensional intelligence engine, delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels and reduce operational costs. Adaptive Computing offers a portfolio of Moab cloud management and Moab HPC workload management products and services that accelerate, automate, and self-optimize IT workloads, resources, and services in large, complex heterogeneous computing environments such as HPC, data centers and cloud.
Source: Adaptive Computing