November 14, 2011
SEATTLE, Nov. 14 — Adaptive Computing, manager of many of the world's largest supercomputing systems and an expert in HPC workload management and cloud management, and SGI, the trusted leader in technical computing, jointly announced today at Supercomputing '11 (booth #927) that the two companies have signed an agreement for SGI to distribute the full line of Moab HPC Suite and Moab Cloud Suite products. The agreement was formalized in response to SGI's technical computing customers, who require intelligent HPC workload management and cloud management to improve and better manage datacenter workflows. The patented Moab intelligence engine will support the SGI ICE family, designed for today's data-intensive problems. The SGI ICE platform raises the efficiency bar, scaling easily to meet virtually any processing requirement without compromising ease of use, manageability or price/performance.
"Companies today are dealing with ever-growing amounts of data to manage and process, and are under constant pressure to deliver results in record time," said Christian Tanasescu, vice president of software engineering at SGI. "With the addition of Adaptive's HPC and Cloud workload management software, SGI delivers self-optimized, automated and cost-effective environments."
Adaptive Computing offers HPC workload management and cloud management software that provides predictive scheduling across workloads and resources, accelerating the delivery of results and maximizing utilization while simplifying the management of complex, heterogeneous environments. Both the Moab HPC Suite and the Moab Cloud Suite are powered by Moab, a patented multi-dimensional intelligence engine that automates complex decisions and actions across workload requirements, heterogeneous resources and middleware, priorities and SLAs, and current and future time horizons to self-optimize cloud and HPC environments. Moab is the most scalable workload management architecture available, compatible with existing infrastructure such as the SGI ICE family, and extensible enough to manage environments as they grow and evolve to petaflop scale and beyond.
"Adaptive Computing is committed to supporting SGI with HPC workload and cloud management so that the two companies can jointly back many of the world's largest and most robust academic and government computing systems," said Dick Linville, vice president of business development and partner solutions at Adaptive Computing.
Moab is battle-tested in managing TOP500 and Fortune 500 enterprise systems, including the world's largest, most scale-intensive and complex HPC installations, among them about 40 percent of the top 10, top 25 and top 100 systems as ranked by http://www.Top500.org.
About SGI
SGI, the trusted leader in technical computing, is focused on helping customers solve their most demanding business and technology challenges. Visit sgi.com for more information.
About Adaptive Computing
Adaptive Computing manages the world's largest supercomputing environments with its self-optimizing dynamic cloud management solutions and HPC workload management systems, driven by Moab, a patented multi-dimensional intelligence engine. Moab delivers policy-based governance, allowing customers to consolidate and virtualize resources, allocate and manage applications, optimize service levels and reduce operational costs. Adaptive Computing is the preferred dynamic cloud and workload management solution for the leading global HPC and datacenter vendors. For more information, call (801) 717-3700 or visit http://www.adaptivecomputing.com.
Source: Adaptive Computing; SGI
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of them.
The private-sector industry least likely to adopt public cloud services for data storage is financial services. Because they hold the most sensitive and heavily regulated type of data, personal financial information, banks and similar institutions are mostly moving toward private cloud services, and doing so at great cost.
In this week's hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
May 08, 2013
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges and opportunities afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.