November 18, 2010
NEW ORLEANS, SC10, November 17, 2010 -- The OpenFabrics Alliance (OFA), an open source community delivering powerful I/O solutions, today announced that more than 44 percent of the just-released TOP500 list of the most powerful computers in the world utilize OpenFabrics Enterprise Distribution (OFED™) software for parallel computing, low-latency interconnects, and/or file-system operations. The OFA also announced dates and a new location for its 2011 OpenFabrics Alliance International Workshop; the event will be held April 3-6, 2011, in Monterey, CA.
OFED is used by 218 systems on the TOP500, up from 182 in November 2009, and by 61 percent of the top 100 systems. OFED’s optimization and performance capabilities provide systems with CPU efficiency as high as 96 percent. OpenFabrics software provides high-performance computing sites and enterprise data centers with flexibility and investment protection as computing evolves toward applications that require extreme speeds, massive scalability and utility-class reliability.
“OFED is experiencing growing adoption by vendors and users worldwide,” said Jim Ryan, chair of the OFA. “The increased adoption we’re seeing by systems on the TOP500 year to year validates the performance capabilities of OFED at the high end; we’re also seeing increased deployments in the enterprise data center where OFED demonstrates those same high-bandwidth, low-latency and low-CPU-utilization benefits.”
The TOP500 list (www.TOP500.org) is published twice a year and ranks the most powerful computers worldwide, providing valuable statistics for tracking trends in supercomputer performance and architectures.
OFA Announces Dates, New Location for 2011 International Workshop
The 2011 OpenFabrics Alliance International Workshop will be held April 3-6, 2011, in Monterey, CA. This 7th annual international workshop will explore new directions for OFED, including:
* Breaking through the Exascale barrier
* Defining OFED for the Cloud
* Enabling unprecedented efficiencies
The workshop is open to anyone – developers, technologists, supporters, end users, business professionals – with the desire to provide collaborative input into the direction and mechanics of OFED. Non-members welcome. For more information or to request a speaking opportunity, please contact email@example.com.
OFA at SC10 This Week
This week, OFA is exhibiting at SC10 in New Orleans in booth #1161, featuring interactive demos and presentations. Remote Direct Memory Access (RDMA)-based performance demos featuring OpenFabrics Enterprise Distribution (OFED):
* Bay Microsystems: Extending InfiniBand Globally
* Chelsio & Intel: iWARP Interoperability – Robust, Proven Low Latency Ethernet Clustering
* Mellanox: 3D Real-Time Visualization over InfiniBand
* Obsidian: Long haul encrypted InfiniBand over 10Gb Ethernet
* System Fabric Works: Cloud Computing for HPC Applications
Additionally, Paul Grun, chief scientist at System Fabric Works and member of the OFA’s steering committee, will present a Birds of a Feather session on November 16 at 12:15 p.m. on the topic “RDMA over Converged Ethernet (RoCE) - Next Generation RDMA Network.” The session will present the concept and theory of RoCE and include a panel discussion.
About the OpenFabrics Alliance
The OpenFabrics Alliance (OFA) is a 501(c)(6) non-profit association that develops, tests, licenses and distributes the OpenFabrics Enterprise Distribution (OFED) – cross-platform, open-source software for high-performance, low-latency and energy-efficient computing. OFED is used in business, research and scientific environments that require fast and efficient networks, storage connectivity and parallel computing. OFED is free and is included in major Linux distributions, as well as Microsoft Windows. In addition to distributing OFED, the OFA conducts interoperability testing to ensure all releases meet multi-vendor enterprise requirements for security, usability and reliability. For more information about the OFA, visit www.openfabrics.org.
Source: OpenFabrics Alliance
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and scale their problems across them. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
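To see why distance alone dooms tightly coupled workloads, it helps to compare the speed-of-light lower bound on wide-area round-trip time with a local low-latency interconnect. The sketch below is a back-of-the-envelope illustration; the distance and interconnect figures are illustrative assumptions, not measurements from the Bonn study.

```python
# Back-of-the-envelope estimate of wide-area round-trip latency versus a
# local low-latency interconnect. All numbers are illustrative assumptions.

C_FIBER_KM_PER_MS = 200.0  # light in optical fiber covers roughly 200 km per millisecond


def min_round_trip_us(distance_km: float) -> float:
    """Speed-of-light lower bound on round-trip time, in microseconds."""
    one_way_ms = distance_km / C_FIBER_KM_PER_MS
    return 2 * one_way_ms * 1000.0


# Hypothetical example: a user ~6000 km from a cloud region.
wan_rtt_us = min_round_trip_us(6000)  # 60 ms round trip, before any queuing
local_rtt_us = 2.0                    # low-latency interconnects achieve a few microseconds

print(f"WAN lower bound: {wan_rtt_us:.0f} us")
print(f"Slowdown vs. local interconnect: {wan_rtt_us / local_rtt_us:,.0f}x")
```

Even in this best case, with zero switching or queuing delay, the wide-area round trip is four orders of magnitude slower than a cluster-local interconnect, which is why latency-sensitive CFD solvers must run entirely inside one cloud region rather than spanning sites.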
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, and do so with technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.