November 15, 2004
A team of high-performance computing engineers from the San Diego Supercomputer Center (SDSC) and IBM demonstrated expert management of large-scale data resources using innovative cyberinfrastructure tools at the 2004 Supercomputing Conference in Pittsburgh. Using StorCloud SAN-attached storage and IBM's General Parallel File System (GPFS), along with computation and visualization resources at various TeraGrid sites, the team presented a new computation and visualization to conference attendees. With these tools, Enzo scientists were able to watch the process of massive star formation and destruction.
"To achieve the promise of Grid computing, high-performance computing applications need coordinated access to the set of resources that comprise cyberinfrastructure -- superior compute platforms, on-demand remote data access, visualization tools and access to archival storage," said Fran Berman, director of SDSC. "The TeraGrid cyberinfrastructure offers these distinctive resources to high-performance applications."
The SDSC/IBM team received the award for the highest achieved StorCloud bandwidth and I/Os per second for its Enzo submission. As part of the submission, the team also set a world record by sorting a terabyte of random data in 487 seconds (8 minutes, 7 seconds), more than twice as fast as the previous record of 1,057 seconds (17 minutes, 37 seconds). The bandwidth achieved was 15 GB per second.
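Large-scale sort records of this kind are typically achieved with a range-partitioned parallel sort: records are scattered to nodes by key range, each node sorts its share locally, and the sorted partitions concatenate in order. The sketch below illustrates the idea in miniature; the function name and single-process simulation are illustrative, not the team's actual implementation.

```python
# Minimal sketch of a range-partitioned parallel sort, the general
# technique behind terabyte-scale sort benchmarks. Here the "nodes"
# are simulated in one process; in practice each bucket is sorted on
# a separate machine and the buckets stream out in key order.
import random

def parallel_sort(records, num_nodes):
    """Partition records into num_nodes key ranges, sort each range
    locally, and concatenate the ranges in order."""
    lo, hi = min(records), max(records)
    width = (hi - lo) / num_nodes or 1  # avoid zero-width buckets
    buckets = [[] for _ in range(num_nodes)]
    for r in records:
        idx = min(int((r - lo) / width), num_nodes - 1)
        buckets[idx].append(r)
    # Each "node" sorts its own bucket independently; because buckets
    # hold disjoint key ranges, concatenation needs no final merge.
    merged = []
    for b in buckets:
        merged.extend(sorted(b))
    return merged

data = [random.randrange(1_000_000) for _ in range(10_000)]
result = parallel_sort(data, 8)
```

The key property is that the expensive per-node sorts are independent, so total time scales with the largest partition rather than the full data set.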
The team also received the Best Spirit of the SCinet Bandwidth Challenge Award for enabling a scientific application to achieve 27 Gb per second over the TeraGrid network, utilizing more than 95 percent of the available bandwidth.
This computation illustrates how a scientist can schedule a computation and visualization in automatic succession at different sites, using the Grid Universal Remote metascheduler, without moving any files from one site to another. A global parallel file system that spans sites allows data to be shared without duplicating hardware and data at each site, making for a cost-effective, high-performance solution for partner sites. Wherever users go on the Grid, their files are available at any site that mounts the file system.
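The workflow above can be sketched as follows, assuming a global file system mounted at every site. The site names, file path, and `run()` helper are illustrative stand-ins, not the real Grid Universal Remote API.

```python
# Hypothetical sketch: chaining a compute job and a visualization job
# at two different sites over a shared global file system. The dict
# stands in for a GPFS mount that every site sees identically, so the
# second job reads the first job's output with no file transfer.
shared_fs = {}

def run(job_name, site, infile=None, outfile=None):
    """Run a job at `site`: read input from the shared file system
    and write output back to it, so no cross-site copy is needed."""
    data = shared_fs.get(infile, "init")
    result = f"{data} -> {job_name}@{site}"
    if outfile:
        shared_fs[outfile] = result
    return result

# Compute at one site, then visualize at another, in automatic
# succession; the output path never moves between sites.
run("enzo_compute", "SDSC", outfile="run01/halo.dat")
viz = run("visualize", "NCSA", infile="run01/halo.dat")
```

The design point is that the path `run01/halo.dat` means the same thing at both sites, which is what lets a metascheduler chain jobs across sites without a data-staging step.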
Also demonstrated was an important component of cyberinfrastructure. Using the Grid Universal Remote developed by SDSC team members, engineers were able to reserve resources across distributed sites in a coordinated fashion. User-settable reservations at SDSC and Purdue University provided the framework to make this possible.
The Grid Universal Remote allows users direct access to local cluster scheduling, within policy limits. Previously, this was only possible with manual intervention by system administrators.
"Our vision is to provide scientists with an easy-to-use, seamless environment that allows them to utilize all the unique distributed resources available on the Grid," said Berman. "The TeraGrid team really stepped up to the place on this challenge, providing an unprecedented level of team technology coordination."
Resources used included 120 TB of IBM TotalStorage DS4000 (FAStT) storage systems, as well as 80 processors serving out storage and data from the show floor to NCSA and SDSC. Computation was done on SDSC's premier high-performance compute system, DataStar.