December 06, 2004
Chelsio Communications Inc. provided the critical technology to set new world records for network bandwidth and distance, dominating the annual Bandwidth Challenge competition at the Supercomputing 2004 conference in Pittsburgh earlier this month.
The University of Tokyo shattered the world record for the speed and distance of Internet communication, sending standard 1500-byte Ethernet packets more than 31,000 kilometers at more than 7 Gb per second, a bandwidth-distance product of more than 225,000 terabit meters per second, to win the SC2004 Bandwidth Challenge award for single-stream distance and bandwidth.
The Stanford Linear Accelerator Center (SLAC), together with its partners, achieved the highest aggregate bandwidth ever recorded to win the SC2004 Bandwidth Challenge award for sustained bandwidth, delivering 101 Gb per second, which quadruples the previous throughput record set a year ago.
Chelsio's T110 10GbE Protocol Engine provided the TCP/IP offload technology used in both the University of Tokyo's and SLAC's Bandwidth Challenge demonstrations, dramatically improving application performance. By offloading processor-intensive networking and storage protocol stacks from overburdened host CPUs, the T110 processes 10 Gigabit Ethernet traffic in hardware and returns processing cycles to applications, enhancing overall system performance. With an application-to-application latency of less than 10 microseconds, the T110 adapter enables 10 Gb Ethernet to be deployed in a wider range of enterprise data center and Grid computing applications. Because Ethernet is ubiquitous, it also promises a dramatically lower total cost of ownership by leveraging existing resources such as applications, management tools, and IT networking expertise.
The University of Tokyo established a new distance and speed record, sending standard 1500 byte Ethernet packets 31,248 kilometers from the exhibition in Pittsburgh through Tokyo to the CERN research facility in Geneva, Switzerland. This demonstration shatters the previous record by more than 80 percent.
"The TCP/IP offload capability of Chelsio's T110 is the first in the world to enable very high-speed, reliable TCP data transfer between very distant places. Our world record achievement could not have been realized without the flexibility and reliability of the T110 protocol engine," said Professor Kei Hiraki, chairman of the Computer Science Department at the University of Tokyo and leader of the Japanese Data Reservoir project. "Achieving such high performance using standard 1500-byte Ethernet packets is very important for practical use of this technology since use of larger, non-standard Jumbo packets on the internet has compatibility and reliability issues."
The data transfer was achieved between a pair of data-sharing Opteron systems from the Data Reservoir project, one server placed at the SC2004 exhibition booth of the University of Tokyo and another at CERN, each equipped with a Chelsio T110 10 Gigabit Ethernet adapter supporting TCP/IP offload. A transfer rate of 7.21 Gbps was sustained for more than 15 minutes using a single TCP stream and standard 1500-byte Ethernet frames over the 31,248 kilometer link.
The combined bandwidth-distance product of 225,298 terabit meters per second is a new world record, 80 percent greater than the previous Internet2 Land Speed Record of 124,935 terabit meters per second. At this transfer rate and distance, a full-length DVD can be transferred anywhere on Earth in about five seconds.
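The record arithmetic can be checked in a few lines of Python. Note that the DVD capacity of 4.7 GB used below is a common single-layer figure assumed here, not a number stated in the release:

```python
# Single-stream record: 7.21 Gbps sustained over 31,248 km.
rate_gbps = 7.21          # sustained TCP throughput, Gb/s
distance_km = 31_248      # Pittsburgh -> Tokyo -> CERN path length

# 1 Gb * 1 km = 1e9 bits * 1e3 m = 1e12 bit-meters = 1 terabit-meter,
# so multiplying Gb/s by km yields terabit meters per second directly.
tbm_per_s = rate_gbps * distance_km
print(f"{tbm_per_s:,.2f} terabit meters per second")   # ~225,298

# Improvement over the prior Internet2 Land Speed Record.
prior = 124_935
print(f"{100 * (tbm_per_s / prior - 1):.0f}% greater")  # ~80%

# Time to move one DVD (assumed 4.7 GB single-layer) at this rate.
dvd_gbits = 4.7 * 8       # gigabytes -> gigabits
print(f"{dvd_gbits / rate_gbps:.1f} s per DVD")         # ~5.2 s with this assumption
```

The exact per-DVD time depends on the disc capacity assumed; the release's "about five seconds" is consistent with typical single-layer figures.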
SLAC teamed up with Caltech, Fermilab, CERN, the University of Manchester, Sun Microsystems, and Chelsio to aggregate and transfer large data sets. Using a total of ten 10-gigabit links, the team successfully transferred an aggregate of 101 Gbps of data to host labs and research institutions around the world. This shattered the previous record of 23 Gb per second, set at the SC2003 conference, and exceeded the combined throughput of all other Bandwidth Challenge entries, past and present. At 101 Gbps, one could transfer all the books and other printed materials in the Library of Congress in under 15 minutes, or the equivalent of three full-length DVD movies in about one second.
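The aggregate-bandwidth comparisons work out as follows, assuming a commonly cited estimate of roughly 10 terabytes for the Library of Congress printed collection and 4.7 GB per single-layer DVD (neither figure appears in the release):

```python
# Aggregate record: 101 Gbps sustained across ten 10-gigabit links.
aggregate_gbps = 101

# Library of Congress printed collection, estimated here at ~10 TB.
loc_tb = 10
loc_gbits = loc_tb * 1000 * 8                      # terabytes -> gigabits
loc_minutes = loc_gbits / aggregate_gbps / 60
print(f"Library of Congress: {loc_minutes:.1f} minutes")   # ~13 min, "under 15"

# Three single-layer DVDs at an assumed 4.7 GB each.
three_dvds_gbits = 3 * 4.7 * 8
print(f"Three DVDs: {three_dvds_gbits / aggregate_gbps:.1f} s")  # ~1.1 s
```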
"Our tests showed that the TCP Offload Engine (TOE) interfaces performed reliably on uncongested local and trans-continental networks, achieving TCP throughputs with 1500 byte packets that were limited only by the bus bandwidth of the computers. Using two computers at each end, we were able to utilize all the bandwidth of a 10 gigabit per second path. The TOE also successfully reduced the CPU utilization by about a third compared to a non-TOE 10 Gb per second network interface," said Dr. Les Cottrell, assistant director of SLAC's Computing Services and leader of the SLAC Bandwidth Challenge team.
Chelsio's T110 10 Gigabit Ethernet adapters with full TCP/IP offload were used on many of the links and achieved 7.72 Gb per second throughput in one direction on one of the links.
The experiment provided a preview of the globally distributed Grid system now being developed in the US and Europe in preparation for the next generation of high energy physics experiments at CERN's Large Hadron Collider (LHC), scheduled to begin operation in 2007. The largest physics collaborations at the LHC, CMS and ATLAS, each encompass more than 2,000 physicists and engineers from 160 universities and laboratories around the globe. Optical networks incorporating multiple 10 Gigabit per second links are the foundation of the Grid system that will drive new scientific discoveries and lead to new models for how research and business are performed. Scientists will be empowered to form "virtual organizations" on a planetary scale, flexibly sharing their collective computing and data resources and leading to the deployment of a new generation of revolutionary Internet applications.
"This is a breakthrough for the development of global networks and Grids, as well as inter-regional cooperation in science projects at the high energy frontier. We demonstrated that multiple links of various bandwidths, up to the 10 Gbps range, can be used effectively over long distances," said Harvey Newman, professor of physics at Caltech. "There are profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable."