November 15, 2004
Chelsio Communications Inc. showcased the 10Gb Ethernet technology behind the newly established world record for Ethernet LAN speed and distance. Chelsio's 10GbE leadership was on display in a wide range of demonstrations and exhibits highlighting the high-performance, low-latency T110 10GbE Protocol Engine at the 2004 Supercomputing Conference.
Chelsio played a central role in the record-breaking 10 Gigabit Ethernet link that carried data from the University of Tokyo to the CERN research center in Geneva, Switzerland. Chelsio's T110 delivered a sustained 7.57 Gbps throughput running standard 1500-byte Ethernet packets over a single TCP connection across the 18,500 km link, using a uni-processor AMD Opteron system at each end of the connection. In the single-stream stress test, 7.5 Gbps was sustained for nearly 7 hours, a critical capability for long-distance data backup applications.
"Achieving the world record in our CERN-Tokyo experiments required unique capabilities available with Chelsio's T110 protocol engine," said Kei Hiraki of the University of Tokyo. "The programmability and flexibility of the T110 were critical to our success, and this level of intelligence must be a standard feature of 10G adapters for data transfer on long fat-pipe networks."
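The "long fat-pipe" remark is quantifiable: to keep a single TCP stream running at line rate, the sender must keep a full bandwidth-delay product (BDP) of data in flight. The sketch below estimates the BDP for this link from figures in the article (18,500 km, ~7.5 Gbps); the fiber propagation speed is an assumed textbook value (~2/3 of c), not a figure from the article.

```python
# Back-of-envelope bandwidth-delay product (BDP) for the Tokyo-CERN link.
# Distance and throughput come from the article; the propagation speed in
# optical fiber (~2e8 m/s, i.e. ~2/3 the speed of light) is an assumption.

LIGHT_SPEED_FIBER_M_S = 2.0e8   # assumed signal speed in fiber
PATH_KM = 18_500                # link distance from the article
THROUGHPUT_BPS = 7.5e9          # sustained single-stream rate from the article

one_way_delay_s = PATH_KM * 1000 / LIGHT_SPEED_FIBER_M_S
rtt_s = 2 * one_way_delay_s          # round-trip time
bdp_bits = THROUGHPUT_BPS * rtt_s    # data that must be in flight
bdp_mbytes = bdp_bits / 8 / 1e6

print(f"RTT ~ {rtt_s * 1000:.0f} ms, BDP ~ {bdp_mbytes:.0f} MB")
# -> RTT ~ 185 ms, BDP ~ 173 MB
```

A window on the order of 170 MB is far beyond default TCP settings of the era, which is why adapter-level intelligence and careful tuning mattered on this path.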
"The goal of 10Gb Ethernet is simple: drive the convergence of networking, storage and cluster computing interconnect onto ubiquitous Ethernet, eliminating the need for proprietary and niche networking technologies. Chelsio is driving this convergence by matching the performance attributes, such as high throughput and low latency, of competing interconnects such as Fibre Channel and InfiniBand," said Asgeir Eiriksson, chief technology officer for Chelsio. "To achieve this, the TCP termination function must be offloaded from the host CPU to a programmable TCP Offload Engine such as Chelsio's Terminator processor. Industry-leading throughput, low latency and multiple other benefits of our Terminator architecture are available only from Chelsio's T110 adapter."
"Chelsio's T110 protocol engine consistently delivered very high performance during our testing, reliably sustaining a single stream TCP transfer rate of 7.5 gigabits per second for most of our 7-hour stress test," said Catalin Meriosu of CERN. "I was impressed by how quickly Chelsio's TCP offload engine regained top transfer rate in the presence of isolated packet loss, which demonstrates the stability and robustness of their product."
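The remark about recovering quickly from isolated packet loss is significant because single-stream TCP throughput on a long path is acutely sensitive to loss. The widely cited Mathis et al. approximation, rate ≤ (MSS/RTT) · (1.22/√p), makes this concrete. The sketch below applies it with assumed illustrative values: a 1460-byte MSS (typical for the 1500-byte frames mentioned in the article) and a ~185 ms RTT plausible for an 18,500 km path; neither number is stated in the article.

```python
import math

# Mathis et al. steady-state approximation for single-stream TCP throughput:
#   rate <= (MSS / RTT) * (C / sqrt(p)),  with C ~= 1.22.
# MSS (1460 bytes) and RTT (0.185 s) below are illustrative assumptions.

def mathis_throughput_bps(mss_bytes: int, rtt_s: float,
                          loss_rate: float, c: float = 1.22) -> float:
    """Upper bound on single-stream TCP throughput, in bits per second."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

# At this RTT, even tiny loss rates cap a single stream far below 10 Gbps,
# which is why fast loss recovery matters so much on long fat pipes.
for p in (1e-6, 1e-8, 1e-10):
    gbps = mathis_throughput_bps(1460, 0.185, p) / 1e9
    print(f"loss rate {p:g}: <= {gbps:.3f} Gbps")
```

Under these assumptions, sustaining ~7.5 Gbps on one stream implies an effective loss rate around 10⁻¹⁰, which underlines why rapid recovery from any isolated loss was essential to the record.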
"Our success with the Tokyo to Geneva speed record is a testament to the flexibility and robustness of Chelsio's protocol offload adapter," said Kianoosh Naghshineh, president and CEO of Chelsio. "Chelsio's T110 protocol engine has proven its high performance and resiliency through several enterprise OEM qualifications, and has now demonstrated these capabilities in challenging topologies such as this transcontinental link."
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD workload on Amazon EC2 HPC instance types featuring both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.