November 20, 2006
A first-ever demonstration of 100 Gigabit Ethernet (100 GbE) technology by a team of industry partners, including Finisar, Infinera, Internet2, Level 3 Communications, and the University of California at Santa Cruz, shows that 100 GbE technology is viable and can be implemented in existing optical networks using 10 Gigabit/second (Gb/s) wavelengths. This trial also highlights how next-generation technology can address the emerging bandwidth needs of network providers and their users as advanced Internet-based applications continue to proliferate.
The system successfully transmitted a 100 GbE signal from Tampa, Florida, to Houston, Texas, and back again, over ten 10 Gb/s channels through the Level 3 network. This is the first time a 100 GbE signal has been successfully transmitted through a live production network. The 100 GbE system will be on display from November 14th to the 16th at the Infinera booth (Booth no. 1157) at the SC06 International Conference in Tampa. The system will be transmitting a 100 GbE signal to the Internet2 booth (Booth no. 1451) during the show.
"This successful demonstration shows that this concept of 100 GbE over 10x10 Gb/s DWDM works and provides a near future implementation path," said Dr. Daryl Inniss, vice president of Ovum-RHK's Communication Components research.
"100 Gigabit Ethernet will be a critical technology to accommodate bandwidth growth, and this demonstration shows that we have the capability to implement this as a super-lambda service over today's networks," said Infinera co-founder and CTO Drew Perkins. "The Infinera DTN, which is the only DWDM system that supports 100 Gb/s on a line card, is capable today of handling 100 GbE services simply and cost-effectively."
"The research and education community continues to be the key driver for the development of extreme bandwidth services like 100 GbE," said Steve Cotter, Internet2's director of network services. "We are very interested in investigating this breakthrough technology, in collaboration with our network partners, to ensure that our network not only keeps pace but also anticipates the future demands of our members as they pursue increasingly bandwidth-intensive applications, from telemedicine to high-energy physics to high-performance Grid computing, among many others."
"This new approach to providing 100 Gig Ethernet service over long distances enables LAN Ethernet protocols in the WAN environment," said Jack Waters, CTO of Level 3. "Compared to other methods that have been demonstrated, this is a practical, economical solution that operates over the wide area using existing DWDM technologies. We're pleased to have been involved with developing and testing this solution, and will be watching closely as it is commercialized."
The largest IP backbones currently use multiple 10 Gb/s links between core sites, and will soon demand 100 Gb/s connections to keep up with fast-growing bandwidth demand. Many service providers prefer to support 100 Gigabit Ethernet links using their current transport network infrastructures. This demonstration shows that today's 10 Gb/s transport networks can support 100 GbE services. The system developed and displayed this week relies on a single-chip 100 GbE network interface that implements a lane alignment and packet resequencing scheme to bond 10 parallel 10 Gb/s channels into one logical flow while maintaining packet ordering at the receiver. This eliminates the performance issues that can arise with existing link aggregation techniques for combining multiple data channels. Services that combine multiple wavelengths to offer a single service are referred to as super-lambda services.
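The bonding idea described above can be sketched in a few lines. The following Python sketch is illustrative only, not the pre-standard specification itself (the demonstration used a single-chip hardware implementation): packets are tagged with sequence numbers, striped round-robin across ten lanes, and re-sequenced at the receiver with a k-way merge, so ordering is preserved even when the lanes experience different delays.

```python
import heapq

NUM_LANES = 10  # ten 10 Gb/s wavelengths bonded into one logical 100 GbE flow

def transmit(packets, num_lanes=NUM_LANES):
    """Tag each packet with a sequence number and stripe them round-robin across lanes."""
    lanes = [[] for _ in range(num_lanes)]
    for seq, payload in enumerate(packets):
        lanes[seq % num_lanes].append((seq, payload))
    return lanes

def receive(lanes):
    """K-way merge on sequence numbers: restores the global packet order even when
    lanes are skewed relative to one another (each lane stays in order internally)."""
    iters = [iter(lane) for lane in lanes]
    heap = []
    for i, it in enumerate(iters):
        head = next(it, None)
        if head is not None:
            heapq.heappush(heap, (head[0], i, head[1]))
    ordered = []
    while heap:
        seq, i, payload = heapq.heappop(heap)
        ordered.append(payload)
        nxt = next(iters[i], None)
        if nxt is not None:
            heapq.heappush(heap, (nxt[0], i, nxt[1]))
    return ordered
```

The contrast with plain link aggregation is the key point: link aggregation avoids reordering by pinning each flow to one member link, which caps any single flow at 10 Gb/s, while explicit sequence numbering lets one logical flow use all ten lanes at once.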
Finisar provided the optical transceivers for this demonstration, Infinera provided the DWDM system and project management, Internet2 was involved in developing the methodology and supporting the demonstration, Level 3 Communications provided the ten 10 Gb/s channels from Tampa to Houston, and UCSC designed and implemented the network interface including the packet resequencing scheme.
Video: A High-Speed Application
The research and education community is a leader in creating very large flows on the Internet, with some research institutions planning for flows of multiple hundreds of gigabits per second or even terabits per second. In a related demonstration at the Internet2 booth on the SC06 show floor, Internet2 and Infinera will also showcase an advanced two-way videoconferencing application. Reliable, two-way video technology is quickly becoming a critical and necessary component of many important research and education initiatives, including those in telemedicine, seismology, and astronomy. 100 GbE technology would enable more than 3000 DVTS (Digital Video Transport System) or more than 60 uncompressed high-definition TV (HDTV) video applications to operate simultaneously on a single interface.
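The stream counts quoted above are easy to sanity-check with back-of-the-envelope arithmetic. The per-stream rates below are typical approximations and not figures taken from the article: DVTS carries DV video at roughly 30 Mb/s, and uncompressed HD-SDI video runs at about 1.485 Gb/s.

```python
# Sanity check of the simultaneous-stream counts for one 100 GbE interface.
# Assumed per-stream rates (typical values, not from the original article):
LINK_GBPS = 100.0   # one 100 GbE interface
DVTS_MBPS = 30.0    # DVTS carries DV video at roughly 30 Mb/s
HDTV_GBPS = 1.485   # uncompressed HD-SDI video (SMPTE 292M)

dvts_streams = int(LINK_GBPS * 1000 / DVTS_MBPS)
hdtv_streams = int(LINK_GBPS / HDTV_GBPS)
print(dvts_streams, hdtv_streams)  # prints: 3333 67
```

Both figures are consistent with the "more than 3000" DVTS and "more than 60" HDTV streams claimed above.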
About the Technical Demonstration
The demonstration encodes a 100 GbE signal into ten 10 Gb/s streams using an Infinera-proposed specification for 100 GbE across multiple links. A single Xilinx FPGA implements this packet numbering scheme and electrically transmits all ten signals to ten of Finisar's 10 Gb/s XFP optical transceivers, which in turn convert the electrical signals to optical signals. These signals are then transmitted to an Infinera DTN DWDM system. For the long-distance demonstration, conducted last week, the 100 GbE signal was then handed off to Infinera systems within the Level 3 network, where it was transmitted across the Level 3 network to Houston and back. This pre-standard specification for 100 GbE guarantees the ordering of the packets and the quality of the signal across 10 Gb/s wavelengths, and demonstrates that it is possible for carriers to offer 100 GbE services across today's 10 Gb/s infrastructure.
The IEEE Higher Speed Study Group (HSSG) recently began working on specifications for higher speed Ethernet. The partners in this demonstration are actively supporting these efforts. The pre-standard specification used in this demonstration was jointly developed by Infinera and a UCSC team including Professor of Computer Engineering Anujan Varma and his Ph.D. student Arvinderpal S. Wander.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India, developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to provision resources at service levels comparable to those of in-house systems. They combined it with an on-demand resource provisioner to optimize virtual machine utilization.
Experimental scientific HPC applications are continually being moved to the cloud, as covered here in several capacities over the last couple of weeks. Among that coverage, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud discussing the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and the European Space Agency (ESA)...
When considering moving excess or experimental HPC applications to a cloud environment, there will always be obstacles. Were that not the case, the cost effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St Andrews produced an intriguing report on the state of cloud computing, devoting particular attention to the problems the field still faces.
Jun 19, 2013 |
Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, who trained at San Francisco’s Hack Reactor, an institute designed for intense, fast-paced programming instruction, took a program based on the N-Queens algorithm designed by the University of Cambridge’s Martin Richards and modified it to run in parallel across multiple machines.
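The divide-and-conquer approach such a project typically uses can be sketched as follows. This Python sketch is purely illustrative and is not the team's code or Richards's original program: the search space is partitioned by the column of the first queen, so each partition is an independent work unit that could run on a separate machine, with the partial counts summed at the end.

```python
def solve(n, row=0, cols=0, d1=0, d2=0):
    """Count N-Queens solutions from `row` down, given bitmasks of attacked
    columns (cols) and diagonals (d1, d2) as seen from the current row."""
    if row == n:
        return 1
    total = 0
    free = ~(cols | d1 | d2) & ((1 << n) - 1)
    while free:
        bit = free & -free  # lowest-numbered free column
        free -= bit
        total += solve(n, row + 1, cols | bit, (d1 | bit) << 1, (d2 | bit) >> 1)
    return total

def shard(n, first_col):
    """Independent work unit: solutions with the row-0 queen in `first_col`.
    Each shard could be dispatched to a different machine and summed afterwards."""
    bit = 1 << first_col
    return solve(n, 1, bit, bit << 1, bit >> 1)

total_8 = sum(shard(8, c) for c in range(8))
print(total_8)  # prints: 92 (the classic 8-queens solution count)
```

Because each shard touches no shared state, the only coordination needed between machines is distributing the column indices and collecting the partial sums.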
Jun 17, 2013 |
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service, partnering with Verne Global and its Icelandic datacenter, which is known for its focus on green computing.
Jun 12, 2013 |
Cloud computing is gaining traction among mid-sized institutions looking to expand their experimental high performance computing resources. To that end, IBM released what it calls Redbooks, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013 |
The San Diego Supercomputer Center launched a public cloud system for universities in the area, designed specifically to run on commodity hardware with high-performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users across the University of California system.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.