March 07, 2013
WASHINGTON, March 7 — Services like Google Maps use algorithms to determine the fastest route from point A to point B – even factoring in real-time traffic information as you travel to redirect you if, for example, a parade is blocking part of your route. Now, a team of researchers from Spain and Japan has achieved this kind of traffic control for the connections in optical networks by using a new dynamic network management system – and it does Google Maps one better. If necessary, the flexible-grid system can also redirect the traffic-congesting parade to another street (by re-arranging one or more existing connections), so you (a single new connection) wouldn't have to go out of your way to avoid gridlock.
Ramon Casellas, a research associate at the Catalonia Technological Center of Telecommunications (CTTC) near Barcelona, will describe the system developed by his team and colleagues at KDDI R&D Labs in Japan at the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) March 17-21 in Anaheim, Calif. The research represents one of many OFC/NFOEC talks on future network capabilities made possible by software-defined networking, a popular topic at this year's event.
This particular system design combines two elements: an OpenFlow controller and a so-called "stateful" path computation element (PCE). An OpenFlow controller uses a protocol that allows the behavior of a network device – regardless of its manufacturer – to be remotely configured and, Casellas says, "by extension, provides a way to operate a network using a logically centralized element that can see the network as a whole." This enables packets of data to navigate a path through the network's switches much more efficiently than with traditional routing protocols, as if there were multiple, coordinated remote traffic controllers helping to guide the network.
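To make that idea concrete, here is a minimal sketch, in Python, of the logically centralized pattern Casellas describes: a single controller holds the whole network view and pushes forwarding rules to each switch along a computed path. The class and method names are illustrative assumptions, not a real OpenFlow library's API.

```python
# Minimal sketch of logically centralized control in the OpenFlow style.
# Hypothetical classes for illustration: not a real OpenFlow library.

class Switch:
    """A network device whose forwarding behavior is set remotely."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match -> action, like an OpenFlow flow table

    def install_rule(self, match, action):
        self.flow_table[match] = action


class Controller:
    """Sees the network as a whole and programs every switch on a path."""
    def __init__(self, switches, links):
        self.switches = switches  # global view: every switch in the network
        self.links = links        # ...and every link between them

    def set_up_path(self, flow_id, path):
        # Install one forwarding rule per hop along the computed path.
        for hop, next_hop in zip(path, path[1:]):
            self.switches[hop].install_rule(flow_id, ("forward", next_hop))


switches = {name: Switch(name) for name in ("A", "B", "C")}
ctrl = Controller(switches, links=[("A", "B"), ("B", "C")])
ctrl.set_up_path("flow-1", ["A", "B", "C"])
print(switches["A"].flow_table)  # {'flow-1': ('forward', 'B')}
```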
A PCE, in simple terms, is a dedicated computer that finds network routes between endpoints. "The functions of a PCE are conceptually similar to Google Maps or GPS navigation systems," Casellas says. A stateful PCE, he says, is smarter because it keeps track of current connections and considers them to improve and dynamically correct the path computations for all of the connections in the network. Because the existing connections are stored in an internal database, advanced algorithms can use information about them to enhance network speed and efficiency, optimizing the active connections as a whole rather than individually.
"The underlying idea," Casellas explains, "is that having extra information is helpful to improve the performance of the path computation, and thus the network. An active, stateful PCE also can affect the status of the active connections. For example, an active, stateful PCE is able to re-arrange active connections to allocate new ones."
Essentially, the system knows every connection on a network and what it is doing at any given time, and it can reroute those connections midstream as new connections come into the network. Casellas and his colleagues successfully tested their system by using it to dynamically control the optical spectrum in the fibers of a flexi-grid optical network. In such networks, he says, the intrinsic constraints of the optical technology – caused, for example, by physical defects in the network – justify the deployment of PCEs.
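Flexi-grid here refers to the ITU-T G.694.1 flexible grid, which places nominal central frequencies at 193.1 THz + n × 6.25 GHz and gives each connection a spectrum slot whose width is a multiple of 12.5 GHz. The toy allocator below tracks one fiber's spectrum in 12.5 GHz slices with a first-fit policy; the sizes and policy are assumptions for illustration, not the paper's scheme.

```python
# Toy flexi-grid spectrum bookkeeping on a single fiber (ITU-T G.694.1):
# central frequencies at 193.1 THz + n * 6.25 GHz, slot widths m * 12.5 GHz.

SLICE_GHZ = 12.5  # spectrum tracked in 12.5 GHz slices
N_SLICES = 32     # slices available on this toy fiber

def central_freq_thz(n):
    """Nominal central frequency for grid index n."""
    return 193.1 + n * 0.00625  # 6.25 GHz granularity, in THz

def first_fit(used, m):
    """Index of the first run of m contiguous free slices, or None."""
    run = 0
    for i, busy in enumerate(used):
        run = 0 if busy else run + 1
        if run == m:
            return i - m + 1
    return None

used = [False] * N_SLICES
start = first_fit(used, m=3)  # e.g. a 37.5 GHz superchannel
for s in range(start, start + 3):
    used[s] = True
print(f"slices {start}..{start + 2} allocated, {3 * SLICE_GHZ} GHz wide")
print(f"grid index n=4 -> {central_freq_thz(4)} THz")  # 193.125 THz
```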
"Combining a stateful PCE with OpenFlow provides an efficient solution for operating transport networks," says Casellas. "An OpenFlow controller and a stateful PCE have several functions in common but also complement each other, and it makes sense to integrate them. This allows a return on investment and reduces operational expenses and time-to-market."
Casellas' presentation at OFC/NFOEC, titled "An Integrated Stateful PCE/OpenFlow controller for the Control and Management of Flexi-Grid Optical Networks," will take place Wed., March 20 at 3:45 p.m. in the Anaheim Convention Center.
For more than 35 years, the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference (OFC/NFOEC) has been the premier destination for converging breakthrough research and innovation in telecommunications, optical networking, fiber optics and, recently, datacom and computing. Consistently ranked in the top 200 tradeshows in the United States, and named one of the Fastest Growing Trade Shows in 2012 by TSNN, OFC/NFOEC unites service providers, systems companies, enterprise customers, IT businesses, and component manufacturers, with researchers, engineers, and development teams from around the world. OFC/NFOEC includes dynamic business programming, an exposition of more than 550 companies, and cutting-edge peer-reviewed research that, combined, showcase the trends and pulse of the entire optical communications industry.