March 03, 2011
SUNNYVALE, CA, March 3, 2011 -- Juniper Networks (NYSE: JNPR) unveiled the industry's first Converged Supercore switch, with unprecedented scalability of up to 3,800 Terabits per second. By merging the core packet and optical network layers, Juniper has dramatically reduced complexity in the service provider network while improving the economics of supporting the increasingly unpredictable traffic patterns driven by the rapid growth of mobile, video and cloud computing applications.
Powered by the new purpose-built Junos Express chipset and the proven reliability of the Junos operating system, the Juniper Networks PTX Series Packet Transport Switch consolidates optical transport with the efficiency and scalability of packet switching to provide the engine of the new Converged Supercore, resulting in cost savings of up to 65 percent over legacy architectures.
Today's culture has created an "any service, anytime, anywhere" connection model. The proliferation of mobile, video, cloud computing and other data-intensive services is producing a steady increase in network traffic of more than 40 percent a year, with traffic increasingly characterized by unpredictable volume and patterns. Juniper's Converged Supercore solution is designed to enhance service provider economics while delivering on the subscriber expectations of the new always-connected life.
"As the vast majority of network traffic is packet-based, service providers need to rethink the architecture of the core transport network to scale with the demands of the IP world," said Stefan Dyckerhoff, executive vice president and general manager of Juniper's Platform Systems Group. "Juniper has once again delivered an industry first. The Converged Supercore will collapse the multilayered networks service providers are running today, giving them more scalability and simplicity."
The PTX Series is purpose-built to deliver the speed, scale and reduced cost of network ownership required to economically build the core infrastructure to support profitable service delivery. Compared to competitive core platforms, the PTX Series achieves:
* 4x the speed at 480 Gigabits per second (Gbps) per slot, expandable to 2 Terabits per second (Tbps) per slot
* 5x the packet processing capability per slot
* 1/3 the total power consumption, at 1 W/Gbps
* 10x the system scale, supporting up to 3,800 Tbps
"Service providers are quickly recognizing that simply throwing more bandwidth and hardware at their networks to address traffic problems is nothing more than a very costly and unsustainable band-aid," said Ray Mota, managing partner of industry analyst firm ACG Research. "By integrating the flexibility of MPLS switching with optical transport, Juniper has again set the bar higher and has blazed a new path where the network can flexibly and economically accommodate practically any traffic demand."
Single Junos Operating System Extends Network Investments
The PTX Series, in combination with the recently introduced QFabric™ solution, the T4000 Core Router and the MX Series 3D Universal Edge Router, delivers a unique architectural topology for the service provider network that promotes long-term investment protection by flexibly scaling total capacity to meet fluctuations in traffic demand. With Junos managing both the packet and optical domains, the first Converged Supercore switch extends Juniper's history of innovation to transport networks.
Live Demonstrations at OFC/NFOEC
Juniper will demonstrate the PTX Series in booth 2539 at OFC/NFOEC, the world's most comprehensive conference for optical communications, at the Los Angeles Convention Center, March 8-10.
The PTX Series includes the 8 Tbps PTX5000 and the 16 Tbps PTX9000, each supporting an initial 480 Gbps per slot throughput, and a suite of 10/40/100GE short-reach and ultra long-haul DWDM interfaces. The PTX Series will be available for beta trials in the third quarter of 2011.
About Juniper Networks
Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers, Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking. Additional information can be found at Juniper Networks (www.juniper.net).
Source: Juniper Networks
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb computational workloads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 | The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls "Climate in a Box," a system it describes as a desktop supercomputer.
May 16, 2013 | When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types featuring both CPU and GPU cores.
May 10, 2013 | Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013 | Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.