August 14, 2006
Fujitsu Microelectronics America Inc. (FMA) and Fujitsu Laboratories of America Inc. (FLA) introduced a 20-port, 10 Gbps Ethernet
(10 GbE) switch IC, the MB8AA3020, designed to meet the performance
demands of high-density backplane switching for Advanced TCA (ATCA),
MicroTCA, blade-server and data-center applications.
The new 20-port 10 GbE switch chip delivers more than 400 Gbps of
non-blocking, aggregate switching bandwidth through 3 Mbytes of
proprietary, multi-stream shared buffer memory, with on-chip 10 Gbps SerDes.
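As a back-of-the-envelope sketch (not vendor code; the variable names are illustrative), the 400 Gbps aggregate figure follows directly from the port count and full-duplex operation:

```python
# Toy arithmetic: how a 20-port, full-duplex 10 GbE switch reaches
# 400 Gbps of aggregate switching bandwidth.

PORTS = 20
PORT_RATE_GBPS = 10      # line rate per port
FULL_DUPLEX_FACTOR = 2   # transmit and receive counted separately

aggregate_gbps = PORTS * PORT_RATE_GBPS * FULL_DUPLEX_FACTOR
print(aggregate_gbps)    # 400
```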
"With the introduction of 10 Gbps connectivity for blade servers, the
increasing need for high-speed aggregation in the enterprise core, and
the proliferation of Ethernet into service provider networks, 10 Gbps
Ethernet is now becoming an important market. It will account for a
significant share of Ethernet switching equipment revenues by 2010,"
said Peter Middleton, principal research analyst with Gartner Inc.
The chip has been designed to provide high quality of service
for data center and carrier networks through state-of-the-art
virtualization, backward congestion notification (BCN) and
priority PAUSE features. An on-chip micro-engine drives the
simple-to-use management interface. The IC, which has the smallest
footprint in its class, also supports the CX4 interface and adaptive
equalization with on-chip SerDes. The new switch complements Fujitsu's
family of 12-port, 10 GbE devices, which have been widely deployed in
high-performance servers worldwide.
"The bladed equipment market's sharp turn toward ASSPs to develop
standards-based blade server and ATCA architectures in the past two
years translates into solid demand for a chip with these features,"
stated Jag Bolaria, senior analyst with the Linley Group. "Fujitsu's
20-port 10 Gbps Ethernet switch chip brings powerful congestion
management and scalability to the makers of blade servers and access
equipment."
Fujitsu's new switch chip embeds 20 high-bandwidth, full-duplex
10 Gbps ports in a single, integrated 1,156-pin FCBGA package. The
10 Gbps serial ports provide a twofold, industry-leading advantage.
First, they eliminate the need for expensive off-chip XAUI to XFI
SerDes, allowing direct connections to optical XFP modules on any port
and helping to significantly reduce cost, latency, power consumption,
and board space. Second, the ports reduce the routing overhead inherent
in running 4 x 3.125 Gbps lanes.
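The lane arithmetic behind that second advantage can be sketched as follows (an illustrative calculation, not vendor code): XAUI carries 10 Gbps of payload over four 3.125 Gbaud lanes because of its 8b/10b line coding, so a single 10 Gbps serial lane matches its payload with a quarter of the board traces.

```python
# Why one 10 Gbps serial (XFI-style) lane carries the same payload
# as a 4-lane XAUI interface.

XAUI_LANES = 4
XAUI_BAUD_GBPS = 3.125       # per-lane signaling rate
CODING_EFFICIENCY = 8 / 10   # XAUI uses 8b/10b line coding

xaui_payload_gbps = XAUI_LANES * XAUI_BAUD_GBPS * CODING_EFFICIENCY
print(xaui_payload_gbps)     # 10.0

# A serial 10G port needs one differential pair per direction,
# versus four pairs per direction for XAUI: 4x fewer board traces.
```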
"The high-density MB8AA3020 chip is in a class by itself," stated
FMA's senior manager Asif Hazarika, who is heading FMA's Networking Solutions Business Group. "This
next-generation switch chip sets new standards of delivery for
enterprise, data-center and carrier-switching-systems suppliers. It
provides optimal switching capacity, low latency, a wide range of
switching priorities, superior security, and previously unavailable
congestion-management capabilities. With the 10 Gbps serial interface,
the new chip will cost-effectively meet the requirements for backplane
switching in the ATCA, micro-TCA and blade-server environments."
"With current trends in traffic growth and complex multimedia
services, network and data-center operators must have a quantum leap in
layer-2 switching system performance," added Akira Hattori, FLA's senior vice president of Advanced
Interconnect Technology. "These networks demand 400+ Gbps of
non-blocking switching capacity and double or triple the memory
capacity of the best available alternative. They need very low latency
and the flexibility of 10G serial, XAUI or CX4 interface capability on
all ports. The MB8AA3020 chip fulfills all of these key switching
requirements at the lowest power consumption, highest density and
smallest footprint in the industry."
The new chip offers 400 Gbps of non-blocking, aggregate switching
capacity in both cut-through and store-and-forward modes of operation.
With a 300 ns, pin-to-pin switching latency, including SerDes, in
cut-through mode, the chip is ideal for high-density, latency-sensitive
applications. Some competing devices claim lower latencies but do not
count the latency (an additional 300 ns round-trip delay) that would
be encountered through the off-chip SerDes between the XAUI interface
and the XFP module.
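The latency accounting above can be made concrete with a small sketch. The competitor's core figure below is hypothetical, invented purely for illustration; only the 300 ns values come from the text:

```python
# Rough latency accounting: on-chip SerDes versus external SerDes.

ON_CHIP_CUT_THROUGH_NS = 300         # pin-to-pin, SerDes included (per the text)

competitor_core_ns = 150             # hypothetical "lower" claimed core latency
external_serdes_round_trip_ns = 300  # added by off-chip XAUI-to-XFI SerDes

competitor_effective_ns = competitor_core_ns + external_serdes_round_trip_ns
print(competitor_effective_ns > ON_CHIP_CUT_THROUGH_NS)  # True
```

Even a device claiming half the core latency ends up slower once the external SerDes hop is counted.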
An on-chip micro-engine embedded in the MB8AA3020 and the 2 x
10/100/1GbE management interface provide the high-level API for switch
software, reducing the amount of software development required on the
host processor.
Key to advanced service-level delivery, the 20-port chip provides
eight priority classifications per port, enabling priority switching
based on DiffServ, MAC address, VLANs, extended VLANs and ports. In
addition, the MB8AA3020 provides data center and carrier-grade Ethernet
features like priority PAUSE, backward congestion notification and
early-packet-drop capabilities for congestion management. These
features, in combination with the industry's largest buffer memory,
allow the MB8AA3020 to handle both best-effort and
guaranteed-high-availability customer traffic in a single chip.
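A conceptual sketch of the two mechanisms described above follows. This is not the device's actual logic: the DSCP-to-class mapping and the drop threshold are invented for illustration; only the eight classes and the 3 MB shared buffer come from the text:

```python
# Toy model: map a frame to one of eight priority classes, then apply
# an early-drop check once a class exceeds its share of the buffer.

BUFFER_BYTES = 3 * 1024 * 1024   # 3 MB shared buffer, per the text
CLASSES = 8

def classify(dscp: int) -> int:
    """Map a DiffServ code point (0-63) onto 8 priority classes."""
    return dscp >> 3             # simple illustrative mapping

def early_drop(class_occupancy: int, threshold: int) -> bool:
    """Drop best-effort traffic early once a class exceeds its threshold."""
    return class_occupancy > threshold

prio = classify(46)              # DSCP 46 (expedited forwarding)
print(prio)                      # 5
print(early_drop(500_000, BUFFER_BYTES // CLASSES))  # True: over the class share
```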
In addition to the standard 4K VLAN and QinQ capabilities, the new
chip provides 64 extended VLAN addresses, which can be used to
logically partition operator networks without QinQ. The layer-2
capabilities are further enriched by IGMP and MLD snooping, as well as
by DiffServ for both IPv4 and IPv6.
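The partitioning idea can be sketched in a few lines (an illustration of the concept, not the chip's implementation; the port-to-partition table is an assumption made for the example):

```python
# Toy model: a small set of extended-VLAN IDs partitions an operator
# network logically, without stacking tags as QinQ would.

EXTENDED_VLANS = 64          # per the text: 64 extended VLAN addresses

# Each port is bound to one logical partition (extended VLAN ID).
port_to_partition = {1: 0, 2: 0, 3: 17, 4: 17}

def same_partition(in_port: int, out_port: int) -> bool:
    """Forwarding is only allowed inside one logical partition."""
    return port_to_partition[in_port] == port_to_partition[out_port]

print(same_partition(1, 2))  # True: both in partition 0
print(same_partition(2, 3))  # False: crosses a partition boundary
```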