August 14, 2006
Fujitsu Microelectronics America Inc. (FMA) and Fujitsu Laboratories of America Inc. (FLA) introduced a 20-port, 10 Gbps Ethernet
(10 GbE) switch IC, the MB8AA3020, designed to meet the performance
demands of high-density backplane switching for Advanced TCA (ATCA),
MicroTCA, blade server and data center applications.
The new 20-port 10 GbE switch chip delivers more than 400 Gbps of
non-blocking, aggregate switching bandwidth through 3 Mbytes of
proprietary, multi-stream shared buffer memory, with on-chip 10 Gbps SerDes.
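As a quick sanity check on that figure, counting both directions of each full-duplex port recovers the quoted aggregate. This is an illustrative back-of-the-envelope calculation, not vendor data:

```python
# Back-of-the-envelope check of the quoted aggregate switching bandwidth.
PORTS = 20            # 20 x 10 GbE ports (from the release)
PORT_RATE_GBPS = 10   # each port runs at 10 Gbps
DUPLEX_FACTOR = 2     # full duplex: transmit and receive both counted

aggregate_gbps = PORTS * PORT_RATE_GBPS * DUPLEX_FACTOR
print(aggregate_gbps)  # 400
```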
"With the introduction of 10 Gbps connectivity for blade servers, the
increasing need for high-speed aggregation in the enterprise core, and
the proliferation of Ethernet into service provider networks, 10 Gbps
Ethernet is now becoming an important market. It will account for a
significant share of Ethernet switching equipment revenues by 2010,"
said Peter Middleton, principal research analyst with Gartner Inc.
The chip has been designed to provide high quality of service
for data center and carrier networks through state-of-the-art
virtualization, backward congestion notification (BCN) and
priority PAUSE features. An on-chip micro-engine drives the
simple-to-use management interface. The IC, which has the smallest
footprint in its class, also supports the CX4 interface and adaptive
equalization with on-chip SerDes. The new switch complements Fujitsu's
family of 12-port, 10 GbE devices, which have been widely deployed in
high-performance servers worldwide.
"The bladed equipment market's sharp turn toward ASSPs to develop
standards-based blade server and ATCA architectures in the past two
years translates into solid demand for a chip with these features,"
stated Jag Bolaria, senior analyst with the Linley Group. "Fujitsu's
20-port 10 Gbps Ethernet switch chip brings powerful congestion
management and scalability to the makers of blade servers and access
equipment."
Fujitsu's new switch chip embeds 20 high-bandwidth, full-duplex
10 Gbps ports in a single, integrated 1,156-pin FCBGA package. The
10 Gbps serial ports provide a twofold, industry-leading advantage.
First, they eliminate the need for expensive off-chip XAUI to XFI
SerDes, allowing direct connections to optical XFP modules on any port
and helping to significantly reduce cost, latency, power consumption,
and board space. Second, the ports reduce the routing overhead inherent
in running 4 x 3.125G lanes.
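The 4 x 3.125G figure reflects XAUI's 8b/10b line coding, which spends 10 baud on the wire for every 8 data bits. A short sketch of the arithmetic follows; the coding-overhead framing is standard IEEE 802.3, not something specific to this chip:

```python
# Why four 3.125 Gbps XAUI lanes carry only 10 Gbps of payload:
# 8b/10b coding transmits 10 baud per 8 data bits (80% efficiency).
lanes = 4
lane_rate_gbps = 3.125

raw_gbps = lanes * lane_rate_gbps      # 12.5 Gbps on the wire
payload_gbps = raw_gbps * 8 / 10       # after 8b/10b coding overhead
print(payload_gbps)                    # 10.0
```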
"The high-density MB8AA3020 chip is in a class by itself," stated
Asif Hazarika, senior manager of FMA's Networking Solutions Business Group. "This
next-generation switch chip sets new standards of delivery for
enterprise, data-center and carrier-switching-systems suppliers. It
provides optimal switching capacity, low latency, a wide range of
switching priorities, superior security, and previously unavailable
congestion-management capabilities. With the 10 Gbps serial interface,
the new chip will cost-effectively meet the requirements for backplane
switching in the ATCA, MicroTCA and blade-server environments."
"With current trends in traffic growth and complex multimedia
services, network and data-center operators must have a quantum leap in
layer-2 switching system performance," added Akira Hattori, FLA's senior vice president of Advanced
Interconnect Technology. "These networks demand 400+ Gbps of
non-blocking switching capacity and double or triple the memory
capacity of the best available alternative. They need very low latency
and the flexibility of 10G serial, XAUI or CX4 interface capability on
all ports. The MB8AA3020 chip fulfills all of these key switching
requirements at the lowest power consumption, highest density and
smallest footprint in the industry."
The new chip offers 400 Gbps of non-blocking, aggregate switching
capacity in both cut-through and store-and-forward modes of operation.
With a 300ns, pin-to-pin switching latency including SerDes in
cut-through mode, the chip is ideal for high-density, latency-sensitive
applications. Some competing devices claim lower latencies but do not
count the additional 300 ns of round-trip delay that would be incurred
in the off-chip SerDes between the XAUI and XFP interfaces.
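To compare quoted latencies on an equal footing, one could normalize them with a small helper. The function below is purely illustrative, using the roughly 300 ns round-trip SerDes penalty cited in the release; the name and signature are assumptions, not part of any vendor tool:

```python
# Illustrative normalization of quoted switch latencies: add the
# ~300 ns round-trip penalty (cited in the release) when the quoted
# figure excludes an off-chip XAUI-to-XFI SerDes.
OFFCHIP_SERDES_PENALTY_NS = 300

def effective_latency_ns(quoted_ns: int, offchip_serdes: bool) -> int:
    """Return a comparable latency figure, charging the SerDes
    round-trip delay to devices that keep the SerDes off chip."""
    return quoted_ns + (OFFCHIP_SERDES_PENALTY_NS if offchip_serdes else 0)
```

For example, a competitor quoting 200 ns without on-chip SerDes would land at an effective 500 ns, versus the MB8AA3020's quoted 300 ns pin-to-pin figure that already includes SerDes.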
An on-chip micro-engine embedded in the MB8AA3020, together with the
2 x 10/100/1GbE management interface, provides a high-level API for
switch software, reducing the amount of software development required
on the host processor.
Key to advanced service-level delivery, the 20-port chip provides
eight priority classifications per port, enabling priority switching
based on DiffServ, MAC address, VLANs, extended VLANs and ports. In
addition, the MB8AA3020 provides data center and carrier-grade Ethernet
features like priority PAUSE, backward congestion notification and
early-packet-drop capabilities for congestion management. These
features, in combination with the industry's largest buffer memory,
allow the MB8AA3020 to handle both best-effort and
guaranteed-high-availability customer traffic in a single chip.
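How early packet drop over a shared buffer might work can be sketched with per-class occupancy thresholds: as the shared buffer fills, low-priority frames are refused first so guaranteed traffic keeps its headroom. The `admit` function and its thresholds below are hypothetical illustrations only, not the MB8AA3020's actual algorithm or parameters:

```python
# Hypothetical sketch of early packet drop over a shared buffer.
# Only the 3 MB buffer size comes from the release; the admission
# rule and per-class thresholds are invented for illustration.
BUFFER_BYTES = 3 * 1024 * 1024   # 3 Mbytes of shared buffer memory

def admit(frame_bytes: int, occupancy: int, priority: int) -> bool:
    """Accept a frame only if the buffer stays under the threshold
    for its priority class (0 = best effort .. 7 = highest)."""
    threshold = BUFFER_BYTES * (0.5 + 0.0625 * priority)
    return occupancy + frame_bytes <= threshold
```

Under this toy rule a best-effort frame is dropped once the buffer is half full, while class-7 traffic can still use most of the remaining memory.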
In addition to the standard 4K VLAN and QinQ capabilities, the new
chip provides 64 extended VLAN addresses, which can be used to
logically partition operator networks without QinQ. The layer-2
capabilities are further enriched by IGMP and MLD snooping, as well as
by DiffServ for both IPv4 and IPv6.
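The "4K VLAN" figure comes from the 12-bit VLAN ID field in the IEEE 802.1Q tag, which allows 2^12 = 4096 identifiers. A minimal sketch of the arithmetic; the `parse_vid` helper is illustrative, not chip firmware:

```python
# Where the "4K VLAN" figure comes from: the 802.1Q VLAN ID is 12 bits.
VID_BITS = 12
vlan_id_space = 2 ** VID_BITS   # 4096 possible IDs, hence "4K VLAN"

def parse_vid(tci: int) -> int:
    """Extract the 12-bit VLAN ID from an 802.1Q Tag Control
    Information (TCI) field; the upper 4 bits carry PCP/DEI."""
    return tci & 0x0FFF

print(vlan_id_space)  # 4096
```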