December 12, 2005
Global Crossing has made network performance history as the high-speed network provider
for an experiment that set a world record in transatlantic
visualization. Scientists from the Netherlands established a world
record by generating the largest transatlantic real-time data stream to
date for ultra-high-resolution visualization.
The record was set by SARA, the academic computer center in
Amsterdam, which displayed a visualization stream of 19.5 Gbps between
NetherLight, the GLIF Open Lightpath Exchange (GOLE) in Amsterdam, and
San Diego. This new benchmark for a real-time transatlantic
data stream was established using multiple high-speed 10-Gbps
wavelengths supplied by Global Crossing to the Dutch research network SURFnet.
During the experiment, network usage peaked at 19.5 Gbps, with a
sustained rate of 18 Gbps -- a world record for bandwidth usage by a
single application carrying actual scientific content. The experiment
was conducted at the iGrid 2005 conference in San Diego where the
display was located. The conference included workshops and real-time
demonstrations of research innovations in LambdaGrid infrastructure in
support of advanced science applications. Global Crossing is one of the
major providers of lambdas to the Global Lambda Integrated Facility (GLIF).
"This was a unique event which set out to test the limits of very
high speed wide area networking to support data-intensive
applications," said Paul Wielinga, SARA's business unit manager for
high-performance networking. "The success of this experiment depended
heavily on Global Crossing's high-speed transatlantic connections, as
well as being able to source a virtual graphic card and a display that
could handle a visualization with a resolution of 100 million pixels."
The high bandwidth usage was required to refresh the large "tiled"
screen 20 times per second in order to achieve high levels of
resolution. The output was viewed on a display of 55 screens of the
Electronic Visualization Laboratory of the University of Illinois,
resulting in a total resolution of 17,600 x 6,000 pixels. SARA will
take the experiment forward in February next year, once again using
Global Crossing as the backbone to run high-resolution visualization
and videoconferencing simultaneously.
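The bandwidth figure can be sanity-checked with some back-of-envelope arithmetic (my own calculation, not from the article): a 17,600 x 6,000 display refreshed 20 times per second would need roughly 50 Gbps if every pixel were shipped uncompressed at 24-bit color, so the observed 19.5 Gbps peak works out to about 9 bits per pixel, suggesting the stream was compressed or only partially refreshed each frame.

```python
# Back-of-envelope check of the bandwidth numbers reported in the article.
width, height, fps = 17_600, 6_000, 20
pixels = width * height                      # ~105.6 million pixels

raw_bps = pixels * 24 * fps                  # uncompressed 24-bit color
print(f"{pixels / 1e6:.1f} Mpixels; raw uncompressed need: {raw_bps / 1e9:.1f} Gbps")

observed_bps = 19.5e9                        # peak rate reported in the article
bits_per_pixel = observed_bps / (pixels * fps)
print(f"observed stream works out to {bits_per_pixel:.1f} bits per pixel")
```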
John Legere, CEO of Global Crossing, said: "We are extremely proud
of the proven capabilities of our global fiber network to support
research experiments that are pushing forward the boundaries of
international computer grids. Setting records of this nature requires a
level of network performance and reliability that we consistently
deliver to the global research and education community."
For this latest record-breaking experiment, the infrastructure
between Amsterdam and San Diego consisted of a 20 Gbps connection set
up in close cooperation with SURFnet via the GOLEs NetherLight in
Amsterdam and StarLight in Chicago. The 2-D and 3-D data objects were
rendered live on a powerful visualization cluster in Amsterdam and
transported as a pixel stream via optical lambda networks to San Diego.
The availability of lambda networks opens the way for separation of
the visualization "engine," or high-end graphical computer, from the
high-resolution display. It enables real-time visualizations running
at close to 20 Gbps over transatlantic wide area networks. This has
other important implications, including allowing visualizations from a
central facility to be distributed to distant locations without the
need for data to leave a protected, enclosed environment. The
visualization of a medical procedure, for example, can be distributed
as an intensive pixel stream without sensitive information leaving the
hospital. This allows researchers and scientists to view large data sets
as a real-time image rather than have to store the data locally in
order to be able to view it.
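The separation of renderer and display described above can be sketched in miniature. The following is a hypothetical Python illustration of the idea (it is not the actual SARA/EVL software, which used a dedicated visualization cluster and tiled-display middleware): frames are "rendered" on one side, shipped as length-prefixed raw pixel bytes over a TCP socket, and reassembled on the display side, so the viewer never holds the underlying data set.

```python
# Minimal, hypothetical sketch of remote-visualization pixel streaming:
# render centrally, stream raw pixels over TCP, reassemble at the display.
import socket
import struct
import threading

FRAME_W, FRAME_H, BPP = 176, 60, 3   # toy stand-in for 17,600 x 6,000 x 24-bit

def render_frame(seq):
    """Stand-in renderer: deterministic pixel bytes for frame `seq`."""
    return bytes((seq + i) % 256 for i in range(FRAME_W * FRAME_H * BPP))

def send_frames(conn, n_frames):
    """'Visualization engine' side: render and stream each frame."""
    for seq in range(n_frames):
        payload = render_frame(seq)
        # Length-prefix each frame so the receiver knows where it ends.
        conn.sendall(struct.pack("!I", len(payload)) + payload)
    conn.close()

def recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream ended early")
        buf += chunk
    return buf

def receive_frames(conn, n_frames):
    """'Display' side: reassemble length-prefixed frames from the stream."""
    frames = []
    for _ in range(n_frames):
        (length,) = struct.unpack("!I", recv_exact(conn, 4))
        frames.append(recv_exact(conn, length))
    return frames

def demo(n_frames=3):
    """Run sender and receiver over a loopback connection."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]
    client = socket.socket()
    client.connect(("127.0.0.1", port))
    conn, _ = server.accept()
    sender = threading.Thread(target=send_frames, args=(client, n_frames))
    sender.start()
    frames = receive_frames(conn, n_frames)
    sender.join()
    conn.close()
    server.close()
    return frames
```

At the real experiment's scale, the same pattern runs over dedicated 10-Gbps lambdas instead of loopback TCP, but the design point is identical: only pixels cross the network, never the source data.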
Wielinga commented: "This experiment is just the beginning of a
concept, and we're considering other applications in the areas of
astro- and high-energy physics that could use this networking model.
None of this would be possible without access to dedicated high
bandwidth capacity. We are pushing the limits of technology and we are
now studying improved network protocols to overcome latency over long
distances and to use even higher bandwidth more efficiently."
Global Crossing's collaboration with the research community in the
pursuit of new standards for high-performance networking goes back to
2002, when Global Crossing supported SURFnet and its international
partners in setting a new intercontinental Internet2 land speed record. The
record at that time was set by transferring the equivalent of the
contents of an entire compact disc across more than 7,608 network miles
between Alaska and Amsterdam in 13 seconds.
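As a rough check of that earlier record (my own arithmetic, assuming a standard ~650 MB compact disc, since the article does not give an exact byte count), the implied sustained throughput was around 420 Mbps:

```python
# Rough throughput estimate for the 2002 land speed record.
# Assumption: a standard ~650 MB compact disc; the article gives no exact size.
cd_bytes = 650 * 1024 * 1024          # ~650 MiB
seconds = 13
throughput_bps = cd_bytes * 8 / seconds
print(f"~{throughput_bps / 1e6:.0f} Mbps sustained over 7,608 network miles")
```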