November 28, 2005
For the first time in a real-world environment, Pacific Northwest Gigapop (PNWGP) and its strategic partners brought together more than one-half terabit per second (500 Gbps) of bandwidth in deploying SCinet, the high-performance network built to support Supercomputing 2005 (SC05) in Seattle. The network was provisioned over multiple dark fiber strands run by the University of Washington from the convention center to major telecommunications facilities in the city.
DWDM gear from Ciena, Cisco and Nortel was used to provision more than fifty 10 Gbps circuits and a native 40 Gbps circuit.
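As a quick sanity check on the headline figure, the circuit counts alone account for the half terabit; a minimal back-of-the-envelope calculation in Python, assuming the stated floor of exactly fifty 10 Gbps circuits:

    # Back-of-the-envelope check of SCinet's aggregate bandwidth.
    # Assumes the stated floor: exactly 50 of the 10 Gbps circuits.
    ten_gig_circuits = 50
    aggregate_gbps = ten_gig_circuits * 10 + 40  # plus the native 40 Gbps circuit
    print(aggregate_gbps)        # 540
    print(aggregate_gbps > 500)  # True: more than half a terabit per second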
These circuits were then connected to numerous high-bandwidth national backbones, including National LambdaRail, CANARIE, Internet2's Abilene Network and UltraScience Net. International networks worked with these North American facilities to reach the Seattle venue. In particular, Pacific Rim networks in Japan, Korea, Taiwan and Australia were able to use the Pacific Wave distributed peering exchange facility, a joint project between PNWGP and CENIC.
"As a direct result of many strategic investments by the University of Washington and the Pacific Northwest Gigapop, Seattle is one of the few places in the world where SC05 could benefit from an abundance of first-rate networking resources including metropolitan fiber, carrier-grade telecommunications facilities, a world-class engineering team, and an ever growing concentration of national and international networks," said Steve Corbato, director of network initiatives for Internet2.
"This staggering amount of bandwidth," he continued, "was deployed seamlessly and provides a truly impressive demonstration of the rapidly evolving suite of network capabilities in support of leading-edge computational science."
Among the many demonstrations relying on this bandwidth were massive storage and data-retrieval tools, the Internet2 Land Speed Record attempts (IPv4 and IPv6), data grids, multipoint real-time high-definition video from points around the world, super-high-definition video and massive 3-D imaging.
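For readers unfamiliar with the metric, the Internet2 Land Speed Record scores an attempt by multiplying sustained throughput by the terrestrial distance between the endpoints, so long paths count as much as fat pipes. A hedged illustration of the scoring, with made-up inputs rather than the actual SC05 figures:

    # Illustration of the Internet2 Land Speed Record metric:
    # score = sustained throughput (bits/s) x terrestrial distance (meters).
    # Both inputs below are hypothetical, not the actual SC05 attempt figures.
    throughput_bps = 7e9   # a sustained 7 Gbps transfer (assumed)
    distance_m = 30_000e3  # a 30,000 km intercontinental path (assumed)
    score = throughput_bps * distance_m
    print(f"{score:.1e} bit-meters/second")  # 2.1e+17, i.e. 210 petabit-meters/s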
Professor Larry Smarr, director of the California Institute for Telecommunications and Information Technology (Calit2), and principal investigator of the National Science Foundation's OptIPuter project, offered this observation: "The Terabit Era has arrived. This unprecedented achievement of PNWGP and SC05 demonstrates that the United States needs to broaden its strategic technology leadership agenda from a focus on faster individual supercomputers to supernetwork-connected resources on a global scale."
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
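The paper's model is not reproduced here, but the core idea, aggregating heterogeneous pools behind one submission interface, can be sketched. A minimal illustration in Python, with every name invented for this example; this is not the UberCloud implementation:

    # Minimal sketch of a resource-federation pattern: a single submission
    # interface fronting heterogeneous compute pools. All names are invented
    # for illustration.
    from dataclasses import dataclass

    @dataclass
    class Pool:
        name: str
        free_cores: int

    class Federation:
        def __init__(self, pools):
            self.pools = pools  # in-house clusters, cloud pools, etc.

        def submit(self, job, cores):
            # Greedy placement: first pool with enough free cores wins.
            for p in self.pools:
                if p.free_cores >= cores:
                    p.free_cores -= cores
                    return f"{job} -> {p.name}"
            return f"{job} -> queued (no pool has {cores} free cores)"

    fed = Federation([Pool("campus-cluster", 64), Pool("cloud-pool", 512)])
    print(fed.submit("pillar-flow-cfd", 128))  # placed on cloud-pool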
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
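The talk's details aren't reproduced here, but the underlying burst-at-peak pattern is simple to state. A minimal sketch, with capacities and job sizes assumed purely for illustration:

    # Minimal sketch of peak-load "cloud bursting": run jobs in-house until
    # demand exceeds local capacity, then overflow to a cloud pool.
    # Capacities and job sizes are hypothetical.
    IN_HOUSE_CORES = 10_000

    def place(job_cores, cores_in_use):
        if cores_in_use + job_cores <= IN_HOUSE_CORES:
            return "in-house"
        return "cloud"  # burst: peak demand exceeds in-house HPC capacity

    print(place(2_000, 7_000))  # in-house
    print(place(2_000, 9_500))  # cloud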
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of those obstacles.
May 16, 2013 | When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
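Why distance hurts tightly coupled CFD can be seen with a simple per-iteration cost model. A hedged sketch, where every number is an illustrative assumption rather than a Bonn measurement:

    # Toy per-iteration cost model for tightly coupled CFD: each solver
    # iteration does local compute plus latency-bound halo exchanges.
    # All numbers are illustrative assumptions, not the Bonn measurements.
    def efficiency(compute_ms, exchanges_per_iter, rtt_ms):
        comm_ms = exchanges_per_iter * rtt_ms
        return compute_ms / (compute_ms + comm_ms)

    for rtt_ms in (0.1, 1.0, 20.0):  # cluster fabric vs. LAN vs. wide-area cloud
        print(f"RTT {rtt_ms:5.1f} ms -> efficiency {efficiency(10, 4, rtt_ms):.2f}")
    # 0.1 ms: 0.96   1.0 ms: 0.71   20.0 ms: 0.11 -- wide-area latency dominates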
May 10, 2013 | Australian visual effects company Animal Logic is considering a move to the public cloud.
May 10, 2013 | Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, and from drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that deliver affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.