October 11, 2011
URBANA-CHAMPAIGN, Ill., Oct. 10 -- The Extreme Science and Engineering Discovery Environment (XSEDE), the National Science Foundation cyberinfrastructure project that replaces and expands on the NSF TeraGrid, has transitioned its network backbone infrastructure to use the FrameNet service of National LambdaRail (NLR). This change, a switch from older technology used by TeraGrid, provides a more flexible infrastructure and saves substantial costs for connecting XSEDE service providers.
"We've eliminated the need for the TeraGrid-owned capital equipment," said Linda Winkler, who leads the XSEDE networking group, "making a more direct interconnect between XSEDE sites. This will save hundreds of thousands of dollars in equipment cost, as well as tens of thousands of dollars in annual maintenance."
XSEDE has transitioned to the NLR FrameNet service, part of the nationwide optical-fiber infrastructure of NLR, a consortium of more than 280 U.S. universities and private and government laboratories that provides high-performance network services for research and education. XSEDE replaced SONET/OC-192 circuits with 10-GigE (10-gigabit-per-second Ethernet) connectivity to FrameNet for all core XSEDE service provider sites, and has a dedicated 10-GigE circuit from NLR between backbone nodes in Chicago and Denver. As with the predecessor TeraGrid service, all XSEDE participants have full site-to-site connectivity, including broadcast capability, between all the service provider sites.
The XSEDE networking group, led by Winkler (of Argonne National Laboratory near Chicago), spent the past few months switching the backbone infrastructure of the TeraGrid to the newer, more cost-effective technology. Each of the core XSEDE sites made the switch at a different time to minimize the impact on users. "As far as I know," said Winkler, "nobody noticed that we did the change, and in my estimation that's a job well done."
The changed infrastructure provides flexibility to easily extend the XSEDE network as additional sites are included. "FrameNet has the advantages of a private network along with a rich set of connectivity options," added Winkler. "The footprint of FrameNet nodes across the country makes it feasible to extend the XSEDE network backbone to other cities and to add capacity to existing sites."
"National LambdaRail is pleased to provide high-performance networking connectivity for the XSEDE partners and the National Science Foundation," said Peter O'Neil, NLR vice president of research. "NLR was founded with the goal of supporting data-intensive computational science and research, and now, with long-term financial support in place, we are re-committed to this mission."
In coming months, XSEDE networking staff will implement performance monitoring services that will make it easier for researchers to obtain optimum network performance. In addition, networking staff will research alternative connection strategies, including the ability to take advantage of dynamic circuits on demand.
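The article does not describe XSEDE's monitoring tooling, but one basic building block of network performance monitoring is measuring round-trip latency between sites. A minimal, hypothetical sketch of such a probe, using TCP connection-establishment time as a rough latency proxy (the hostname and port are illustrative, not real XSEDE endpoints):

```python
import socket
import time


def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a TCP connection handshake to `host:port`.

    Connect time is a rough proxy for round-trip latency between
    two endpoints; real monitoring systems use dedicated tools,
    but this illustrates the idea.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0


# Hypothetical usage -- a monitoring service would probe each
# service-provider site periodically and record the results:
# latency = tcp_rtt_ms("gateway.example-site.org")
```

A production service would probe continuously, store time series, and alert on regressions; this sketch only shows the measurement primitive.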
The eight core XSEDE service-provider sites directly affected by the network transition are: Indiana University, National Center for Supercomputing Applications, National Institute for Computational Sciences, National Center for Atmospheric Research, Pittsburgh Supercomputing Center, Purdue University, San Diego Supercomputer Center, and Texas Advanced Computing Center.
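As a back-of-the-envelope illustration of what "full site-to-site connectivity" implies, eight sites form n(n-1)/2 = 28 distinct site pairs. A quick sketch (site abbreviations are informal, for illustration only):

```python
from itertools import combinations

# The eight core XSEDE service-provider sites, abbreviated informally.
sites = ["IU", "NCSA", "NICS", "NCAR", "PSC", "Purdue", "SDSC", "TACC"]

# Full site-to-site connectivity means every pair of sites can reach
# each other directly: n * (n - 1) / 2 pairs for n sites.
pairs = list(combinations(sites, 2))
print(len(pairs))  # 28 site-to-site paths among 8 sites
```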
More about XSEDE: https://xsede.org.
More about NLR: http://www.nlr.net.
XSEDE, the most advanced, powerful, and robust collection of integrated digital resources and services in the world, is a single virtual system that scientists can use to interactively share computing resources, data, and expertise. The five-year, $121 million project is supported by the National Science Foundation, and it replaces and expands on the NSF TeraGrid project.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address this, we present a novel federation model that enables end users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls "Climate in a Box," a system it describes as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined these latency issues for CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013 |
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013 |
Program provides cash awards up to $10,000 for the best open-source end-user applications deployed on 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.