August 08, 2011
CHICAGO, Aug. 8 -- The Computation Institute at the University of Chicago and Argonne National Laboratory announced this week that Globus Online, the service for secure, reliable data movement, signed up over 1,000 users in its first six months of service.
At present, the service counts nearly 1,400 registered users, mostly high-performance computing (HPC) system users and other researchers who need to transfer large amounts of data among systems. Together, these users have moved over 30 million files totaling nearly 400 terabytes of data.
"We're thrilled to see the research community embrace Globus Online so quickly," said Ian Foster, Computation Institute director. "With exploding data volumes and an increasingly collaborative work environment, researchers everywhere will soon require the capabilities traditionally reserved for big-science projects. We can't require every lab to fill up with computers loaded with sophisticated software, and every researcher to become an IT specialist. That's where Globus Online comes in."
"Globus Online is the most beneficial grid technology I have ever seen," said Steven Gottlieb, Distinguished Professor, Indiana University. "I moved 100 files of 7.3 GB each in about 1.5 hours - the same transfer would have taken over 3 days with scp. Globus Online has also made a big difference in convenience - it's much easier to move the files we need where we need them."
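A quick back-of-the-envelope calculation shows what those figures imply for effective throughput. The sketch below assumes decimal units (1 GB = 10^9 bytes) and takes the quoted times at face value; the exact rates in Gottlieb's transfers may have differed.

```python
def throughput_mb_s(total_gb: float, seconds: float) -> float:
    """Effective transfer rate in megabytes per second."""
    return total_gb * 1e3 / seconds

total_gb = 100 * 7.3  # 100 files of 7.3 GB each = 730 GB

# Quoted Globus Online time: ~1.5 hours
globus = throughput_mb_s(total_gb, 1.5 * 3600)
# Quoted scp time: over 3 days
scp = throughput_mb_s(total_gb, 3 * 24 * 3600)

print(f"Globus Online: ~{globus:.0f} MB/s")   # ~135 MB/s
print(f"scp:           ~{scp:.1f} MB/s")      # ~2.8 MB/s
print(f"speedup:       ~{globus / scp:.0f}x")  # ~48x
```

In other words, the quoted numbers correspond to sustained throughput on the order of a gigabit per second, versus a few megabytes per second for serial scp.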
"I routinely have to move hundreds of gigabytes of data, and Globus Online makes it easy," said Jeff Porter, a nuclear scientist and frequent Globus Online user from Lawrence Berkeley National Lab. "There's almost nothing to it, other than specifying your file source and destination."
Globus Online is software-as-a-service (SaaS) that simplifies data movement -- whether between supercomputing facilities or from a facility to a local server or personal computer -- without requiring custom end-to-end systems. Users can fire-and-forget their request and Globus Online manages the entire operation: monitoring performance, retrying failed transfers, recovering from faults automatically whenever possible, and reporting status.
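The fire-and-forget model described above can be sketched in a few lines. Note that the function names and retry policy here are hypothetical illustrations of the general pattern, not the actual Globus Online interface, which users access through the web site.

```python
import random
import time

def attempt_copy(src: str, dst: str) -> bool:
    """Stand-in for a single transfer attempt that may fail transiently
    (e.g., due to a network fault or endpoint outage)."""
    return random.random() > 0.3  # ~70% chance of success per attempt

def fire_and_forget(src: str, dst: str, max_retries: int = 5) -> str:
    """Sketch of a managed transfer: retry failed attempts automatically
    and report final status, instead of making the user babysit the job."""
    for attempt in range(1, max_retries + 1):
        if attempt_copy(src, dst):
            return f"SUCCEEDED after {attempt} attempt(s)"
        time.sleep(0)  # placeholder for backoff between retries
    return f"FAILED after {max_retries} attempts"

print(fire_and_forget("hpc:/data/run42.tar", "laptop:/scratch/run42.tar"))
```

The key design point is that the retry and status-reporting loop lives in the service, so the user submits a request once and only looks at the final outcome.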
Dozens of organizations, including NERSC, the University of California Grid (UCGrid), and ESnet, recommend Globus Online to their users as a preferred file transfer tool.
In addition, Globus Online is a key component of the environment for XSEDE (https://www.xsede.org/), the National Science Foundation project replacing TeraGrid to provide computing resources, data, and expertise for over 10,000 scientists.
"I've used Globus Online to move terabytes of data between TeraGrid sites," said Greg Daues, research programmer at the National Center for Supercomputing Applications (NCSA). "The service is reliable and easy to use, and I look forward to continuing to use it with XSEDE. I've also used the Globus Connect feature to move files from TeraGrid sites to other machines -- this is a very useful feature which I'm sure XSEDE users will want to take advantage of."
Registrations and usage of the free service accelerated after the release of Globus Connect, a feature that makes it easy to perform 'last-mile' file transfers to and from personal laptops and other local machines, even if they are behind a firewall.
"Since Globus Connect was released in April, we've seen our weekly sign-up averages double," said Steve Tuecke, deputy director at the Computation Institute. "Not only that, but regular usage by registered researchers has grown nearly 50 percent."
"Globus Online will (positively) impact the job satisfaction and morale of our users," said Luke Van Roekel, oceanography researcher from Northland College in Wisconsin. "It's stressful and frustrating to return to a long-running download and find out there was a problem. Globus Online removes that pain and restores peace of mind regarding file transfer. With Globus Online, our researchers have one less thing to worry about!"
"File movement is just the first step in our vision for creating a long-term research data lifecycle management solution," notes Foster. "By gradually moving IT functions out of the lab, the research community can over time achieve many of the same benefits that the business world has garnered via outsourcing and cloud computing. Providing advanced data management functionality, without the mundane and costly IT overheads, will make for far more productive researchers."
For more information on Globus Online, visit http://www.globusonline.org/.
About Globus Online
Globus Online is a fast, reliable file transfer service that simplifies the process of secure data movement. Recommended by HPC centers and user communities of all kinds, Globus Online automates the mundane (but error-prone and time-consuming) activity of managing file transfers, whether between supercomputing facilities or from a facility to your local server or laptop. With Globus Online, robust transfer capabilities that were previously available only on expensive, special-purpose systems are now accessible to virtually anyone with an Internet connection and a laptop. Users can fire-and-forget their request and Globus Online will manage the entire operation -- monitoring performance, retrying failed transfers, recovering from faults automatically whenever possible, and reporting status. Globus Online significantly reduces transfer time, with some users reporting movement of terabytes of data in hours. With no custom infrastructure or complex configurations required, Globus Online lets users stay focused on what's really important -- their research. To get started or find out more, visit http://www.globusonline.org.
Source: Computation Institute