February 21, 2013
SAN FRANCISCO, Calif., Feb. 21 – Riverbed Technology, the application performance company, today announced Whitewater Operating System (WWOS) version 2.1 with support for Amazon Glacier and Google Cloud Storage. WWOS 2.1 increases operational cost savings and takes advantage of the high data durability of cloud storage services, improving disaster recovery readiness. In addition, Riverbed introduced larger virtual Whitewater appliances that allow customers to protect larger data sets and improve disaster recovery capabilities, along with a management console for administering multiple Whitewater appliances from a single window. These enhancements to the Whitewater cloud storage product family help enterprises use cloud storage to meet critical backup requirements, modernize data management strategies, and overcome challenges created by data growth.
"Once created, most unstructured data is rarely accessed after 30-90 days. Leveraging the cloud for storing these data sets makes a lot of sense, particularly given the attractive prices of storage services designed for long-term such as Amazon Glacier," said Dan Iacono, research director from IDC's storage practice. "The ability of cloud storage devices to cache locally and provide access to recent data provides real benefits from an operational cost perspective to avoid unnecessary transfer costs from the cloud."
Cloud Storage Ecosystem Expansion
Riverbed is offering customers choice and flexibility for data protection by adding Amazon Glacier and Google Cloud Storage to its Whitewater cloud storage ecosystem. Whitewater customers using Amazon Glacier now have immediate access to recent backup data while enjoying Amazon pricing as low as one cent per gigabyte per month -- roughly one-eighth the cost of other currently available cloud storage offerings.
In addition, the extremely high data durability offered by Amazon cloud storage services and the ability to access the data from any location with an Internet connection greatly improve an organization's disaster recovery (DR) readiness.
Larger Virtual Whitewater Appliances
With the introduction of the larger virtual Whitewater appliances, Riverbed allows customers that prefer virtual appliances to protect larger data sets and simplify disaster recovery. The new virtual Whitewater appliances support local cache sizes of four or eight terabytes and integrate seamlessly with leading data protection applications as well as all popular cloud storage services. To streamline management for enterprise-wide deployments, WWOS 2.1 includes new management capabilities that enable monitoring and administration of all Whitewater devices from a single console, with one-click drill-down into any appliance.
"We have been successfully using Riverbed Whitewater appliances for backup with Amazon S3 in our facilities in Germany, Switzerland, and the U.S. since June 2012," said Drew Bartow, senior information technology engineer at Tipper Tie. "We were eager to test the Whitewater 3010 appliance with Amazon Glacier and the total time to configure and start moving data to Glacier was just 24 minutes. With Glacier and Whitewater we could potentially save considerably on backup storage costs."
"The features in WWOS 2.1 and the larger virtual appliances drastically change the economics of data protection," said Ray Villeneuve, vice president corporate development, at Riverbed. "With our advanced, in-line deduplication and optimization technologies, Whitewater shrinks data stored in the cloud by up to 30 times on average -- for example, Whitewater customers can now store up to 100 terabytes of backup data that is not regularly accessed in Amazon Glacier for as little as $2,500.00 per year. The operational cost savings and high data durability from cloud storage services improve disaster recovery readiness and will continue to rapidly accelerate the movement from tape-based and replicated disk systems to cloud storage."
More than 22,000 organizations worldwide depend on Riverbed to understand, optimize and consolidate their IT infrastructure through solutions that overcome performance issues caused by distance, distributed computing, and ever-increasing amounts of data. As IT organizations embark on strategic initiatives to virtualize, consolidate and migrate workloads into cloud environments, users are moved farther from their data. Slow applications, slow file transfers and inefficient websites can negatively impact the performance and success of these initiatives. Riverbed transforms IT performance by providing solutions spanning WAN optimization, edge-VSI, application-aware network performance management, application performance management, application delivery controllers, web content optimization (WCO), and cloud data protection. By providing the broadest portfolio of performance solutions that deliver anywhere, any-application optimization, Riverbed enables organizations to increase productivity and efficiency while enhancing business resilience and controlling costs.
Riverbed delivers application performance for the globally connected enterprise. With Riverbed, enterprises can successfully and intelligently implement strategic initiatives such as virtualization, consolidation, cloud computing, and disaster recovery without fear of compromising performance. By giving enterprises the platform they need to understand, optimize and consolidate their IT, Riverbed helps enterprises to build a fast, fluid and dynamic IT architecture that aligns with the business needs of the organization.