March 06, 2013
SAN JOSE, Calif., March 6 — LSI Corporation today announced that the LSI Nytro WarpDrive family of application acceleration PCIe flash cards has been validated for use with NetApp Flash Accel software. Together, LSI and NetApp are offering complete server flash caching solutions that speed application performance by converting server-based flash into "hot" data cache for critical business applications. LSI Nytro WarpDrive cards are among the first PCIe flash devices to be fully tested and qualified with NetApp's intelligent server caching software.
"Flash memory adoption in the enterprise is a powerful complement to hard-disk-based network storage," said Tim Russell, vice president, Data Lifecycle Ecosystem Group, NetApp. "Deploying flash as a high-speed cache in the server is a simple and cost-effective way to significantly reduce latency and I/O bottlenecks, while providing enterprise-level data protection and manageability for the entire infrastructure. Working with our server cache partners, we're able to offer customers a complete end-to-end, high-speed solution."
LSI and NetApp server caching solutions deliver:
LSI Nytro WarpDrive cards deployed in conjunction with Flash Accel software intelligently place the most frequently accessed or "hot" data on ultra-low latency, high-performance PCIe flash storage. Test results have shown reductions in application and server latency of up to 90 percent and throughput increases of up to 80 percent. By allowing infrequently accessed data to remain on HDD storage, organizations can deploy an economical mix of flash and hard-disk storage, optimizing both cost per IOPS and cost per gigabyte of storage capacity.
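The "hot" data placement described above is, at its core, a server-side read cache: recently accessed blocks are promoted to flash, and the coldest block is evicted when the cache is full. The following Python sketch illustrates the general idea only; it is not the Flash Accel implementation, and the class and names are hypothetical.

```python
from collections import OrderedDict

class HotDataCache:
    """Illustrative read cache: keeps the most recently accessed
    ("hot") blocks on fast flash, evicting the least recently used
    block when capacity is reached."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.flash = OrderedDict()  # block_id -> data, in LRU order

    def read(self, block_id, backend):
        if block_id in self.flash:           # cache hit: serve from flash
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = backend[block_id]             # cache miss: fetch from HDD storage
        self.flash[block_id] = data          # promote the block to flash
        if len(self.flash) > self.capacity:  # evict the coldest block
            self.flash.popitem(last=False)
        return data

# Usage: a tiny backing store and a 2-block flash cache
backend = {1: "a", 2: "b", 3: "c"}
cache = HotDataCache(capacity_blocks=2)
cache.read(1, backend)
cache.read(2, backend)
cache.read(1, backend)          # touch block 1 so block 2 becomes coldest
cache.read(3, backend)          # evicts block 2
print(sorted(cache.flash))      # -> [1, 3]
```

Infrequently accessed blocks never displace hot ones for long, which is why the bulk of the data can stay on economical HDD storage.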
The combined solution helps to reduce datacenter footprint and overall IT costs by delivering the equivalent I/O performance of hundreds of hard drives, while using significantly less power, cooling and physical space. Storage efficiency is also improved by minimizing the number of input/output operations between servers and back-end storage systems, which frees up shared storage resources to handle additional workloads. In addition, the Nytro WarpDrive card's advanced "off-loaded" multiprocessor architecture uses as little as one-quarter of the CPU and memory resources of competing solutions, providing high performance over the life of the product and freeing up these costly resources for other critical applications and workloads.
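The "equivalent I/O performance of hundreds of hard drives" claim follows from simple arithmetic: a PCIe flash card delivers orders of magnitude more random IOPS than a spinning disk. The figures below are assumptions for illustration, not published LSI or NetApp specifications.

```python
# Illustrative only: both IOPS figures are assumed, not vendor specs.
flash_card_iops = 200_000   # assumed random-read IOPS for a PCIe flash card
hdd_iops = 200              # assumed random-read IOPS for one 15K RPM HDD

# Number of hard drives needed to match the flash card's IOPS
hdds_for_equivalent_iops = flash_card_iops // hdd_iops
print(hdds_for_equivalent_iops)  # -> 1000
```

Even with more conservative assumptions, a single card replaces the random-I/O capability of hundreds of drives, along with their power, cooling and rack space.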
"LSI Nytro WarpDrive cards help datacenter managers contend with massive data growth by increasing the speed and responsiveness of critical applications," said Gary Smerdon, senior vice president and general manager, Accelerated Solutions Division, LSI. "The combination of Nytro WarpDrive cards and Flash Accel software allows for an optimized use of flash while extending its significant performance and TCO benefits to any server connected to NetApp storage."
LSI Nytro WarpDrive cards are part of LSI's comprehensive Nytro portfolio of PCIe flash adapters, which also includes the Nytro XD and Nytro MegaRAID product families. Nytro WarpDrive cards range in capacity from 200GB to 1.6TB of MLC or SLC flash memory and use LSI SandForce flash storage processors to deliver award-winning performance, reliability and energy efficiency. Nytro WarpDrive cards use industry-standard LSI drivers with extensive in-box operating system and management support, and are designed for simple, plug-and-play integration into today's low-profile, high-performance system chassis.
LSI Nytro WarpDrive application acceleration cards are currently available from select OEM customers and LSI's worldwide network of distributors, integrators and VARs. NetApp offers Flash Accel software as a free download for NetApp customers.
LSI Corporation designs semiconductors and software that accelerate storage and networking in datacenters, mobile networks and client computing. Our technology is the intelligence critical to enhanced application performance, and is applied in solutions created in collaboration with our partners.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address this, we present a novel federation model that enables end users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where modeling the entire Earth is almost essential to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.