September 12, 2011
I would like to take this opportunity to tell you about Xyratex. We’ve been developing data storage products for over 25 years and deliver enterprise storage solutions to the leading storage and server companies, including Dell, EMC, HP, IBM and Network Appliance. Xyratex has been ranked as the #1 supplier of OEM disk storage systems for the past 3 years in a row, supplying 19% of all worldwide enterprise OEM data storage (over 3,000 Petabytes shipped last year). With approximately 2,000 employees based in development and manufacturing facilities throughout Europe, North America and Asia, Xyratex is well positioned to meet the needs of our customers.
Xyratex has a long history of designing, developing and manufacturing modular, scalable, high-density data storage solutions for server and storage OEMs. Our products are generally aimed at the enterprise market where the need for reliability, availability and serviceability is important to the viability of the overall solution.
We’ve been successfully executing on our strategy to complement our storage systems business by providing our OEMs with a portfolio of data storage solutions to address the needs of high growth markets. The implementation of this strategy commenced nearly 4 years ago with the introduction of the OneStor™ family of Integrated Application Platforms. These platforms enable our OEM customers to integrate software and storage into a single solution. The next phase of our strategy was to provide an integrated storage solution with both hardware and software content that addressed the needs of a high growth market.
Early on we observed that the High Performance Computing (HPC) market was a dynamic opportunity at the very core of global innovation. As we engaged with customers we identified a clear need for a better data storage solution with improved performance and reliability. We discovered that the way data storage was being implemented at many HPC sites was unduly complicated in terms of initial installation, performance optimization and ongoing management. Users had to contend with days and/or weeks of tweaking to get the system up and running stably. After this initial installation period was complete, the ongoing management of the system was also complicated by the lack of coherent management tools.
The HPC data storage market provided an ideal opportunity for Xyratex to deliver innovation in terms of performance, availability and ease of management based on Xyratex’s proven enterprise-class storage application platform technology. Xyratex made a significant investment in addressing these needs. With the acquisition of ClusterStor Inc. in mid-2010, Xyratex acquired many of the leading experts in the Lustre® community, including Peter Braam, the inventor of Lustre. Lustre is recognized as the world's #1 parallel file system and is designed for I/O performance and scaling beyond the limits of traditional storage technology. With this expertise, we’ve been able to establish a truly world-class clustered file system development and support organization.
Our investment did not stop there. We now have nearly 150 engineers working on our HPC program; we have developed a purpose-built high-density storage platform that is optimized for both performance and availability, and we have developed a new management framework to address the complexity issues we identified in HPC. We believe that this investment, allied to our extensive core capabilities in storage technology, has enabled us to deliver an HPC data storage solution that provides a superior combination of capability, performance and ease of use: the ClusterStor™ 3000.
The ClusterStor 3000 is an integrated rack-scale Lustre storage solution, engineered for HPC environments, that provides unprecedented density of capacity and bandwidth. It uses half the footprint, while providing twice the end-to-end bandwidth, of traditional solutions. This highly integrated subsystem delivers exceptional performance, reliability and availability in the industry’s densest storage solution, at over 2 Petabytes per rack.
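As a rough plausibility check on the density figure above, the quoted "over 2 Petabytes per rack" can be reconstructed from drive and enclosure counts typical of this class of system circa 2011. All parameters in this sketch are assumptions for illustration; the text itself states only the per-rack total.

```python
# Back-of-envelope rack density check. Every parameter below is an
# assumption (not from the text): circa 2011, 3 TB nearline drives and
# high-density ~84-slot enclosures were typical for systems of this class.

DRIVE_TB = 3                # assumed capacity per drive, in TB
DRIVES_PER_ENCLOSURE = 84   # assumed slots per high-density enclosure
ENCLOSURES_PER_RACK = 8     # assumed enclosures fitted in one rack

def rack_capacity_pb() -> float:
    """Raw capacity per rack in petabytes (using 1 PB = 1000 TB)."""
    return DRIVE_TB * DRIVES_PER_ENCLOSURE * ENCLOSURES_PER_RACK / 1000

# 3 TB x 84 drives x 8 enclosures = 2016 TB, i.e. just over 2 PB,
# consistent with the figure quoted in the text.
```

Under these assumed parameters the arithmetic lands at about 2.0 PB raw, matching the quoted figure; usable Lustre capacity after RAID and file system overhead would of course be lower.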
Key to the ClusterStor 3000’s design is its unique scale-out storage architecture, which starts with the consolidation of Lustre servers, RAID controllers and disk enclosures (delivered as three separate components in traditional systems) into a single storage subsystem. ClusterStor’s scale-out storage architecture facilitates configurations from terabytes to tens of petabytes and from 2.5 gigabytes per second to 1 terabyte per second of actual Lustre file system throughput, delivering the best performance and capacity in the industry.
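The throughput range quoted above implies a linear scaling model: if each consolidated building block contributes a fixed slice of Lustre bandwidth, aggregate throughput grows in direct proportion to the number of blocks deployed. A minimal sketch of that model, assuming (this is an assumption, not a stated specification) that the 2.5 GB/s lower bound corresponds to a single building block:

```python
# Hedged sketch of the linear scale-out throughput model implied by the
# text. The per-unit figure is an assumption: the text quotes a range of
# 2.5 GB/s to 1 TB/s, and we take 2.5 GB/s as the contribution of one
# building block, with aggregate bandwidth scaling linearly in unit count.

UNIT_THROUGHPUT_GBPS = 2.5  # assumed Lustre throughput per building block

def aggregate_throughput_gbps(units: int) -> float:
    """Linear scaling: total throughput = unit count x per-unit rate."""
    return units * UNIT_THROUGHPUT_GBPS

# Under this model, the quoted 1 TB/s (1000 GB/s) upper bound would
# correspond to 1000 / 2.5 = 400 building blocks.
units_for_1_tbps = int(1000 / UNIT_THROUGHPUT_GBPS)
```

The point of the model is that capacity and bandwidth are added together: each additional unit brings its own servers, controllers and disks, so there is no shared bottleneck that would flatten the curve as the system grows.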
ClusterStor Manager is a management framework that is included with the ClusterStor 3000. ClusterStor Manager’s distributed and tightly integrated architecture provides a new class of management ease that has not been seen before in Lustre administration. The linear scaling approach, coupled with simplified management, allows users to begin leveraging their Lustre storage cluster in a fraction of the time of a traditional solution.
ClusterStor’s architecture delivers three interrelated benefits to the HPC environment: linear performance scalability, ease of installation and management, and enhanced storage system reliability at scale.
I’d like to invite you to learn more about Xyratex and the ClusterStor 3000 by visiting us at SC’11 (booth 104) or our website www.xyratex.com. The ClusterStor 3000 will be available through our OEM partners in late fall.