March 06, 2012
IRVINE, Calif., March 6 — Netlist, Inc., a leading provider of high-performance controller-based memory subsystems, today announced that HyperCloud HCDIMM outperforms Load Reduced DIMMs (LRDIMMs) when benchmarked on the latest-generation Intel Xeon Processor E5-2600 motherboard configured with 384GB of system memory. Benchmarks were performed by Computer Memory Test Labs (CMTL), the industry's leading independent test lab for memory module and motherboard compatibility. HCDIMM's distributed architecture reduces data skew and latency, yielding superior memory bandwidth compared to LRDIMM's single-memory-buffer implementation.
"Our independent testing confirms that HCDIMM performance exceeded LRDIMM in the key metrics of Aggregate Memory Bandwidth by 17.5 percent, Time to Copy by 17.6 percent, and Cache Memory Bandwidth by almost 15 percent on a fully populated 24 DIMM, 384GB configured system," said Raji Tannouri, General Manager of CMTL.
"HCDIMM support for the highest capacity and speed configurations enables today's key applications, such as financial trading, big data analytics, virtualization, and simulation workloads including EDA, FEA, CFD, and CED, to perform with optimal efficiency," said Devon Park, Vice President of Marketing for Netlist. "This enables our customers to benefit from increased productivity, resulting in additional advantages such as faster time to market and revenue, lower transaction processing costs, reduced operational expenses (OPEX), and reduced total cost of ownership (TCO). Independent testing from CMTL is important in providing end users and OEMs with third-party validation of the advantages provided by HCDIMM."
Benchmark tests were performed using SiSoftware's Sandra Lite benchmarking suite under identical test conditions: a SuperMicro(R) X9DR6-LN4+ motherboard, dual Intel Xeon E5-2650L processors running at 1.8GHz, and 384GB of either HCDIMM or LRDIMM memory. Critical software applications will see faster run times, resulting in increased revenue generation, quicker time to market, and reduced operational expenses (OPEX) through more efficient hardware, software, and personnel utilization.
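The Sandra suite itself is proprietary, but the kind of metric it reports, such as copy time and sustained memory bandwidth, can be illustrated with a minimal STREAM-style sketch. The snippet below is a simplified, hypothetical illustration (not the Sandra benchmark or Netlist's test procedure): it times repeated large in-memory copies and converts the best elapsed time into a bandwidth figure, the same basic arithmetic behind a "Time to Copy" or "Aggregate Memory Bandwidth" result.

```python
import time

def copy_bandwidth(n_bytes=256 * 1024 * 1024, iters=5):
    """Estimate sustained copy bandwidth (GB/s) by timing bytearray copies.

    Each copy reads n_bytes and writes n_bytes, so 2 * n_bytes of traffic
    crosses the memory bus per iteration. The best (minimum) time over
    several iterations is used to reduce timing noise.
    """
    src = bytearray(n_bytes)
    best = float("inf")
    for _ in range(iters):
        t0 = time.perf_counter()
        dst = src[:]  # full copy of the buffer
        elapsed = time.perf_counter() - t0
        best = min(best, elapsed)
        del dst  # release the copy before the next iteration
    return (2 * n_bytes) / best / 1e9

if __name__ == "__main__":
    print(f"copy bandwidth: {copy_bandwidth():.2f} GB/s")
```

A real memory benchmark would use multiple threads pinned across sockets and buffers far larger than the CPU caches so that cache bandwidth and main-memory bandwidth can be measured separately, which is why Sandra reports "Cache Memory Bandwidth" and "Aggregate Memory Bandwidth" as distinct metrics.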
About Computer Memory Test Labs (CMTL)
CMTL was established in 1996 and has performed over 18,000 memory module compatibility tests, creating an industry standard for memory module and motherboard compatibility certification. Today, it is the leading independent memory compatibility test lab worldwide. CMTL provides independent compatibility testing services to the industry's leading manufacturers of computer memory, microprocessors, chipsets, and motherboards. Once a product has been tested in CMTL's advanced laboratory, it is certified to be functionally compatible with the platform for which it was tested. Platforms may include desktop, workstation, blade, or enterprise-level servers – any device that includes a memory module as part of its construction. For more information, visit www.cmtlabs.com.
About Netlist, Inc.
Netlist, Inc. designs and manufactures high-performance, logic-based memory subsystems for server and storage applications in cloud computing. Netlist's flagship products include HyperCloud, a patented memory technology that breaks traditional memory barriers; the NVvault family of products, which enables data retention during power interruption; EXPRESSvault, a PCI Express backup/recovery solution for cache data protection; and a robust portfolio of high-performance and specialty memory subsystems, including VLP (very low profile) DIMMs and Planar-X RDIMMs.
Netlist develops technology solutions for customer applications in which high speed, high capacity, small form factor, and heat dissipation are key requirements for system memory. These customers include OEMs that design and build tower servers, rack-mounted servers, blade servers, high-performance computing clusters, engineering workstations, and telecommunications equipment. Founded in 2000, Netlist is headquartered in Irvine, Calif., with manufacturing facilities in Suzhou, People's Republic of China. Learn more at www.netlist.com.
Source: Netlist, Inc.