January 31, 2013
HOPKINTON, Mass., Jan. 31 – EMC today announced support for 4 terabyte drives for EMC Isilon's industry-leading scale-out NAS solutions. With this enhancement to its X-Series and NL-Series products, EMC Isilon further extends the inherent benefits of its scale-out NAS archiving solutions — ease of use, highly scalable capacity and performance, auto-management and self-healing — by delivering capacity of up to 20 petabytes in a single volume, providing 33% more capacity and utilizing 30% less power per rack.
Enterprises across a wide spectrum of vertical markets are seeking more efficient and cost-effective ways of building and managing large, rapidly growing information repositories. Rapid growth of file-based data has increased the need for highly scalable and efficient archive storage solutions that meet business, legal and regulatory compliance requirements in markets such as government, healthcare and media & entertainment. With these drivers top of mind across industries, efficient and cost-effective disk-based archive solutions have never been more important.
With EMC Isilon, archival information benefits from the robustness of the OneFS operating environment's proven capabilities for protecting and optimizing the flow of information within an organization. Enterprises capture these benefits without sacrificing application performance or the specialized data protection required for long-term archive data retention via archive applications from EMC or other vendors.
EMC Isilon also offers increased resilience at scale with faster drive rebuild times, superior data protection and industry-leading efficiency. The Isilon scale-out cluster rebuilds active and archive data after a drive failure much faster than traditional systems: in less than one day, compared to multiple days or even weeks. Isilon's N+4 data protection through FlexProtect file striping helps customers reduce the risk of data loss and improve overall availability even as the cluster grows. As data archives get larger, the significance of these factors increases proportionally, while EMC Isilon scales smoothly from terabytes to petabytes.
New support for 4 terabyte drives enables more capacity in the same footprint while providing the standard Isilon utilization rates of more than 80%. For example, per-rack capacity is now 1440TB with NL400 nodes, a 360TB increase in the same footprint. With 4 terabyte configurations, customers need fewer nodes to meet the same total capacity goals: only seven NL400 nodes with 4 terabyte drives, as opposed to ten with 3 terabyte drives, are needed to reach the same one petabyte of raw capacity.
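The figures above follow from straightforward per-node arithmetic. A minimal sketch, assuming 36 drive bays per NL400 node (the number implied by the per-rack totals quoted here; check the product spec sheet for your configuration):

```python
import math

# Assumed drive-bay count per NL400 node; this value is inferred from the
# per-rack figures in the announcement, not stated there directly.
DRIVE_BAYS_PER_NODE = 36

def node_capacity_tb(drive_tb):
    """Raw capacity of one node, in TB, for a given drive size."""
    return DRIVE_BAYS_PER_NODE * drive_tb

def nodes_for_capacity(target_tb, drive_tb):
    """Minimum number of nodes needed to reach a raw capacity target."""
    return math.ceil(target_tb / node_capacity_tb(drive_tb))

# Per-rack raw capacity, assuming ten nodes per rack:
print(10 * node_capacity_tb(4))  # 1440 TB with 4 TB drives
print(10 * node_capacity_tb(3))  # 1080 TB with 3 TB drives (360 TB less)

# Node counts for one petabyte (1000 TB) raw:
print(nodes_for_capacity(1000, 4))  # 7 nodes with 4 TB drives
print(nodes_for_capacity(1000, 3))  # 10 nodes with 3 TB drives
```

Note these are raw figures; usable capacity depends on the protection level and the 80%+ utilization rate cited above.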
EMC Isilon X Series and NL Series products with 4 terabyte drives are currently available worldwide.
EMC Corporation is a global leader in enabling businesses and service providers to transform their operations and deliver IT as a service. Fundamental to this transformation is cloud computing. Through innovative products and services, EMC accelerates the journey to cloud computing, helping IT departments to store, manage, protect and analyze their most valuable asset — information — in a more agile, trusted and cost-efficient way.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational demand that exceeds their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of them.
Financial institutions are the private-sector organizations least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving towards private cloud services – and doing so at great cost.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013
Program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.