September 02, 2008
FREMONT, Calif., Sept. 2 -- 3PAR
"Enterprise datacenter managers continually seek to decrease complexity and increase utilization. To this end, virtualization technologies such as thin provisioning are fast becoming 'must have' features to realize greater efficiencies in the virtualized datacenter," said Brad Nisbet, research manager of storage systems at IDC. "As a leader in bringing software-based thin provisioning to the open-systems market, 3PAR's endeavor to build thin technologies into hardware is a logical progression that will increase the efficiencies of enterprise organizations."
Thin Built In Design
The 3PAR Gen3 ASICs within the T-Class arrays feature a Thin Built In design to increase capacity utilization while maintaining high service levels. This design incorporates detection of allocated but unused capacity ("zero-detection" capability) into the 3PAR Gen3 ASIC to offer a silicon-based mechanism for fat-to-thin volume conversions. These fat-to-thin volume conversions are intended to boost capacity utilization by removing allocated but unused space from traditional volumes. 3PAR is the first in the industry to commercially ship storage systems with fat-to-thin capability designed into the hardware architecture of its arrays.
The Thin Built In architecture of the T-Class arrays was designed to preserve service levels and prevent disruption to production workloads during migration of "fat" volumes from other storage platforms to new "thin" volumes on the InServ. When fat-to-thin volume conversions take place in specialized silicon, controller CPU and memory resources are not diverted away from application workloads. This averts the negative performance impact of a software-based fat-to-thin implementation.
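The zero-detection idea behind fat-to-thin conversion can be sketched in software terms. This is a hypothetical illustration only: on the T-Class the mechanism runs in the Gen3 ASIC silicon, and the block size, function name, and sparse-map representation below are all assumptions for the example, not 3PAR's actual design.

```python
# Hypothetical model of zero-detection during a fat-to-thin migration.
# Allocated-but-unused (all-zero) blocks are detected and simply not
# copied into the thin volume, reclaiming the wasted space.

BLOCK_SIZE = 16 * 1024  # illustrative granularity, not 3PAR's actual block size

def fat_to_thin(fat_volume: bytes, block_size: int = BLOCK_SIZE) -> dict:
    """Copy only non-zero blocks; return a sparse {block_index: data} map."""
    zero_block = bytes(block_size)
    thin = {}
    for i in range(0, len(fat_volume), block_size):
        block = fat_volume[i:i + block_size]
        if block != zero_block[:len(block)]:  # the "zero-detection" step
            thin[i // block_size] = block
    return thin

# A "fat" volume: four blocks allocated, only one actually holds data.
fat = bytes(BLOCK_SIZE) + b"data".ljust(BLOCK_SIZE, b"\0") + bytes(2 * BLOCK_SIZE)
thin = fat_to_thin(fat)
print(len(thin))  # 1 -- the three all-zero blocks were reclaimed
```

In hardware, the same comparison happens inline as data moves through the ASIC, which is why controller CPU and memory are not consumed by the conversion.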
InServ T-Class arrays featuring the 3PAR Gen3 ASIC with the Thin Built In design are available today. Fat-to-thin volume conversions are not yet supported in software; 3PAR is developing the additional functionality to enable them on the T-Class arrays with the next release of the 3PAR InForm Operating System.
3PAR designed the InServ family of arrays to deliver high levels of performance and consolidation simply and affordably, so that customers don't have to overprovision capacity or resort to complex administration to increase performance and improve utilization. To demonstrate the power of the new InServ T-Class, 3PAR has posted record-setting SPC-1 results in which the 3PAR InServ T800 achieved a total of 224,989.65 SPC-1 IOPS, an SPC-1 Price-Performance of $9.30/SPC-1 IOPS, and a total ASU capacity of 77,824 gigabytes(1). With these results, the InServ T-Class more than doubles the performance of the InServ S-Class to become the fastest single-system storage array as measured by SPC-1 results on file with the SPC.
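The published price-performance figure implies a total price for the tested configuration, which can be back-computed from the numbers quoted above (simple arithmetic on the published SPC-1 figures; the SPC-1 full disclosure report is the authoritative source for the actual tested price):

```python
# Back-of-the-envelope check of the published SPC-1 result.
spc1_iops = 224_989.65         # total SPC-1 IOPS
price_per_iops = 9.30          # USD per SPC-1 IOPS

total_price = spc1_iops * price_per_iops
print(f"Implied total configuration price: ${total_price:,.0f}")  # roughly $2.09 million
```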
The SPC (http://www.storageperformance.org/) is a vendor-neutral standards body focused on the storage industry. SPC benchmark results such as the SPC-1 are intended to provide a source of comparative storage performance information that is objective, relevant, and verifiable. SPC benchmarks are designed to be vendor and platform independent and are applicable across a broad range of storage configurations and topologies. Any vendor may sponsor and publish an SPC benchmark result provided their tested configuration satisfies the requirements of the appropriate SPC benchmark specification. The SPC-1 benchmark uses a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business-critical applications.
An IBM System p5 595 server was used to drive the SPC-1 benchmark load to the 3PAR T-Class. "The IBM System p5 595 provides exceptional scalability, making it superb to drive a high-end storage benchmark," said Scott Handy, vice president of worldwide strategy for IBM Power Systems. "The combination of IBM Power Systems with the 3PAR T-Class enables customers to meet the performance demands of their mission-critical UNIX applications."
The InServ T-Class arrays feature the only single-system storage architecture to report 224,989.65 IOPS in a published SPC-1 result, which was achieved at 83 percent capacity utilization and without complex configuration or performance tuning. According to results on file with the SPC, this makes the InServ T-Class the only modular array capable of delivering performance in excess of 224,000 IOPS right out of the box, without performance-enhancing techniques such as "short stroking," a common practice in which vendors leave a portion of each disk's physical capacity unused in order to speed disk performance.
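Reading the utilization figure as delivered ASU capacity over configured physical capacity, the physical capacity of the tested configuration can be estimated. This reading is an interpretation for illustration only; the SPC-1 full disclosure report defines the exact capacity terms.

```python
# Estimate of physical capacity implied by the published figures,
# assuming utilization = ASU capacity / physical capacity.
asu_capacity_gb = 77_824   # total ASU capacity from the result above
utilization = 0.83         # 83 percent capacity utilization

physical_gb = asu_capacity_gb / utilization
print(f"Implied physical capacity: ~{physical_gb:,.0f} GB")
```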
"With more than 1.6 million jobs and 23 million unique visitors to our site each month, performance is critical to our business, but so is simplicity and ease of use," said Ali Shahzad, storage architect at CareerBuilder.com, the largest online job site in the United States. "The 3PAR T-Class offers the kind of performance we require, right out of the box."
Innovative 3PAR InSpire Architecture Featuring the 3PAR Gen3 ASIC
3PAR's unique and tightly clustered InSpire Architecture was designed to ensure high and predictable levels of performance for all workloads -- even under failure conditions -- as well as high utilization rates for purchased resources. Central to the InSpire design is a high-bandwidth, low-latency backplane that unifies cost-effective, modular, and scalable components into a highly available and autonomically load-balanced cluster.
The 3PAR Gen3 ASIC in each 3PAR Controller Node communicates and moves data between controllers across this passive, full-mesh backplane. Each application workload is distributed and shared across all system resources in a massively parallel fashion. This approach differs substantially from other quality-of-service schemes based on the purchase and ongoing management of dedicated (not shared) and often underutilized system resources.
The 3PAR Gen3 ASIC was also designed to alleviate performance concerns and cut traditional array costs by allowing the InServ to deliver mixed workload support. With the InServ, transaction- and throughput-intensive workloads run without contention on the same storage resources. This is made possible through parallelizing data movement (with the 3PAR Gen3 ASIC and associated Data Cache) and metadata processing (using Intel CPUs and associated Control Cache) within each Controller Node.
In addition, the 3PAR Gen3 ASIC supports 3PAR Fast RAID 5, a unique capability through which 3PAR customers have reported achieving high levels of performance with 33 percent less storage capacity. The abundant memory bandwidth and built-in RAID 5 XOR engine within the 3PAR Gen3 ASIC allow 3PAR's Fast RAID 5 to deliver performance levels comparable to RAID 1 without RAID 1's higher capacity overhead.
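The "33 percent less storage capacity" claim follows from RAID arithmetic. The worked illustration below assumes a 3 data + 1 parity RAID 5 set; the set size is an assumption for the example, not a statement of 3PAR's configuration.

```python
# Raw capacity needed to protect a given amount of usable data.
usable = 100.0                 # arbitrary units of usable capacity

raid1_raw = usable * 2         # mirroring: every block stored twice
raid5_raw = usable * 4 / 3     # 3 data + 1 parity: one parity block per three data blocks

savings = 1 - raid5_raw / raid1_raw
print(f"RAID 5 (3+1) needs {savings:.0%} less raw capacity than RAID 1")  # 33%
```

The performance catch is that RAID 5 writes require parity recalculation, which is why offloading the XOR computation to dedicated silicon matters.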
Building Block for Cloud Computing
As organizations build out their virtualized infrastructures to support the delivery of enterprise IT as a utility service via cloud and self-service computing models, they are turning to server virtualization, blade servers, and utility storage technologies. With its distinct architectural advantages, Thin Built In hardware, and superior performance, the InServ T-Class is purpose-built to meet the needs of these virtualized datacenters.
"With mounting interest in cloud and self-service computing as delivery models for enterprise IT as a utility service, it's increasingly important for organizations to build cost-effective and sharable virtualized IT infrastructures based on utility computing architectures," said David Scott, CEO of 3PAR. "With its unique, 'Thin Built In' architecture, the T-Class is a storage building block designed to do just this."
A wide array of companies are enabling a virtualized platform for cloud and self-service computing with 3PAR Utility Storage, including Brocade, Data Domain, Emulex, IBM, Microsoft, Oracle, QLogic, Red Hat, Riverbed, Symantec and VMware. A 3PAR T-Class partner quote sheet is available at www.3par.com/documents/3PAR-tcpe-qs-08.0.pdf.