December 02, 2008
FREMONT, Calif., Dec. 2 -- 3PAR
As energy demands have grown, the US Department of Energy has increased investment in cutting-edge research to develop sustainable energy sources such as fusion. The LLNL-NIF is home to the world's largest laser, which was designed to recreate the same fusion energy process that makes the stars shine and provides the life-giving energy of the sun. With the growing importance of alternative energy to the economy, environment, and global political climate, the NIF could not afford to be locked into inflexible and poorly performing storage technologies to support their fusion research program. In building out a storage infrastructure, the NIF sought storage that would not only reduce up-front costs, but also streamline storage administration, increase application performance, improve availability, and ensure superior scalability moving forward.
After evaluating products from traditional storage area network (SAN) vendors, the NIF selected two 3PAR InServ Storage Servers, each with over 100 terabytes of thin-provisioned capacity. Unlike the NIF's previous storage environment -- which did not meet the research program's performance, uptime, and availability requirements -- the 3PAR InServ arrays with 3PAR Thin Provisioning software enabled the NIF to purchase and deploy 60 percent less capacity than comparable products from traditional vendors. At the same time, the capacity reductions made possible by the InServ have collapsed the NIF's storage footprint by 4-to-1, for additional savings in facilities costs and in the energy required to power and cool the datacenter. With 3PAR, the NIF has also increased availability and uptime to 99.999 percent, raised performance levels for mission-critical applications, cut service fulfillment from days to hours, and reduced the time to provision storage from hours to minutes.
"Agility is mission-critical to our fusion research program," said Travis Martin, IT operations lead at LLNL-NIF. "With 3PAR Utility Storage we can support the growing performance and capacity needs of our research as we work to change our world through harnessing the energy of the stars."
The 60 percent reduction in storage capacity and 80 percent increase in administrative efficiency achieved by the NIF were made possible by innovative 3PAR hardware and software such as 3PAR Thin Provisioning, which permits the one-time allocation of virtual capacity while consuming physical capacity only for written data. With the NIF's previous storage environment, capacity had to be purchased and allocated up front, before it was actually required for written data, which led to underutilization and wasted capacity. The autonomic, "set it and forget it" approach of 3PAR Thin Provisioning has also dramatically reduced storage and system administration for the NIF, which has used 3PAR to eliminate imprecise capacity planning. The NIF now provisions storage in a fraction of the time required by its previous environment. As a result of this efficiency gain, the NIF has become significantly more agile in responding to the rapidly changing demands of the country's energy research program.
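The mechanism described above -- promising virtual capacity up front while drawing physical capacity from a shared pool only on first write -- can be modeled in a few lines. This is a toy sketch for illustration only, not 3PAR's implementation; the names `Pool` and `ThinVolume` are invented here.

```python
class Pool:
    """Shared physical capacity backing many thin volumes."""
    def __init__(self, physical_gb):
        self.free_gb = physical_gb

    def allocate(self, gb):
        if gb > self.free_gb:
            raise RuntimeError("physical pool exhausted")
        self.free_gb -= gb


class ThinVolume:
    """Virtual capacity is promised up front; physical blocks are
    consumed from the pool only when a block is first written."""
    def __init__(self, pool, virtual_gb):
        self.pool = pool
        self.virtual_gb = virtual_gb
        self.written = set()  # 1 GB "blocks" that have been written

    def write(self, block):
        if block >= self.virtual_gb:
            raise ValueError("write beyond virtual capacity")
        if block not in self.written:
            self.pool.allocate(1)   # physical capacity consumed only now
            self.written.add(block)


pool = Pool(physical_gb=100)
vol = ThinVolume(pool, virtual_gb=1000)   # 10:1 over-provisioned
vol.write(0)
vol.write(1)
vol.write(0)   # a rewrite consumes no additional physical capacity
```

The over-provisioning ratio is the business point: the administrator allocates once, and physical purchases track actual written data rather than up-front forecasts.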
The InServ's native tiered-storage support has also allowed the NIF to reduce capacity and administration by consolidating applications with varying service-level requirements onto the same array. Instead of maintaining separate arrays for different storage service levels, the NIF has been able to mix premium Fibre Channel and lower-cost Enterprise-class Serial ATA (SATA), or "Nearline," drives within the same array. In addition, the NIF has used 3PAR Fast RAID 5 to achieve performance that approaches that of RAID 1, but with 33 percent less capacity and one-third fewer disks to house, cool, and manage. With 3PAR Utility Storage, the NIF requires only one storage administrator to oversee its entire storage environment.
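The capacity figures above can be sanity-checked with back-of-envelope arithmetic, assuming a 3+1 RAID 5 layout (three data disks per parity disk), which is one plausible reading of the claim:

```python
def raw_needed_raid1(usable_tb):
    # Mirroring: every block is stored twice.
    return 2 * usable_tb

def raw_needed_raid5(usable_tb, data_disks=3):
    # 3+1 layout: one parity disk's worth of capacity per three data disks.
    return usable_tb * (data_disks + 1) / data_disks

usable = 90                       # TB of usable capacity, arbitrary example
r1 = raw_needed_raid1(usable)     # 180 TB raw under RAID 1
r5 = raw_needed_raid5(usable)     # 120 TB raw under RAID 5 (3+1)
savings = 1 - r5 / r1             # 1/3: one-third less raw capacity
```

For equal usable capacity, RAID 5 (3+1) needs 4/3 the usable space in raw disks versus RAID 1's 2x, and (4/3) / 2 = 2/3, i.e., one-third fewer disks -- consistent with the 33 percent figure.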
"It can be argued that it is absolutely critical to the future of our country and our planet that we pursue the development of sustainable, alternative energy sources," said Jeffrey Hill, senior research analyst in the data management and storage practice at Aberdeen. "For this reason, research programs in this high-growth field are looking for innovative ways to accelerate development and effectively scale unpredictable and changing data storage requirements more cost-effectively. Next-generation utility storage from 3PAR is ideally suited to meet these needs."
"As vigorous proponents of sustainability, energy efficiency, and green storage technologies, we are very pleased that the NIF has chosen 3PAR Utility Storage to accelerate their research into harnessing the power of the stars to reduce our fossil fuel dependence here on Earth," said David Scott, president and CEO of 3PAR.
Researchers from the Suddhananda Engineering and Research Centre in Bhubaneswar, India developed a job scheduling system, which they call Service Level Agreement (SLA) scheduling, intended to provision cloud resources with service guarantees comparable to those of an in-house system. They combined it with an on-demand resource provisioner to keep virtual machine utilization optimized.
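The summary above does not spell out the paper's actual mechanism, so as a rough illustration only, here is one way SLA-driven scheduling with on-demand provisioning could fit together: jobs carry deadlines (the SLA), and a new VM is provisioned only when no existing VM can finish a job before its deadline. Everything here -- the `Vm` class, the `schedule` function, the earliest-deadline-first ordering -- is an assumption for the sketch, not the authors' design.

```python
class Vm:
    def __init__(self):
        self.busy_until = 0.0   # time at which this VM becomes free


def schedule(jobs, now=0.0):
    """jobs: list of (runtime, deadline) pairs.
    Greedy earliest-deadline-first assignment; a new VM is provisioned
    on demand only when no existing VM can meet the job's deadline."""
    vms = []
    assignments = []
    for runtime, deadline in sorted(jobs, key=lambda j: j[1]):  # EDF order
        vm = min(vms, key=lambda v: v.busy_until, default=None)
        if vm is None or max(vm.busy_until, now) + runtime > deadline:
            vm = Vm()                       # on-demand provisioning
            vms.append(vm)
        start = max(vm.busy_until, now)
        vm.busy_until = start + runtime
        assignments.append((runtime, deadline, vms.index(vm)))
    return assignments, vms
```

Packing jobs onto the fewest VMs that still satisfy every deadline is what "utilization optimization" amounts to in this toy model: slack deadlines share a VM, tight ones trigger provisioning.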
Experimental scientific HPC applications are continually being moved to the cloud, as covered here from several angles over the last couple of weeks. Among that coverage, CloudSigma co-founder and CEO Robert Jenkins penned an article for HPC in the Cloud in which he discussed the emergence of cloud technologies to supplement the research capabilities of big scientific initiatives like CERN and the European Space Agency (ESA)...
When moving excess or experimental HPC applications to a cloud environment, there will always be obstacles; were that not the case, the cost-effectiveness of cloud-based HPC would rule the high performance landscape. Jonathan Stewart Ward and Adam Barker of the University of St Andrews produced an intriguing report on the state of cloud computing, paying significant attention to the problems the field faces.
Jun 19, 2013 |
Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, trained at San Francisco's Hack Reactor, an institute designed for intensive, fast-paced programming instruction, put together a program based on the N-Queens algorithm implementation by the University of Cambridge's Martin Richards and modified it to run in parallel across multiple machines.
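Richards' well-known formulation counts N-Queens solutions with bitmasks, and splitting the search by the first-row queen placement is a natural way to parallelize it. The Hack Reactor team's actual code and multi-machine distribution layer are not shown here; this is a minimal single-machine sketch in which `multiprocessing` stands in for the cluster.

```python
from multiprocessing import Pool


def solve(n, cols=0, dl=0, dr=0):
    """Classic bitmask N-Queens count: cols/dl/dr mark attacked columns
    and left/right diagonals for the current row."""
    mask = (1 << n) - 1
    if cols == mask:
        return 1
    count = 0
    free = ~(cols | dl | dr) & mask
    while free:
        bit = free & -free          # lowest free column
        free -= bit
        count += solve(n, cols | bit,
                       ((dl | bit) << 1) & mask,
                       (dr | bit) >> 1)
    return count


def branch(args):
    """Count all solutions whose first-row queen is at column bit."""
    n, bit = args
    mask = (1 << n) - 1
    return solve(n, bit, (bit << 1) & mask, bit >> 1)


def parallel_queens(n):
    # One independent subtree per first-row column; sum the partial counts.
    with Pool() as pool:
        return sum(pool.map(branch, [(n, 1 << c) for c in range(n)]))


if __name__ == "__main__":
    print(parallel_queens(8))   # 92 solutions on the classic 8x8 board
```

Because each first-row branch is independent, the same decomposition scales from processes on one machine to workers on many, which is essentially the modification the team made.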
Jun 17, 2013 |
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service and, to that end, is partnering with Verne Global, whose Icelandic datacenter is known for its green computing credentials.
Jun 12, 2013 |
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. In response, IBM released one of what it calls Redbooks, in part to assist institutions in moving high performance computing applications to the cloud.
Jun 06, 2013 |
The San Diego Supercomputer Center launched a public cloud system for area universities, designed specifically to run on commodity hardware with high performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, has opened the system to educational and research users across the University of California.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.