April 06, 2011
The University of Texas has set about building its third data center, which officials expect will open later this year. Like the other two, this one will hold almost three petabytes of data, a capacity well matched to the genomic research project it is slated to handle.
The university system is home to the MD Anderson Cancer Center, which, like other organizations untangling genomic-driven problems, generates and requires quick access to incredible amounts of data.
The center, which is tackling cutting-edge work at the intersection of genomics and cancer research, will require significant number-crunching capability. IDG reported that the center will create the world's largest HPC cluster dedicated to cancer research.
Lynn Vogel, CIO of MD Anderson in Houston, says this effort is being fueled by an exceptionally large private cloud: on the order of 8,000 processors and a half-dozen shared large-memory machines, with hundreds of terabytes of data storage attached.
As IDG reported, while the research center’s “general server infrastructure uses virtualization, the typical foundational technology for cloud, this specialized research environment doesn’t. Rather, the organization uses an AMD-based HPC cluster to underpin the research cloud.” Researchers tap into the resource through a SOA-based web portal aptly named ResearchStation.
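The access pattern described here — a service-oriented portal fronting a batch HPC cluster — can be sketched in rough form. The snippet below is purely illustrative: the service name, payload fields, and `build_job_request` helper are assumptions for the sake of example, not details of ResearchStation's actual interface.

```python
import json

def build_job_request(app, cores, input_path):
    """Assemble a service-style job-submission payload, the kind a
    SOA portal might forward to the scheduler on a backing HPC cluster.
    All field names here are hypothetical, not ResearchStation's."""
    return json.dumps({
        "service": "hpc.submit",   # hypothetical service endpoint name
        "application": app,        # analysis code to run on the cluster
        "cores": cores,            # requested slice of the processor pool
        "input": input_path,       # data location on the scale-out store
    })

# A portal client would serialize a request like this and hand it to
# the service layer, shielding researchers from the scheduler itself.
request = build_job_request("genome-align", 256, "/projects/sample42")
print(request)
```

The point of such a layer is exactly the one the article implies: researchers interact with a simple service request rather than with the cluster's batch system directly.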
Vogel noted that the 8,000-processor HPC cluster at the heart of the private cloud is already operating at 80-90% of capacity, as did its predecessor, which weighed in at 1,100 processors. On the storage front it will use an HP IBRIX system that supports extreme scale-out, he explained.
Interestingly, the group behind the research did briefly consider some public cloud alternatives, but there were problems that extended beyond the usual suspects when dealing with patient data. According to Vogel, “we’ve found on performance, access and in the management of that data, going to a public cloud is more risky than we’re willing to entertain—and we’re just not comfortable with the cloud given the actionable capability of a patient should there be a breach.”
Vogel also noted that public cloud providers are missing an important piece: an understanding of the complexity of the center's data and goals. He says, “As much as public cloud providers would like us all to believe, this is not just about dumping data into a big bucket and letting somebody else manage it.”
Full story at IDG
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address this, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.