October 11, 2010
The last year alone has produced a swell of news about universities taking steps to implement private cloud solutions, both to manage the growing complexity driven by the data deluge and to improve operational, cost, and environmental efficiency. Often these efforts begin at the departmental level and branch outward after proven success. However, for large universities with diverse inter-school departments like Harvard Medical School, implementing private clouds has to be highly orchestrated due to the mosaic of individual needs for computation-driven research.
Harvard Medical School (HMS), in its efforts to become more efficient, has been taking a close look at cloud computing to solve some of its big data issues, particularly managing the petabytes produced by the network of hospitals and research centers within the HMS system.
While HIPAA compliance prevents the school from permitting patient data to flow in and out for analysis, Harvard’s healthcare center is crunching data from next-generation sequencing projects as well as high-resolution imaging. This, coupled with sporadic demands for computational resources across its network of partner institutions, required a move toward consolidation.
Both before the centralization of its IT and since the move to a private cloud, HMS has maintained a comprehensive high-performance computing infrastructure to accommodate the large number of researchers from within the university system as well as from outside partner hospitals and healthcare institutions. All of this is managed by Dr. Marcos Athanasoulis, the IT director for Harvard Medical School.
This is no trivial exercise in IT management; as IT director, Athanasoulis leads the development of HPC infrastructure, supports research in healthcare and the life sciences, and oversees student computing initiatives. With such a tall order on the compute and management levels, it is no surprise that centralizing as many aspects of on-site infrastructure as possible has become a priority.
The View from the Top
In Sramana Mitra's recent detailed interview with the IT director, the often distinct threads of research and private cloud business models were woven into a complete view of how university systems can put their infrastructure to practical research and business use. Mitra posed several questions about HMS's efforts to extend healthcare and life sciences research into the cloud to improve IT efficiency, on both the research and cost levels, highlighting the school’s recent cloud successes.
In the interview, Athanasoulis revealed insights not only about cloud computing for healthcare research at a large medical school, but also how private clouds can be operated as core strategic business models—even for a “non-enterprise” use case like Harvard Medical School.
The motivation behind the cloud program was not unlike the impetus for other research centers that have a large number of applications and codes running from different research groups. As Athanasoulis explained, “It is not efficient for everyone to deploy infrastructure independently and manage it as it grows. It is more cost effective to have centralized infrastructure. In essence, we provide HPC capabilities in terms of research software, storage in the petabyte range, and other services related to helping them get research done, which includes everything from looking at modeling and simulations to trying to find the cure for cancer.”
In terms of the implementation, Athanasoulis noted that what started as a small pilot with modest funding has proved successful. “We have gone from the cloud having something like a few hundred processors to over 2,000 processors today.” The school has also just been awarded an NIH grant of roughly $4 million to expand its internal cloud.
Strategically speaking, HMS partnered with Platform Computing to help manage the complexities of its private cloud, with IBM for its BladeCenter servers, and with Cisco for network fabric provisioning. The university also turned to a smaller company, Isilon, to manage its high-performance network-attached storage (NAS).
As Athanasoulis stated, in order to make its numerous research initiatives more efficient, the school began investigating clouds around five years ago, before the term “cloud” was tossed around as freely as it is today. At that point, Harvard Medical School was weighing the benefits of a heterogeneous computing environment.
IT leaders at HMS decided that since they had a number of disparate researchers running many different codes and applications, they needed to create an infrastructure that would also allow faculty to gain additional capacity without any further hardware expenditure for individual labs. The cloud would, in theory, allow the school to sell spare capacity instead.
Athanasoulis detailed the model of the internal cloud on a business level, explaining how the school permits faculty, staff, or departmental researchers to purchase nodes that “become part of the cloud, so if they want to purchase 500 CPUs worth of capacity that gets put into the cloud, they get guaranteed access to that capacity. What this means is that when they want to use it they can have it. They don’t need to preempt any work that’s running on the cloud at that point of time. But if their capacity is idle, then other people’s jobs can run on that idle piece of hardware.” Researchers can also “take away” their own nodes and infrastructure if they desire and use these as standalone servers in their labs.
Harvard’s model for private clouds is a functioning example of both a cloud business model and a research model. With such an approach, HMS supports the research of its scattered departments by delivering resources on demand on a more or less pay-as-you-go basis; the only difference is that the “customers” in this model are using university funds to pay into a university-supported effort.
Carrying the HMS Example into the Private Sector
While the HMS private cloud example can teach a number of lessons to other large universities with overwhelming data demands and the IT management staff to govern such an undertaking, carrying this over into the enterprise context gets tricky. This becomes even trickier when private sector R&D firms are dealing with the same types of data that HMS is—life sciences information. For many life sciences companies, compliance issues are a potential show-stopper (as Bruce Maches explores in depth on a regular basis).
So while universities can look to Harvard Medical School as an example, the question is how life sciences and biomedical researchers who are not affiliated with university systems can take this case study and run with it. A better question still: is the cloud model, particularly at the hybrid level, mature enough to allow such dynamic use of IT resources?
In addition to posing some pointed questions for the HMS director of IT, Mitra discussed the interplay between enterprise and private sector use of clouds for research and development. She notes that there are challenges preventing an easy interplay between research efforts and infrastructure, particularly the ability to move between public and private clouds. For non-university researchers, life sciences research is hindered by compliance issues, which means that use of public clouds, even for bursting, is not possible without a more focused effort to create a seamless path between the two varieties of cloud.
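The compliance constraint described above amounts to a gating rule in any hybrid setup: regulated data can never burst to a public cloud, while unregulated work may burst when private capacity is saturated. A minimal sketch of such a routing gate, with entirely hypothetical data classes and thresholds (nothing here reflects HMS's actual policy):

```python
# Hypothetical data classifications that HIPAA-style rules would keep private.
SENSITIVE = {"patient_record", "phi_image"}

def route(data_class: str, private_queue_depth: int, burst_threshold: int = 100) -> str:
    """Decide where a job runs in a hybrid cloud: compliance first, capacity second.

    Returns "private" or "public". data_class and burst_threshold are
    illustrative assumptions, not a real compliance taxonomy.
    """
    if data_class in SENSITIVE:
        return "private"  # regulated data never leaves the private cloud
    if private_queue_depth > burst_threshold:
        return "public"   # burst non-sensitive work when the private cloud is saturated
    return "private"      # default: keep work on owned capacity
```

In practice the hard part is not this decision but what Mitra identifies next: making data movement, security, and interfaces seamless enough that the "public" branch is actually usable.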
In Mitra’s view, “The trend that we spot in the healthcare vertical is that private clouds will have a big role to play given the security and legal considerations involved in dealing with healthcare data. Life sciences would be more open to cloud bursting using public clouds once data security and access speed issues are dealt with and once interfaces become easier for researchers. The bridging of private and public cloud seems to be another area that is a blue-sky market for healthcare, especially in life sciences research.”
While HMS offers a solid case study, the cloud will need to reach a certain level of maturity before this model can be broadly adopted, according to Athanasoulis, Mitra, and others. Still, it is worthwhile to watch this still-evolving case study to see what miracles of modern medicine might come tumbling out of this particular cloud.