October 27, 2010
A number of universities have been offering virtualization-centered courses for their graduate and undergraduate students, and more are now reaching out to the business and systems administration community to build attendance.
Just recently, Harvard University announced that it too will be providing a hands-on learning opportunity using Amazon's EC2 on its campus beginning in early January.
The seminar will address migrating one’s infrastructure to the cloud in addition to discussing how to virtualize local infrastructure using a range of proprietary and open source software. Other goals include describing in detail the technical aspects of virtualization and what it means in terms of scalability, cost, and performance, thus providing a springboard for attendees to determine if the cloud makes sense for their own specific infrastructure and needs.
Although some basics are covered, the class is not a "beginner's guide to the cloud"; it is aimed instead at CTOs, IT managers, system administrators, and instructors who want to start using the cloud to deploy large technical computing projects. The course's organizers put it simply: "to participate, you should be comfortable with command-line environments."
The hands-on cloud course will be taught by Dr. David Malan, who teaches courses in Harvard’s Computer Science Department as well as other classes in the School of Engineering. In addition to his teaching and research work that is focused on pattern detection within large datasets, Malan is also the founder of startups, including Diskaster.
To get a better idea about how a course can address a disparate audience made up of those from industry and academia, I asked Malan a few questions via email.
HPCc: Can you please provide an overview covering why this particular topic (cloud computing) has been selected? In other words, were you receiving a large number of requests for this topical focus? -- In short, what was the impetus?
Malan: Even though "cloud computing" is perhaps one of the most overused buzz words of late, underlying the trend are some very interesting technologies, including multi-core processing and virtualization. We plan to focus, in very real terms, on what cloud computing actually is so that students exit the course armed with both a conceptual framework and some practical experience.
HPCc: How will you define cloud for attendees? To what degree is virtualization key to the course focus, and to what extent will you offer details about on-demand resources, grid, and the "history" of cloud, i.e., how it spawned from grid and similar on-demand models?
Malan: Although it once represented, often in cartoon form, any network beyond one’s own LAN, "the cloud" now refers to on-demand computational resources whose provision usually relies on virtualization. In more real terms, cloud computing means that users, companies, and even courses can pay for access to servers when and only when they actually need them. Those servers just so happen to be virtual machines (VMs), otherwise known as virtual private servers (VPSes), that live alongside other customers’ VPSes on hardware owned by a third party.
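The economics Malan describes, paying only for hours actually used rather than owning idle hardware, can be made concrete with a small back-of-the-envelope sketch. The prices and usage figures below are purely illustrative assumptions, not quotes from Amazon or any other provider:

```python
# Hypothetical cost comparison: renting on-demand VMs by the hour
# versus buying and maintaining a dedicated server.
# All dollar figures are illustrative assumptions only.

def on_demand_cost(hours_used, hourly_rate):
    """Pay only for the hours the VM actually runs."""
    return hours_used * hourly_rate

def dedicated_cost(purchase_price, monthly_upkeep, months):
    """Pay up front plus ongoing upkeep, regardless of utilization."""
    return purchase_price + monthly_upkeep * months

# Suppose a course needs a server for roughly 200 hours over a
# 4-month semester (assumed workload, for illustration).
cloud = on_demand_cost(hours_used=200, hourly_rate=0.50)
owned = dedicated_cost(purchase_price=2000, monthly_upkeep=50, months=4)

print(f"on-demand: ${cloud:.2f}")   # prints "on-demand: $100.00"
print(f"dedicated: ${owned:.2f}")   # prints "dedicated: $2200.00"
```

The gap narrows, of course, as utilization rises; the point of the sketch is simply that bursty, occasional workloads are where the pay-as-you-go model Malan describes pays off.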
HPCc: There are a number of challenges inherent to the cloud; to what degree will you address these (migration, security, etc.) and which ones seem to be the most problematic/worthy of discussion?
Malan: We will certainly discuss issues that arise in the outsourcing of one's infrastructure. Students' interest and questions will ultimately dictate how much time we're able to spend on each topic.
HPCc: I noticed that in your brief description of topics to be covered Amazon's EC2 is the only public cloud platform you'll be discussing. Will you touch on others or do you have an arrangement with Amazon?
Malan: We just so happen to have worked with Amazon EC2 in the past and can speak to our own experiences with their infrastructure. EC2 is also perhaps the most versatile of the available options today and should certainly be vetted by anyone considering cloud services.
The course runs from 9 a.m. to 5 p.m. on January 6th, with a tuition cost of $950. Details can be found here.
Posted by Nicole Hemsoth - October 27, 2010 @ 1:21 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.