February 27, 2012
Cloud computing has become ubiquitous, spawning hundreds of new web-based services, platforms for building applications, and new types of businesses and companies. However, the freedom, fluidity and dynamism that cloud computing provides also make it particularly vulnerable to cyber attacks. And because the cloud is a shared infrastructure, the consequences of such attacks can be extremely serious.
Now, with funding from the Defense Advanced Research Projects Agency (DARPA), researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) aim to develop a new system that would help the cloud identify and recover from an attack almost instantaneously.
Typically, cyber attacks force the shutdown of the entire infiltrated system, regardless of whether the attack is on a personal computer, a business website or an entire network. While the shutdown prevents the virus from spreading, it effectively disables the underlying infrastructure until cleanup is complete.
Professor Martin Rinard, a principal investigator at CSAIL and leader of the Cloud Intrusion Detection and Repair project, and his team of researchers aim to develop a smart, self-healing cloud computing infrastructure that would be able to identify the nature of an attack and then, essentially, fix itself.
Their approach is to examine the normal operations of the cloud and build a model of how it should look and function, then draw on that model so the cloud can recognize when an attack is underway and return to normal as quickly as possible.
"Much like the human body has a monitoring system that can detect when everything is running normally, our hypothesis is that a successful attack appears as an anomaly in the normal operating activity of the system," said Rinard. "By observing the execution of a 'normal' cloud system we're going to the heart of what we want to preserve about the system, which should hopefully keep the cloud safe from attack."
Rinard believes that a major problem with today's cloud computing infrastructures is the lack of a thorough understanding of how they operate. His research aims to identify systemic effects of different behavior on cloud computing systems for clues about how to prevent future attacks.
"Our goal is to observe and understand the normal operation of the cloud, then when something out of the ordinary happens, take actions that steer the cloud back into its normal operating mode," said Rinard. "Our expectation is that if we can do this, the cloud will survive the attack and keep operating without a problem."
By closely examining the operations of the entire cloud and using the resulting model to detect deviations, Rinard's system should allow the cloud to identify and recover from new attacks on its own, a capability current systems lack.
"By monitoring for behavioral deviations that are indicative of malicious activity rather than existing signatures, our system can detect and recover from previously unknown attacks," said Dr. Stelios Sidiroglou-Douskos, a research scientist at CSAIL.
For more information, see: http://groups.csail.mit.edu/pac/crs/.
About The Lab
The Computer Science and Artificial Intelligence Laboratory – known as CSAIL – is the largest independent laboratory at MIT and one of the world's most important centers of computer science and information technology research. The lab has played a major role in the technology revolution of the past 50 years. Currently, CSAIL is helping to invent the future in fields like organic computing, big data, artificial intelligence, computer security, and educational technology. CSAIL makes its home in the Frank Gehry-designed Stata Center on the MIT campus, and will celebrate its 50th anniversary in 2013.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on flow in microfluidic channels.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 16, 2013 |
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by executing a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
May 10, 2013 |
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013 |
Program provides cash awards up to $10,000 for the best open-source end-user applications deployed on 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – of Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.