August 25, 2010
The IBM X-Force team, which the company describes as the "premier security research organization within IBM that has catalogued, analyzed and researched more than 50,000 vulnerability disclosures since 1997," seems to me to be very much like the X-Men: a protected group of particularly gifted IT superhero types who are cloistered in majestic, isolated quarters until they are needed by the masses.
A group of freakishly talented protectors who live to hone their gifts in secret, defending us from threats we never see, able to relish the joys and tragedies of their societal burden only from the confines of their own community of special people as the world spins on, oblivious to the dangers that have been squelched while we dreamed and vacuumed our dens.
I could be wrong about that, but I prefer to think of things in these ways so please don't ever tell me otherwise. If nothing else, it makes writing about IBM's take on cloud security much more interesting.
In their seek-and-destroy (well, probably more like discover-and-analyze) missions to protect the cloud that so many have come to appreciate, the X-Force team has released its Trend and Risk Report for the first half of the year, stating that new threats are emerging and that virtualization is a prime target.
Of the cloud, X-Force stated today, "As organizations transition to the cloud, IBM recommends that they start by examining the security requirements of the workloads they intend to host in the cloud, rather than starting with an examination of different potential service providers."
IBM is making a point of warning consumers of cloud services to look past what the vendors themselves claim to offer and to take a much closer look at application-specific security needs. Since security (not to mention compliance and other matters in this sphere for enterprise users) depends on the workload itself, this is good advice. But when vendors, all of whom are pushing their services, discourage such scrutiny by taking a "we'll take care of everything for you" approach, it's no surprise that IBM feels the need to repeat it.
The X-Force team also contributed a few discussion points about virtualization and multi-tenant environments, stating that "as organizations push workloads into virtual server infrastructures to take advantage of ever-increasing CPU performance, questions have been raised about the wisdom of sharing workloads with different security requirements on the same physical hardware."
On this note, according to the team's vulnerability reports, "35 percent of vulnerabilities impacting server class virtualization systems affect the hypervisor, which means that an attacker with control of one virtual system may be able to manipulate other systems on the same machine."
The concept of an evil supergenius attacking the hypervisor and creating "puppets" out of other systems is frightening indeed, and there have been some examples of this occurring, although not frequently enough (or on a large enough scale) to generate big news. Still, this concern alone could be enough to keep cloud adoption rates down if there are not greater efforts to secure the hypervisor against attack.
IBM recommends that enterprises plan their own security strategy with careful attention to application requirements rather than relying on vendor support alone.
I recommend a laser death ray.
Posted by Nicole Hemsoth - August 25, 2010 @ 8:14 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in her posts.