May 19, 2010
Coming up with a list of the top ten security threats and concerns for HPC in the cloud is the easy part. Putting that list in a specific order that reflects an organization's overall goals, however, is an exercise in prioritizing not only security but also performance, governance, risk, and a host of other matters, which can make the task far more complex than it might otherwise appear.
The top-tier security threats and concerns in cloud computing are not a hotly debated topic, because there is already broad consensus on the matter. The priority in which these threats and concerns should be ordered, however, is largely determined by the application, environment, infrastructure, users, and requirements, making it impossible to present a single ordered list that applies across the board.
With that said, the top ten security threats/concerns in cloud computing (with ordering dependent on individual use cases) can be widely recognized as:
1. Environment (which includes systems, resources, infrastructure, location).
2. I/O (which includes access, port assignments, login nodes, Interfaces/APIs).
5. Processes (which includes recovery, monitoring, awareness, auditing, certification).
6. Policies (which includes SLAs, regulatory compliance, sharing, segregation, privileges).
7. Trends (which includes collaborations and partnerships).
8. Risks (natural or unpredictable).
9. Malicious attacks.
10. Code/programming model.
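The article's central point is that the categories above are fixed but their ordering is case-dependent. A minimal sketch of that idea, assuming per-organization importance weights (the category names follow the list above; the weights and the two organization profiles are invented purely for illustration):

```python
def prioritize(categories, weights):
    """Return categories sorted by descending weight for one organization.

    `weights` maps category -> importance score; categories not scored
    default to 0, so a partial assessment still produces a full ordering.
    """
    return sorted(categories, key=lambda c: weights.get(c, 0), reverse=True)


# The eight categories enumerated in the article's list.
categories = [
    "environment", "io", "processes", "policies",
    "trends", "risks", "malicious_attacks", "code_model",
]

# Hypothetical profiles: the scores are illustrative, not prescriptive.
public_sector = {"policies": 10, "processes": 9, "malicious_attacks": 8}
enterprise = {"io": 10, "environment": 9, "code_model": 8}

print(prioritize(categories, public_sector)[:3])
print(prioritize(categories, enterprise)[:3])
```

The same category list produces two very different priority orderings depending on the weights, which is exactly why no single ordered list applies across the board.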
All are viable threats or concerns that can impact performance and results, and all must be taken seriously. What we are now beginning to understand is that many of these factors (such as those noted above) are far more interdependent than earlier anticipated. That interdependence requires us to reconsider some original assumptions about the efficiency, stability, and dependability of a cloud: the transition from a "normal" computing environment to a "cloud" computing environment may not be as trivial as previously thought. For certain applications, the cloud is ready; for others, considerably more work is needed to reach an acceptable outcome.
It is important to note that not enough attention is given to how a computing environment is used to meet the demands of the applications. In this era of optimizing costs while maintaining performance objectives, many organizations look to create a "one answer for all" solution in which a variety of different applications run on the same set of resources. Caution is warranted, however: there are limits to how far an application and its environment can be stretched, and pushing past those limits creates additional risks.
A "universal" environment may work for some classes of applications, but expecting an environment to be swapped "seamlessly" across any kind of application is unrealistic, at least in the near future. Cloud computing environments will go a long way toward a "universal" solution for certain classes of applications; still, we must be practical in recognizing that we are nowhere close to an "all or nothing" solution.
Part of the challenge is that when applications, and the approaches to solving them, are overly generalized, the consequences can reach far beyond our expectations: the environments we operate in were never designed to be used in certain ways, and stretching them creates openings for access that can lead to significant security breaches. We must not push boundaries without a full appreciation of the potential consequences. As noted earlier, the cloud requires that focus be placed, top to bottom, on all aspects: from the application to the environment, processes, policies, and infrastructure.
Let's be clear: we do want to celebrate the progress made over the years, especially in software, hardware, and networking, which has taken us much farther than we might once have believed possible. At the same time, we must recognize that those same computing improvements have also reached experienced, committed hackers and mere thrill seekers intent on challenging security boundaries. In the times we live in, the commitment to disrupt, alter, or inappropriately access a system or its data is as strong as the effort to eliminate such behavior. In addition, an entirely new wave of threats has emerged that is directly tied to the automation built to improve user experience and convenience, i.e., bots. These potential impediments remind us of the need for a more integrated focus on the seemingly disparate aspects of application and environment, closely observing and monitoring how software, hardware, and networking interrelate from beginning to end.
It is important to keep the applications themselves in mind, for some are ready today to run in a cloud computing environment. Our focus, though, is on HPC applications in the "cloud," which involve subtleties (e.g., multiple compilers, specialized batch queuing, message passing over a large number of participating nodes, specialized data requirements) that can significantly expand the challenges found in other applications. This is especially true for scientific applications, which place far greater demands not only on the systems and infrastructure of the computing environment but also on I/O performance.
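The batch-queuing and multi-node message-passing subtleties mentioned above can be illustrated with a hypothetical SLURM batch script. The `#SBATCH` directive syntax is standard SLURM, but the job name, resource counts, module name, and the `cfd_solver` binary are all invented for illustration; a real site would substitute its own.

```shell
#!/bin/bash
# Hypothetical SLURM batch script for an MPI job spanning many nodes.
# Resource numbers and the "cfd_solver" binary are illustrative only.
#SBATCH --job-name=hpc-cloud-demo
#SBATCH --nodes=64               # message passing over a large node count
#SBATCH --ntasks-per-node=8      # MPI ranks per node
#SBATCH --time=02:00:00          # wall-clock limit enforced by the scheduler
#SBATCH --output=%x-%j.out       # per-job log file (job name, job id)

module load openmpi              # site-specific; assumes environment modules
srun ./cfd_solver input.dat      # launch one MPI rank per allocated task
```

Nothing like this machinery exists for a typical single-node enterprise workload, which is part of why HPC in the cloud carries its own distinct set of challenges.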
A major issue of concern is the "trust" required to grant the access needed to complete a service or collaboration. Organizations that possess highly proprietary, sensitive, and profitable codes and data cannot afford to wholeheartedly embrace that inherent trust requirement, because simply too much is at stake. Again, this is a primary reason we believe a new, more comprehensive approach, one focused on the application, environment, and requirements of HPC, must be adopted.
For now, we have a functional list of the primary threats and concerns, but the ordering is highly case-dependent. The ordering that holds true for the public sector will likely differ dramatically from the ordering of the same concerns as they matter to the enterprise.
What are your needs? What is your environment, and how does that change the way you might order the list? Such an ordering exercise can be a beneficial process for the enterprise as it moves into more complicated cloud security debates.