September 10, 2012
Starting with the good news: we are now looking at about seven years of successful implementation and deployment of cloud computing. Seven years! Although there seems to be plenty of hype in this young and still immature field, that hype lives mostly in the press and in the layperson's view. In contrast, our science, engineering, and business communities are moving into clouds in big steps, driven by benefits such as virtualization and easy access, and accelerated by an ever-increasing number of cloud use cases, success stories, cloud start-ups, and established IT firms offering cloud services. Many users are not even aware that the services they use today sit right in the cloud; they simply enjoy cloud benefits, such as business flexibility, scalable IT, reduced cost, and on-demand, pay-per-use resource availability, at their fingertips. So far so good!
Looking closer at the current, growing cloud offerings and the use of clouds in research and industry, we anticipate a whole set of barriers to cloud adoption. To name a few major ones: lack of trust in service providers, caused mainly by security concerns; the attitude of 'never change a running system'; painful legal regulations when data crosses political boundaries; existing software licensing models and costs; and the need to secure intellectual property and other corporate assets. Some of these issues are currently being addressed by the UberCloud Experiment.
And another cloud challenge arises on the horizon, beyond the current state of the mega-providers' monolithic clouds. As more and more cloud service providers, with richer and deeper services, crowd the cloud market, how do I get my data out of one cloud to continue processing it in another, to support, e.g., a workflow or failover? How does an independent service provider (or cloud broker) interconnect different services from different cloud providers most efficiently? Such scenarios are common, for example, with federated (Web) services, which consist of service components sitting in different clouds. How do I manage such a cloud workflow? How do I monitor, control, and manage the underlying cloud infrastructure and the complex applications running there? And how far can I get with the least manual intervention, while still taking user requirements and service level agreements into account?
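One common answer to the cross-cloud data question is a thin, provider-agnostic storage layer that a workflow or broker can program against. The sketch below is purely illustrative: the `CloudStore` interface, the in-memory stand-in, and the `migrate` helper are hypothetical names invented here, not the API of any real provider SDK.

```python
from abc import ABC, abstractmethod


class CloudStore(ABC):
    """Minimal provider-agnostic object-storage interface (hypothetical)."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(CloudStore):
    """Stand-in for a real provider backend, used here for illustration."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def migrate(key: str, source: CloudStore, target: CloudStore) -> None:
    """Copy one object across clouds so a workflow (or a failover)
    can continue processing on the target provider."""
    target.put(key, source.get(key))


# Example: a workflow produced results in cloud_a and continues in cloud_b.
cloud_a = InMemoryStore("cloud_a")
cloud_b = InMemoryStore("cloud_b")
cloud_a.put("results.csv", b"step-1 output")
migrate("results.csv", cloud_a, cloud_b)
```

A real broker would of course add authentication, retries, and transfer scheduling per provider, but the design point stands: workflows that only see the abstract interface are not locked to any single cloud.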
These important topics are covered in a new book, Achieving Federated and Self-Manageable Cloud Infrastructures: Theory and Practice (2012), edited by Massimo Villari, Ivona Brandic, and Francesco Tusa. In 20 chapters, written exclusively by renowned experts in their fields, the book thoroughly discusses the concepts of federation in clouds, resource management and brokerage, new cloud middleware, monitoring in clouds, and security concepts. The text also presents practical implementations, studies, and solutions, such as cloud middleware implementations and their use, monitoring in clouds from a practical point of view, enterprise experience, energy constraints, and applicable solutions for securing clouds.
And that's what makes this book so valuable. For the researcher, it contributes to the open research areas in federated clouds mentioned above. For the practitioner, it shows how to develop and operate these cloud infrastructures more effectively. And for the user, it provides real use cases demonstrating how to build, operate, and use federated clouds, based on the authors' own experience, with practical insight and guidance, lessons learned, and recommendations.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational demand at peak times that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges, and opportunities, afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so with technologies that deliver affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.