October 19, 2010
The Handbook of Cloud Computing, just released by Springer Publishing, provides insights from cloud experts in academia, laboratories, and enterprise, and covers a broad range of issues. From a high-level treatment of the cloud as a concept and practical model to in-depth discussions of high performance computing in the cloud and data-intensive supercomputing applications in virtualized or on-demand environments, it leaves few stones unturned.
A number of books have come to market this year alone that attempt to tackle the complex topic of cloud computing, but most of them, at least on a cursory browse, are either far too general and one-size-fits-all in their approach or extremely niche (i.e., focused solely on CRM, BPM, SOA, etc.).
Books that offer the “big picture” yet still manage to branch out to all applicable areas are not easy to come by. At over 600 pages, supplemented by chapter subheadings that include “Scientific Data Management in the Cloud” and “High Performance Computing on Competitive Cloud Resources,” not to mention a number of case studies, this volume could very well be one of the more valuable publications for HPC cloud folks this year.
The publisher states that what they’ve released is, “a reference book intended for advanced-level students and researchers in computer science and electrical engineering” and that it can also be “beneficial to computer and system infrastructure designers, developers, business managers, entrepreneurs and investors within the cloud computing-related industry.”
The book was edited by Armando Escalante, CTO of LexisNexis Risk Solutions, and Dr. Borko Furht, Chair of the Department of Computer Science and Engineering at Florida Atlantic University.
Full story at Florida Atlantic University
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end-users to aggregate heterogeneous resources and tackle large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational demand at peak times that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can make use of these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.