February 13, 2012
Feb. 13 — The International Workshop on Cloud Technologies for High Performance Computing (CloudTech-HPC) will be held in Pittsburgh, Pa., September 10-13, 2012, in conjunction with the 41st International Conference on Parallel Processing (ICPP 2012).
High-performance computing (HPC) involves the design of system architectures, the tuning of software environments, and the construction of mathematical models and numerical solution techniques for solving scientific, social-scientific, and engineering problems. It often requires massive computing and storage resources to perform large-scale analyses and experiments. In general, scientists also have to reorganize or redesign the computing models of their experiments in order to make use of an HPC system.
Cloud computing can address many of these concerns. By means of cloud technologies, users can access massive computing and storage resources and customize their computing environments to offer a variety of services. Moreover, under the pay-per-use model, users can scale the computing infrastructure up or down according to their requirements and budget without deploying physical infrastructure of their own. They can obtain their own virtualized HPC cluster in a self-service, dynamically scalable, and fully configurable environment. Such a computing paradigm provides a flexible high-performance computing environment.
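As a loose illustration of that self-service, pay-per-use model (not part of the workshop announcement itself), the sketch below shows how a user might request and later release a handful of virtual compute nodes through a cloud provider's API. The AWS SDK for Python (boto3), the AMI ID, the instance type, and the node count are all illustrative assumptions; any IaaS API with equivalent calls would serve the same purpose.

    import boto3  # AWS SDK for Python; placeholder choice of provider/SDK

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Scale up: request four compute nodes on demand. The AMI ID stands in for a
    # site-specific HPC node image; the instance type for a compute-optimized node.
    reservation = ec2.run_instances(
        ImageId="ami-12345678",
        InstanceType="c4.8xlarge",
        MinCount=4,
        MaxCount=4,
    )
    node_ids = [inst["InstanceId"] for inst in reservation["Instances"]]
    print("Provisioned virtual cluster nodes:", node_ids)

    # ... configure the nodes and run the HPC job against the virtual cluster ...

    # Scale down: release the nodes when the job finishes, so charges stop with usage.
    ec2.terminate_instances(InstanceIds=node_ids)

The point of the sketch is only that provisioning and releasing a virtualized cluster becomes an API call rather than a hardware purchase, which is what makes the budget-driven scaling described above possible.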
However, there is a performance gap between cloud computing and traditional HPC environments, so the performance of high-performance applications running in clouds, compared with traditional HPC infrastructure, remains a concern. The goal of this workshop is to bring together researchers and practitioners in cloud technologies for high performance computing. Please join us in a discussion of new ideas, experiences, and the latest trends in these areas at the workshop.
Scope and Interests
All papers related to cloud technologies for high performance computing are welcome. We cordially invite authors to submit manuscripts from any application domain as long as the core of the manuscript addresses, but is not limited to, the following topics:
Submission Deadline: March 09, 2012
Authors Notification: June 08, 2012
Camera-ready Due: July 06, 2012
Prepare your workshop paper in the standard IEEE double-column format, no longer than 6 pages, as a PDF file. Submit your paper(s) at the EasyChair submission site: https://www.easychair.org/conferences/?conf=cloudtechhpc2012
All submissions should describe original, previously unpublished research that is not currently under review by another conference, journal, or workshop. If a paper is accepted, at least one of the authors must attend the workshop to present the work in order for the paper to be included in the Proceedings of the ICPP-2012 Workshops, which will be published by IEEE CS Press.
Rajkumar Buyya, University of Melbourne, Australia
Franck Cappello, INRIA & UIUC, France
Yeh-Ching Chung, National Tsing Hua University, Taiwan
Manish Parashar, Rutgers, The State University of New Jersey, USA
Ching-Hsien Hsu, Chung Hua University, Taiwan
Kuan-Chou Lai, National Taichung University, Taiwan
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that gives end-users the ability to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational demands at peak times that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
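As a rough illustration of why distance matters (this sketch is not taken from the Bonn study), the snippet below estimates round-trip time to a remote cloud endpoint by timing TCP connection setup. The hostname and port are placeholder assumptions; a real evaluation would measure message latency between compute nodes, for example with an MPI ping-pong benchmark.

    import socket
    import time

    HOST, PORT = "ec2.us-east-1.amazonaws.com", 443  # placeholder remote endpoint

    samples_ms = []
    for _ in range(10):
        start = time.perf_counter()
        # A TCP handshake costs roughly one network round trip.
        with socket.create_connection((HOST, PORT), timeout=5):
            pass
        samples_ms.append((time.perf_counter() - start) * 1000.0)

    samples_ms.sort()
    print("median round-trip estimate: %.1f ms" % samples_ms[len(samples_ms) // 2])

Tightly coupled CFD solvers exchange boundary data every time step, so round-trip numbers like these, rather than raw FLOPS, often decide whether a cloud run is competitive with an in-house cluster.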