February 07, 2012
Feb. 7 — The Workshop on Science Applications and Infrastructure in Clouds and Grids (http://www.ogf.org/SAICG) will take place in Oxford, England, March 15-16, 2012.
Science in general continues to make increasing use of advanced computing methods to process and visualize data, to perform simulations for comparison with expensive or difficult experiments, to extend the reach of theory beyond accessible experimental ranges, and to mine results from large collections of complex data. The "Science Applications and Infrastructure in Clouds and Grids" workshop to be held in conjunction with Open Grid Forum's OGF 34 meeting will address many of these important topics.
Cyberinfrastructures and e-infrastructures are being used to carry out intensive computations and data processing in ways that support individual researchers, and also in ways that enable collaborations between researchers. In addition to traditional grid computing methods, clouds are increasingly being used to broaden and extend the range of tools used to meet demands for computing and data services.
Previous workshops in this series, as described below, have been used to explore the high-performance range of cloud and grid applications and to discuss science agency uses of clouds and grids. The purpose of this workshop is to investigate cloud and grid framework software efforts and applications in greater detail, with focus on the following questions:
We invite prospective participants to submit brief abstracts, on the order of one paragraph, on any of the related topics from the above list to the workshop organizers, and to request special topics for consideration if so inclined. We are also interested in presentations on forefront applications and/or framework infrastructures useful in clouds and grids in support of science application areas. These will be considered for acceptance as a short (roughly 20- to 30-minute) presentation at the workshop, to be followed by an optional short position paper to be published in the workshop report.
This workshop is a follow-on in the series started by two previous workshops: High Performance Applications of Cloud and Grid Tools, held in April 2011, and Science Agency Uses of Clouds and Grids, held in July 2011.
The deadline for submission of abstracts is Feb. 24, 2012.
The workshop will be free to attend, but we ask that attendees register beforehand. The workshop is expected to be scheduled for Thursday, March 15, possibly extending to the morning of Friday, March 16, depending on the number and quality of submissions.
Funded by the UK Engineering and Physical Sciences Research Council (EPSRC), supported by the e-Research centre at the University of Oxford (OeRC), and organised by the Science and Technology Facilities Council (STFC). We acknowledge the role of the US Department of Energy Office for Advanced Scientific Computing Research, Internet2 and the SIENA project in supporting previous workshops in this series.
Source: Open Grid Forum
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. To address these challenges, we present a novel federation model that enables end-users to aggregate heterogeneous resources to solve large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle peak computational loads that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types, including both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.