October 04, 2012
MANNHEIM, Germany, Oct. 4 — A total of 150 high-performance computing (HPC), cloud and big data users from academia and industry, along with technology and service providers, gathered in Mannheim on September 24 and 25 for the third international ISC Cloud’12 conference. Once again, cost, performance and security were high on the agenda for new users.
Another refreshing addition to this year’s program was the wide range of end-user experiences and lessons learned in designing, building, managing and using clouds to meet research and big data processing demands. The opening keynote was delivered by Bob Jones of CERN, who gave an overview of CERN’s big data requirements and the progress of Helix Nebula, the EU Science Cloud. The project is shaping up nicely as its current two-year pilot phase continues. CERN is one of the three flagship users, alongside the European Molecular Biology Laboratory (EMBL) and the European Space Agency (ESA). The three demand-side partners were chosen expressly for the scope of their research and computing requirements.
The ISC Cloud general chair, Wolfgang Gentzsch, opened the tightly paced conference by highlighting ISC Cloud as an important community-building initiative for HPC in the cloud, and by sharing initial results from his UberCloud Experiment, a project that brings primary stakeholders together to promote the adoption of HPC in the cloud and, with it, to deliver the intended benefits of innovation and increased competitiveness to small and mid-size enterprises.
Attendees found it valuable to be part of a crowd closely interested in this increasingly visible segment of HPC. A survey conducted during the conference revealed that more than 85 percent of attendees considered the topics and issues presented at Cloud’12 very relevant, 43 percent were motivated to attend by the extensive conference program, and close to 66 percent appreciated the opportunity to take part in the vendor-user dialogues. The evening reception in the vineyard of Dr. Bürklin-Wolf in the famous Pfalz wine region offered a further opportunity to network with other participants and to continue the thematic discussions with the experts.
According to Wolfgang Gentzsch, the steering committee this year had a much larger choice of projects to present as research and industry best practices than in the previous two years. “In 2010, we had to rely mostly on vendor case studies and I was quite disappointed myself about the result. But in 2011, we saw more cloud providers, such as SGI Cyclone, Penguin POD and Bull extreme factory, as well as European Commission-funded cloud projects and early adopters in industry, allowing us to select great speakers with hands-on experience on real applications in the cloud,” said Gentzsch. “We, as organizers, therefore expect increasing interest in next year’s ISC Cloud conference.”
The highlights of this year’s conference were the Helix Nebula cloud project and the industry success stories in the cloud, spanning rendering, climate, screening and big data. Engineering software in the cloud, touching on new on-demand, pay-per-use licensing models from vendors such as Ansys, CD-adapco, ESI and Simulia, also caught attendees’ interest. Finally, the novel BoF sessions on reference architecture, applications and software in the cloud, and data transport gave attendees the opportunity to break into three groups for lively discussions. For the BoF session summaries, please contact our PR manager, Nages Sieslack, at firstname.lastname@example.org.
Please find attached some images from the conference. For the full selection, please visit http://goo.gl/HhTZb.
About ISC Cloud’12
Organized by ISC Events, the ISC Cloud’12 conference brought together leading experts on HPC in the cloud from around the world, who presented first-hand experience with a focus on compute- and data-intensive applications, their resource needs in the cloud, and strategies for implementing and deploying cloud infrastructures. The next conference will take place in September 2013. Please visit http://isc-events.com/ for information on the next conference and other events.
Source: ISC Cloud'12
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to handle computational demands at peak times that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls ‘Climate in a Box,’ a system it describes as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of running CFD modeling in the cloud by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can make use of these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.