October 04, 2010
Last week at the R Systems-sponsored HPC 360 event in Champaign-Urbana, Illinois, the focus was on the manufacturing sector, with an expected emphasis on the value of modeling and simulation in driving competitiveness and growth. A secondary thread examined how simulation-centered companies can turn to utility or on-demand solutions to extend their computational resources and improve efficiency.
While there were a number of manufacturing companies present, only a few were actually making use of virtualized or on-demand resources, although several were weighing their options. Among the attendees in the “investigative” category was Matt Dunbar, chief software architect for SIMULIA, the simulation brand of Dassault Systemes, which produces the Abaqus finite element analysis product suite.
Software research and development arms like SIMULIA require vast computational resources to enhance their product lines. But what happens when a company like Dassault Systemes runs out of power and cooling capacity, leaving developers waiting in long queues? And what happens when on-site resources cannot deliver the needed 24/7 capability without forcing architects to endure long waits while their projects sit on hold?
Software architects eager to move forward with research and development face a tough choice: wait in a long queue, particularly for post-processing, or consider the viability of sending at least some workloads off-site.
As Dunbar stated, “doing actual batch simulation in the cloud is reasonably straightforward but doing 3D graphics post-processing is something that remains a question mark for us. There are a number of ways we can do that, but right now we’re trying to decide how best to do that.” This is a difficult decision because software architects are either faced with long waits for their own workstations or with what might be a performance hit from using utility resources instead.
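For readers unfamiliar with the split Dunbar describes, the sketch below illustrates what “batch solve off-site, graphics at the desk” can look like in practice. This is not SIMULIA’s actual workflow; the hostnames, file names, and the particular Abaqus invocation are illustrative assumptions only.

    # Minimal sketch of the batch/post-processing split (illustrative only).
    # Assumes an on-demand node reachable over SSH with Abaqus installed;
    # the host, paths, and core count are hypothetical.
    import subprocess

    REMOTE = "user@ondemand-node.example.com"   # hypothetical on-demand resource
    MODEL = "bracket.inp"                       # hypothetical Abaqus input deck
    JOB = "bracket_run"

    # Stage the input deck on the remote resource.
    subprocess.run(["scp", MODEL, f"{REMOTE}:{MODEL}"], check=True)

    # Launch the solver in batch mode on the remote node; it runs unattended.
    subprocess.run(["ssh", REMOTE, f"abaqus job={JOB} input={MODEL} cpus=8"], check=True)

    # Later, pull the output database back for interactive 3D post-processing,
    # which for now still happens on a local graphics workstation.
    subprocess.run(["scp", f"{REMOTE}:{JOB}.odb", "."], check=True)

The batch half of this pattern maps cleanly onto on-demand resources; it is the final, interactive step that raises the performance questions discussed below.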
Dunbar gave an overview presentation at HPC 360 in which he discussed some of the challenges the company faces as it weighs moving post-processing into the cloud under growing capacity constraints, and he spent a few moments discussing his key points with us.
In Matt Dunbar’s view, “you have to come up with performance that’s equivalent to the workstation or come up with a way to handle post-processing,” which echoes the sentiment of a number of other companies that rely on 3D processing to drive growth and further development.
Posted by Nicole Hemsoth - October 04, 2010 @ 7:37 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and discusses a range of overarching issues related to HPC-specific cloud topics in her posts.