October 05, 2010
Tom Stachura from Intel attended IDF 2010 in San Francisco this year, where he helped demonstrate an ideal scenario for HPC in the cloud: what is most frequently referred to as “bursting” to gain additional capacity. The demo highlighted what is probably the strongest use case of all for many HPC users: those who have outgrown their local cluster capacity and need to burst beyond it to secure much-needed resources.
In the demonstration, “10 GbE iWARP was used for the local cluster as the performant low-latency fabric. Mainstream 10 GbE was used in the cloud as it provides the dynamic virtualization and unified networking features required for virtual data centers.”
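To make the bursting pattern concrete, here is a minimal Python sketch of the overflow decision a site scheduler might make. The 512-core local capacity and the provision_cloud_nodes() callback are illustrative assumptions, not details from Intel’s demo.

    LOCAL_CORES = 512  # assumed capacity of the local iWARP-connected cluster

    def schedule(jobs, provision_cloud_nodes, cores_per_node=8):
        """Place each job locally while capacity lasts; burst the rest to the cloud."""
        local_free = LOCAL_CORES
        for job in jobs:
            if job["cores"] <= local_free:
                job["placement"] = "local"   # stays on the low-latency fabric
                local_free -= job["cores"]
            else:
                # Overflow: rent just enough virtualized 10 GbE nodes to cover it.
                nodes_needed = -(-job["cores"] // cores_per_node)  # ceiling division
                provision_cloud_nodes(nodes_needed)
                job["placement"] = "cloud"
        return jobs

    # e.g. schedule([{"cores": 400}, {"cores": 200}], lambda n: print(n, "cloud nodes"))

The point of the pattern is that tightly coupled work stays on the low-latency local fabric, while overflow capacity is rented only for the jobs that exceed it.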
As more networking-centered companies see value in the cloud for users of high-performance computing resources, one can expect further developments to make the move to cloud-based HPC resources less threatening on the security front. Although Intel is not the first company that comes to mind in this arena, the demonstration (not to mention the handy imagery provided) shows that there are changes on the horizon that could benefit those who work between the physical and virtual spaces.
As one might imagine, since it’s Intel’s demo, there is certainly some serious product marketing going on, but it’s nonetheless an important demonstration of the increasing (relative) ease of migrating certain workloads into a public cloud.
Full story at The Server Room
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
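As a rough illustration of what such a federation layer does, the Python sketch below places tasks across a pool of heterogeneous providers through a single interface. The provider names and capacities are hypothetical, and the model described in the paper is considerably richer.

    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        free_cores: int

    def federate(tasks, providers):
        """Greedily place each (task_id, cores) task on the first provider that fits."""
        placements = []
        for task_id, cores in tasks:
            for p in providers:
                if p.free_cores >= cores:
                    p.free_cores -= cores
                    placements.append((task_id, p.name))
                    break
            else:
                placements.append((task_id, "queued"))  # no federated capacity left
        return placements

    # e.g. a campus cluster plus two clouds, in the spirit of the UberCloud runs
    pool = [Provider("campus", 128), Provider("cloud-a", 512), Provider("cloud-b", 256)]
    print(federate([("pillar-sweep-01", 64), ("pillar-sweep-02", 300)], pool))

The aggregation step is the essential idea: the end-user submits against the federated pool rather than against any one machine.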
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational demand that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so with technologies that deliver affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
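For readers who want a feel for the paradigm, here is a minimal vector-add sketch using the pyopencl bindings: the same OpenCL kernel source can be dispatched to a CPU or a GPU device, which is exactly the portability the heterogeneous model promises. The example is generic, not taken from AMD’s article.

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()   # picks whichever device is available, CPU or GPU
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)  # one work-item per element

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)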