November 19, 2010
During SC10 in New Orleans, we had a chance to drop by a number of exhibits to check in with vendors who are improving the HPC ecosystem and, by extension, cloud computing.
Interconnects are, as you can imagine, a rather big piece of this ecosystem that supports HPC and cloud, yet we often don't spend enough time talking about them, and when we do, we tend to focus on the one vendor in this space with the vast majority of the market share, Mellanox.
On Wednesday I dropped by the QLogic booth to have a chat with Joe Yaworski about the interconnects market as a whole and what elements of differentiation there are with such a market share imbalance.
To be more direct, I flat-out asked Yaworski how QLogic was different and what case studies there were to demonstrate that there are variations in performance or other factors.
His response was that QLogic's point of differentiation is that it did not retrofit its products with MPI on top, as others did. In the beginning, InfiniBand was designed to be the datacenter backbone replacement for Ethernet and Fibre Channel; in other words, it had a rich set of features and capabilities that had nothing to do with HPC. Once InfiniBand found its niche in HPC, however, QLogic stepped up to design InfiniBand products that were MPI-targeted from the start, eliminating the hitches introduced by retrofitting. His argument is that QLogic's messaging rate is thereby superior, and that this was the reason the company was chosen for a large-scale implementation at Lawrence Livermore.
Here we have Mr. Yaworski providing more details on the above points…
While on the surface this conversation might seem to have little to do directly with clouds, it is worth noting that there are areas of possible differentiation in this market, and every improvement in interconnects opens the possibility of more finely tuned cloud computing capabilities. Mellanox, for instance, often draws this connection and produces news releases around it, while QLogic tends to steer clear of cloud tie-ins, at least relative to its much larger and more pervasive competitor.
More from Joe on the Livermore connection...
This is an interesting market to watch, especially since the latency problems it needs to solve bear not only on HPC in general, but also on cloud computing capabilities for high-performance computing applications.
Posted by Nicole Hemsoth - November 19, 2010 @ 3:08 AM, Pacific Standard Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and discusses a range of overarching issues related to HPC-specific cloud topics in her posts.