October 06, 2010
Dr. Robert Graybill, president and CEO of Nimbis Services, took a few moments to discuss the concept of the missing middle with HPC in the Cloud during the R Systems-sponsored HPC 360 event in Champaign-Urbana last week. While this critical mass of users has already been well defined, hearing from a member of the community who has worked to deliver critical resources to that audience certainly has value.
During the first part of our discussion, which is not presented here, Graybill stated that the core concept behind the company was reaching this audience: a group of users who require HPC resources yet lack the ability to secure access, whether due to financial constraints on the hardware and software licensing front, a lack of expertise, or any of the other reasons so often cited to explain why this middle has yet to be fully engaged.
Graybill took the opportunity to brief me on the company’s newest product offering, which is called HPC Workbook. He notes that this “offers the missing middle the opportunity to run their Excel spreadsheets on a supercomputer by simply logging in and with a credit card or PayPal account, have access to virtual computing nodes or physical nodes for a week or day at a time, and suddenly be in an HPC world with the click of a mouse.”
While it might feel a little sacrilegious to use the words "PayPal" and "supercomputer" in the same sentence, and while the relative ease might seem overstated, the company has seen significant traction with its Cloud Mathematica product, which delivers complex software via EC2 or through R Systems as a high-performance computing resource provider.
Graybill has been instrumental in debates about HPC "outreach" and spreading greater capacity to a wider audience. Hearing him speak during his presentation and talking with him during a more casual chat was a highlight of the trip to Champaign-Urbana, although not quite on par with watching my Buckeyes win on Saturday.
Posted by Nicole Hemsoth - October 06, 2010 @ 1:42 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to absorb computational loads at peak times that cannot be contained within their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what they call ‘Climate in a Box,’ a system they note acts as a desktop supercomputer.
May 16, 2013
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud by running a common CFD application on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.