August 09, 2010
Supercomputing in the cloud and rent-a-cluster services could save the day for the economy, according to an article in the San Francisco Chronicle this morning. Companies and research sites with limited funds to invest in new hardware, especially the big iron required for an evolving range of HPC-type applications, are turning to rented infrastructure instead of up-front capital purchases. This is good news for the vendors, naturally, but it could also prove enormously valuable for those who need the capacity without the initial expense.
As a case study, the article points to the Ohio Supercomputer Center’s (OSC) Blue Collar Computing initiative, which rents supercomputer capacity to business and research shops that might otherwise have been barred from entry by high capital investment costs. Many types of companies have taken advantage of Blue Collar Computing to date, and several more “have also used the services by accessing them through OSC partners like the Edison Welding Institute (EWI),” a non-profit organization that provides access to E-Weld Predictor. This application, accessed via a web interface, lets users run simulations of complicated welding tasks on OSC’s 1,650-node IBM “Glenn” cluster, a capability that shaves weeks off the time the same work would take on standard in-house workstations.
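The article doesn’t describe E-Weld Predictor’s internals, but the general pattern, a web front end that hands a simulation off to a batch-scheduled cluster, is straightforward. The sketch below is purely illustrative: it assumes a PBS/Torque-style scheduler reachable via qsub and a hypothetical solver binary named weld_solver, neither of which is confirmed by the story.

```python
# Illustrative sketch of the web-front-end-to-cluster pattern described above.
# Assumptions (not from the article): a PBS/Torque-style scheduler reachable
# via `qsub`, and a hypothetical solver binary called `weld_solver`.
import subprocess
import tempfile
import textwrap

def submit_weld_simulation(mesh_file: str, nodes: int = 16, hours: int = 4) -> str:
    """Write a batch script for a welding simulation and submit it to the queue."""
    script = textwrap.dedent(f"""\
        #!/bin/bash
        #PBS -l nodes={nodes}:ppn=8,walltime={hours}:00:00
        #PBS -N eweld
        cd $PBS_O_WORKDIR
        mpirun weld_solver --mesh {mesh_file}
        """)
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script)
        path = f.name
    # qsub prints the new job ID on stdout; hand it back to the web layer
    # so the user can poll for results later.
    job_id = subprocess.run(["qsub", path], capture_output=True,
                            text=True, check=True).stdout.strip()
    return job_id
```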
Beyond renting time at established supercomputing centers, several companies specialize in these same services, including SGI with its Cyclone product, Cycle Computing with its cloud and cluster rental options, Penguin Computing with its POD service, and offerings from Sabalcore and a handful of others. Times are good for these companies on the revenue front, and if the Chronicle’s article is correct, the good news extends to customers who now have more options for harnessing greater compute power than would otherwise be possible without significant up-front investment.
Full story at San Francisco Chronicle
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate their efforts and to absorb peak computational demands that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
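Panitkin’s talk isn’t detailed here, but the underlying “burst to the cloud at peak times” pattern can be sketched generically. The snippet below is an assumption-laden illustration, not ATLAS or Google Compute Engine tooling; local_queue_depth, submit_local, and submit_to_cloud are hypothetical stand-ins.

```python
# Generic sketch of the cloud-bursting pattern mentioned above.
# `local_queue_depth`, `submit_local`, and `submit_to_cloud` are hypothetical
# stand-ins, not part of the ATLAS or Google Compute Engine tooling.
LOCAL_CAPACITY = 1000   # jobs the in-house cluster can absorb comfortably

def dispatch(jobs, local_queue_depth, submit_local, submit_to_cloud):
    """Run what fits on the in-house cluster; burst the overflow to the cloud."""
    headroom = max(LOCAL_CAPACITY - local_queue_depth, 0)
    local_batch, overflow = jobs[:headroom], jobs[headroom:]
    for job in local_batch:
        submit_local(job)
    if overflow:
        # Peak demand exceeds in-house resources, so rent capacity for the rest.
        submit_to_cloud(overflow)
```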
Frank Ding, engineering analysis and technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for some of them.
Financial institutions are the private-sector players least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private clouds instead, and doing so at great cost.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can exploit these extraordinary data repositories, however, they must first harness and manage their data stores, using technologies that emphasize affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
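As a concrete taste of the heterogeneous style the brief alludes to, here is a minimal OpenCL vector-add run through the pyopencl bindings. It assumes pyopencl and an OpenCL runtime are installed, and it is not tied to any particular AMD product or to the brief’s own examples.

```python
# Minimal OpenCL vector addition via pyopencl (assumes pyopencl and an
# OpenCL runtime are installed; not specific to any vendor's hardware).
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()          # pick any available CPU/GPU device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The same kernel source targets CPUs, GPUs, and other accelerators; the
# context decides where it actually runs.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```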