August 17, 2010
One of the most salient arguments for supercomputing in the cloud is difficult to disagree with in the context of the cost of building and then maintaining what could qualify as a supercomputer. As Jeffrey Clark noted today in The Datacenter Journal, “as long as the expenses associated with using the cloud to perform supercomputing tasks do not quickly approach the cost associated with implementing an in-house supercomputer, the risks (financially, at least) are minimal.”
At this point, even factoring in the complex management software and the initial application and migration costs of moving once in-house operations to the cloud, the prospect of the cloud being out-priced by physical infrastructure is not a concern in the slightest. What is worrisome for cloud providers, even with the price advantage on their side, is convincing supercomputer and HPC users to entrust their workloads to an “untrusted technology.”
While many will argue that the cloud is, first of all, not a technology to begin with and, furthermore, that it is not mistrusted, let’s set that aside for a moment and acknowledge that for many, especially in the enterprise space, the cloud is not fully tested and, as such, is not fully trusted. This is particularly the case when it comes to mission-critical tasks, and that paradigm doesn’t appear to be shifting in the cloud’s favor anytime soon due to security concerns.
It is only when the big nasty “S” word (security, of course) rears its ugly head that this risk-benefit argument loses its steam, but for many who do require supercomputing capacity, security is of the utmost importance. Bio-IT companies, financial services firms, and manufacturers who guard their designs with a fervor roughly parallel to what they would devote to their own children all must weigh this cost, and in some cases they are required to pay hefty fines for lapses in security or compliance obligations.
Jeffrey Clark stated that from a cost perspective, “Giving the cloud a test run or two may cost some money, but it may also offer significant returns if successful; that is, if cloud-based supercomputing could be of potential benefit for a particular company, that company has little to lose by trying.” Again, there could be something big to lose, and no one wants to take that chance. This is why analyzing the experiences of early adopters is so important at this stage. While the cloud is helping small and mid-sized businesses sail without question, when we venture into the realm of HPC the situation is completely different. What might be a mild concern for an SMB is magnified exponentially for large-scale computing users.
Full story at Datacenter Journal
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. Thus, we present a novel federation model that enables end-users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been demonstrated, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational demand at peak times that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of the Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.