HPC is a Workload; Cloud is an Architecture: A Chat With Intel's Boyd Davis
Post Date: November 03, 2010 @ 12:31 AM, Pacific Daylight Time
Blog: Behind the Cloud
We caught up with Boyd Davis, Vice President at Intel's Architecture Group, for this video interview from ISC Cloud in Frankfurt, Germany, to talk about how, when considering HPC in the cloud, one can view HPC as a workload and cloud computing as an architecture.
Last week at the ISC Cloud event in Frankfurt, Germany, some of the leading figures in the high-performance computing and cloud space met to discuss progress and current challenges as those in HPC consider the cloud for their applications. On site was Tom Tabor, publisher of HPCwire and HPC in the Cloud, who took photos, a select few of which we'll share here.
Several universities have launched continuing education programs on cloud and virtualization, and Harvard has joined their ranks. To learn more about what such a course involves, we spoke with the instructor, David Malan, about hands-on EC2 use and class goals.
House of the Rising Technology: Disruptive Innovations at SC10
Post Date: October 21, 2010 @ 11:53 AM, Pacific Daylight Time
Blog: Behind the Cloud
At SC10, the Disruptive Technologies showcase will examine new computing architectures and interfaces that will significantly impact the high-performance computing field over the next five to 15 years, providing an early glimpse into the future of high-performance computing.
While there are several upcoming events that address both HPC and cloud, there is a notable lack of case studies and balanced, non-vendor-driven discussions about practical implementations--a far more useful focus for those new to the space or considering the move.
A recently released survey-based report from The Linux Foundation and Yeoman Technology Group examined the growth of Linux in the large-scale enterprise space. It also shed light on possible trends related to cloud computing--and the role of Linux as an OS in virtualized environments.
Robert Graybill on Integrating HPC's "Missing Middle"
Post Date: October 06, 2010 @ 1:42 AM, Pacific Daylight Time
Blog: Behind the Cloud
Robert Graybill, CEO and President of Nimbis Services, discusses in a video interview the core issues surrounding HPC's missing middle, particularly in the manufacturing context. He identifies what potential users require and how they can be granted access to much-needed resources to drive growth.
Virtualization and Performance: Brocade's Dr. Maria Iordache
Post Date: October 04, 2010 @ 12:53 PM, Pacific Daylight Time
Blog: Behind the Cloud
Dr. Maria Iordache, formerly of IBM and now with Brocade, discusses the view that virtualization undermines the true definition of HPC, since the critical "P" of performance is sacrificed to the added latency that virtualized resources bring.
During the HPC 360 event last week, I caught up with Al Stutz from Avatec, who discussed using geographically distributed InfiniBand clusters in a common InfiniBand mesh, which will be demonstrated at SC10.
During last week's HPC 360 event, Matt Dunbar, Chief Software Architect for SIMULIA, discussed the challenges of running out of capacity on in-house systems and what evaluation measures are required when considering on-demand resources for post-processing.
Earl J. Dodd, President of Ideas And Machines, Inc. and i3D Inc., is an independent HPC consultant for cluster, grid, and cloud computing and for data- and compute-intensive applications, and General Chair of the ISC Cloud Conference.
Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Complutense University of Madrid (Spain), and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group. He is directly involved in EU funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives.
An HPC industry consultant and cloud evangelist, Steve Campbell is a seasoned senior HPC executive.
Former Director of Information Technology for Pfizer's R&D division, current CIO for BRMaches & Associates.
Sue Korn is a Senior Analyst at Intersect360 Research specializing in Edge HPC applications, and a 20-year veteran of the Financial Services Industry. In her role at Intersect360 Research, Korn spearheads the company's analysis of the drivers and barriers of HPC adoption in business environments and the growing role of Edge HPC applications.
Scott Clark has been an infrastructure solution provider in the EDA/Semiconductor industry for almost 20 years.
Ignacio M. Llorente, Ph.D in Computer Science (UCM) and Executive MBA (IE Business School), is a Full Professor in Computer Architecture and Technology, and the Head of the Distributed Systems Architecture Research Group at Complutense University of Madrid.
Joshua Geist is the founder and CEO of Geminare Incorporated, an innovator in cloud-based enablement technologies for the Recovery as a Service market. Combining a degree in Physics with over 20 years of technology experience, Joshua's passion lies in solving technology challenges for the mid-sized business market.
Miha Ahronovitz specializes in cloud software, products, and business models and led product and business strategy for Sun Microsystems' HPC Grid and Cloud division. Following Sun's merger, Miha is now the Principal of Ahrono Associates.
Edward J. Lucente is V.P. of Business Development at Data Center Rebates, Inc., an IT efficiency consultancy based in Carlsbad, CA, whose professional services focus on data center energy efficiency (DCEE), leasing integrated with technology refreshes, and negotiation of IT energy rebates. (Ed is a rabid Red Sox fan also.)
Craig Lund is a consultant focused on specialized markets for High Performance Computing. He is best known from his many years as CTO of Mercury Computer Systems.
Jake is a software executive, writer and blogger. Based in Raleigh, North Carolina, he is currently the chief marketing officer for rPath. Feel free to contact Jake via email at firstname.lastname@example.org
Tom is the publisher of HPC in the Cloud. He has over 30 years of experience in business-to-business publishing, with the last 22 years focused primarily on High Productivity Computing (HPC) technologies.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate work and to absorb peak computational loads that exceed their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a detector of high-energy particles at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
Financial institutions are the private industry least likely to adopt public cloud services for data storage. Holding the most sensitive and heavily regulated of data types, personal financial information, banks and similar institutions are mostly moving toward private cloud services -- and doing so at great cost.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined those latency issues by running a common CFD code on Amazon EC2 HPC instance types, using both CPU and GPU cores.
May 10, 2013 |
Australian visual effects company, Animal Logic, is considering a move to the public cloud.
May 10, 2013 |
Program provides cash awards up to $10,000 for the best open-source end-user applications deployed on 100G network.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.