June 16, 2011
This week at the AMD Fusion Developer Summit in Bellevue, Wash., I sat down for a chat about high performance computing and clouds with Margaret Lewis from the company's server and software division. While she was careful to point out that cloud computing is nothing new, especially for high performance computing, where the concept took root long ago, she did suggest that the technologies have matured to the point where the community that invented clouds can now exploit them more fully.
We avoided definitions and generalities wrapped in marketing; one tends to get enough of that in discussions of cloud computing. Instead, we cut right to the chase to find out what makes AMD (or any other chipmaker, for that matter) invested in clouds, and what AMD is doing to ensure that clouds are suitable for HPC.
One of the more salient threads of our conversation this week concerned the role of the company's Opteron processors. As you'll hear in the clip from our interview below, AMD's strategy for clouds is to provide more real cores that can handle more virtual machines, transactions and computation, all within a defined power envelope that cost-effectively emphasizes memory bandwidth, low memory latency and large memory footprints.
Pitches aside, AMD has taken an interesting approach to creating a product optimized for cloud computing datacenters. As she says above, when asked what differentiates AMD, the company took a different path, avoiding hyperthreading because resources end up shared among logical processors, which can introduce bottlenecks. Their focus, she said, is on developing real cores, not logical cores. This is apparent in the Bulldozer architecture set to roll out in a few months, a core-boosting redesign that emphasizes effectiveness and efficiency.
One part of our discussion that didn't make the final cut was how AMD worked with Microsoft as it built its Azure technology. Lewis said that at the time, AMD was selected because it was the only vendor offering the virtualization technology needed to build a cluster running a complex software stack (database, multiple applications, rich middleware and so on). That technology, called RVI (Rapid Virtualization Indexing), gave Microsoft a way to map the virtual memory of a virtual machine onto the memory managed by the hypervisor (keeping in mind that the hypervisor must track the memory of every virtual machine as well as its own).
She said this approach allowed some of the mapping to be done at the hardware level, which relieved the virtualization software of much of that complexity and freed up capacity to run these ultra-complex stacks. Microsoft, she added, still uses some of this technology to run some of its own cloud-based apps. As she said, “this is a good success story for us because it shows what you can do with off-the-shelf commercial technology after reworking it to fit into today's cloud environments.”
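To make the mapping Lewis describes a bit more concrete, here is a minimal conceptual sketch in Python, purely illustrative and not AMD's or Microsoft's code: a guest virtual address is first translated by the guest's own page table and then by the hypervisor's nested table. With hardware-assisted nested paging such as RVI, the processor walks both levels itself; without it, the hypervisor must maintain software shadow tables that collapse the two mappings into one. The dictionaries and addresses below are invented for illustration.

    # Conceptual sketch only: page tables modeled as page-number -> page-number maps.
    PAGE_SIZE = 4096

    def translate(vaddr, page_table):
        """Translate an address one level using a simple page-number map."""
        page, offset = divmod(vaddr, PAGE_SIZE)
        if page not in page_table:
            raise LookupError(f"page fault on page {page}")
        return page_table[page] * PAGE_SIZE + offset

    def nested_translate(guest_vaddr, guest_page_table, nested_page_table):
        """Guest virtual -> guest physical -> host physical (two-level walk)."""
        guest_paddr = translate(guest_vaddr, guest_page_table)   # guest's own mapping
        host_paddr = translate(guest_paddr, nested_page_table)   # hypervisor's mapping
        return host_paddr

    # Illustrative tables: guest page 2 maps to guest-physical page 7,
    # which the hypervisor has placed at host-physical page 42.
    guest_pt = {2: 7}
    nested_pt = {7: 42}

    addr = 2 * PAGE_SIZE + 0x123
    print(hex(nested_translate(addr, guest_pt, nested_pt)))  # -> 0x2a123

The point of offloading this double walk to hardware is exactly what Lewis describes: the hypervisor no longer has to rebuild a combined mapping every time a guest edits its page tables, which matters when many large virtual machines share one host.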
For more on AMD's broader high performance computing roadmap and other HPC-specific questions, see the companion feature with Lewis at HPCwire.
Jun 19, 2013
Ruan Pethiyagoda, Cameron Boehmer, John S. Dvorak, and Tim Sze, trained at San Francisco's Hack Reactor, an institute built around intense, fast-paced programming instruction, put together a program based on an N-Queens solver by the University of Cambridge's Martin Richards and modified it to run in parallel across multiple machines.
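For readers unfamiliar with the problem, here is a rough, hypothetical sketch in Python (not the team's actual code, nor Richards' solver) of the usual way an N-Queens count is parallelized: each worker fixes the row-0 queen in a different column, backtracks over the remaining rows, and the partial counts are summed. Replacing the local process pool with a job queue extends the same split across machines.

    from multiprocessing import Pool

    def count_from(row, cols, diags1, diags2, n):
        """Count completions of a partial placement by backtracking row by row."""
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diags1 or (row + col) in diags2:
                continue  # attacked by an earlier queen
            total += count_from(row + 1, cols | {col}, diags1 | {row - col},
                                diags2 | {row + col}, n)
        return total

    def count_with_first_queen(args):
        col, n = args
        return count_from(1, {col}, {-col}, {col}, n)

    def parallel_nqueens(n, workers=4):
        with Pool(workers) as pool:
            return sum(pool.map(count_with_first_queen, [(c, n) for c in range(n)]))

    if __name__ == "__main__":
        print(parallel_nqueens(8))  # 92 solutions for the classic 8x8 board

The subproblems are completely independent, which is why the search scales well across cores and machines with almost no coordination beyond the final sum.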
Jun 17, 2013
With that in mind, Datapipe hopes to establish itself as a green-savvy HPC cloud provider with its recently announced Stratosphere platform. Datapipe markets Stratosphere as a green HPC cloud service and, to that end, has partnered with Verne Global, whose Icelandic datacenter is known for its green-computing credentials.
Jun 12, 2013
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. To that end, IBM has released what it calls Redbooks, in part to help institutions move high performance computing applications to the cloud.
Jun 06, 2013
The San Diego Supercomputer Center has launched a public cloud system for area universities, designed specifically to run on commodity hardware with high-performance solid-state drives. The center, which currently holds 5.5 PB of raw storage, is open to educational and research users in the University of California.