November 02, 2011
Earlier today AMD met with its closest partners to discuss the future of supercomputing and the challenges associated with exascale computing.
Gathered at the Four Seasons Hotel in San Francisco, members of the high-ranking industry panel, which included executives from AMD and Cray, took turns pointing out the perceived barriers to exascale. The usual suspects emerged: power requirements, parallel programming obstacles, hardware failure rates, and so forth.
The group also weighed in on the subject of HPC-as-a-Service, leveraging the cloud to run compute-intensive applications, as reported in a news item from V3.co.uk.
According to Chuck Moore, AMD corporate fellow and technology group chief technology officer, the current cloud platform model is not a good match for next-generation supercomputing.
"You tend to write applications that spread out among many systems and come back with a result," Moore said of the cloud paradigm.
"While certain types of HPC spread work out, they do so with a very different set of latency constraints and thinking; it is not like you can just pick up that application and run it on a cloud."
While it's true that the extra software layer imposes some limitations on HPC in the cloud, an on-demand model also brings benefits, namely elastic scalability and paying only for the capacity you use. Not every HPC application is suited to a distributed computing model, but many are.
As for the exascale timeframe, participants' responses were mixed, with 2020 cited as the outside estimate. Of course, by then, many of the barriers to HPC-as-a-Service that Moore mentions may well be resolved.
Full story at V3.co.uk