Large-scale, worldwide scientific initiatives increasingly rely on cloud-based systems both to coordinate work and to absorb peak computational demand that cannot be met by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab’s Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of using the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds for several of them.
The private-sector industry least likely to adopt public cloud services for data storage is financial services. Because they hold one of the most sensitive and heavily regulated data types, personal financial information, banks and similar institutions are largely moving toward private clouds instead, and doing so at great cost.
In this week's hand-picked assortment, researchers explore the path to more energy-efficient cloud datacenters, investigate new frameworks and runtime environments that are compatible with Windows Azure, and design a unified programming model for diverse data-intensive cloud computing paradigms.
At the 2013 Open Fabrics International Developer Workshop in Monterey, California, VMware's in-house HPC expert Josh Simons delivered a presentation on the Software-Defined Datacenter. At its essence, a software-defined datacenter is a prescriptive model for bringing the benefits of virtualization to the rest of the datacenter.
When it comes to the cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of CFD modeling in the cloud by running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
Australian visual effects company Animal Logic is considering a move to the public cloud.
The program provides cash awards of up to $10,000 for the best open-source end-user applications deployed on a 100G network.
For engineers looking to leverage high-performance computing, the accessibility of a cloud-based approach is a powerful draw, but there are costs that may not be readily apparent.
Medical imaging data is exploding into the cloud, with 6 billion images in Dell's archive alone.
After a lengthy incubation phase, Microsoft is finally ready to release its IaaS product into the wild. AWS, look out.
For a growing number of non-traditional HPC workloads, the cloud is the place to be.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore heterogeneous computing, but the potential of this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.
Complimentary Webcast! Break free from the database vendors that force you to keep investing in additional skills and hardware to accommodate the inefficiencies of their software. Learn how you can achieve higher DBA efficiency and give your DBAs more time to focus on strategic projects and add more value to your business. Join us to hear best practices and client experiences on reducing both the risk and cost associated with growing Data Center complexity.
NFS has been the standard protocol for NAS systems since the 1980s. However, with the explosive growth of Linux clusters running demanding technical computing applications, NFS is no longer sufficient for these big data workloads. After years of development effort, driven by Panasas and others, pNFS is now just around the corner and promises to dramatically improve Linux client I/O performance thanks to its parallel architecture. Watch the on-demand webinar – “pNFS: Are We There Yet?”
Independent HPC consultant for cluster, grid, and cloud computing and for data- and compute-intensive applications; General Chair of the ISC Cloud Conference. More > >
Jose Luis Vazquez-Poletti
Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Complutense University of Madrid (Spain) and a Cloud Computing Researcher at the Distributed Systems Architecture Research Group. He is directly involved in EU-funded projects such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives. More > >
An HPC industry consultant and cloud evangelist, Steve Campbell is a seasoned senior HPC executive. More > >
Former Director of Information Technology for Pfizer's R&D division, current CIO for BRMaches & Associates. More > >
Principal Investigator and Director for the National Nuclear Security Administration and DOE-sponsored Center for Disaster Recovery. More > >