October 03, 2012
Oct. 3 — Data processing, data management and data storage provider OCF plc has significantly expanded its HPC On-Demand service, enCORE. It can now deliver up to 8,000 cores of processing power to a wide range of business sectors in the UK and beyond.
The expansion of the service follows a newly signed agreement between OCF and the Science and Technology Facilities Council's (STFC) Hartree Centre. The Centre, launched in 2012 in association with IBM, is a research collaboratory formed as a result of a £37.5 million investment by the UK government.
The enCORE service will use additional processing power from The Hartree Centre's "Blue Wonder," a new IBM System x iDataPlex cluster comprising 8,192 Intel Xeon E5-2670 processor cores. Tests show that the Blue Wonder iDataPlex cluster can achieve 206.3 teraFLOPS. Its 48 TB shared memory capacity also makes it the largest shared memory cluster in the UK. Blue Wonder was installed and configured by OCF in partnership with IBM.
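Figures like these can be sanity-checked with the standard back-of-the-envelope formula: theoretical peak = cores × clock rate × FLOPs per cycle per core. The clock and per-cycle values below are assumptions drawn from the E5-2670's published specifications (2.6 GHz base clock, 8 double-precision FLOPs per cycle with AVX), not figures from the article; measured results such as the 206.3 teraFLOPS quoted above also depend on turbo clocks and system configuration.

```python
# Back-of-the-envelope theoretical peak for a CPU cluster.
# Assumed figures (not from the article): Xeon E5-2670 at its
# 2.6 GHz base clock, 8 double-precision FLOPs per core per cycle (AVX).
cores = 8192
clock_hz = 2.6e9
flops_per_cycle = 8

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.1f} teraFLOPS")  # 170.4 at base clock
```

Turbo clocks (up to 3.3 GHz on this part) raise the theoretical ceiling, which is why measured benchmark results can land above the base-clock estimate.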
First launched in November 2010, enCORE was the first service in the UK through which commercial organisations could harness spare processing power from academic and research-based high-performance server clusters. The service has since been used successfully by firms such as Engys, Actiflow, CVIS, BHR and Renuda to meet both ongoing and temporary "burst" requirements for additional processing power.
"enCORE has enabled our clients to re-evaluate their HPC strategy. Ease of use, exceptional technical support, scalability and price/performance have been key factors in their decision to use an off-premise HPC cluster. We've seen this service directly lead to firms winning contracts they could not otherwise have delivered," says Jerry Dixon, HPC on Demand business development manager, OCF plc. "Our expansion of the enCORE service will now enable larger businesses with significant and complex HPC workloads to utilise this flexible facility and deliver tangible business benefits."
Dr David Kelsall, senior consultant at fluid engineering consultancy, BHR Group comments: "OCF's enCORE service has enabled us to cope with peaks in demand for capacity when we are undertaking simultaneous consultancy and research projects. It is a very easy service to use, with an uncomplicated and simple structure that doesn't require any previous HPC knowledge to operate."
He adds: "The enCORE service allows us to tackle much larger calculations, up to four times greater than we can manage in-house. As a result, we aren't constrained from taking on more projects than our in-house computing resources allow. It also helps to free up the time of our engineers, who can work on other projects with the extra capacity provided by OCF and can expect a quicker turnaround of analysis results with the enCORE service. We derive a lot of comfort from OCF providing a UK-based 'cloud service', so we know exactly where our data is processed. The enCORE service is cost effective as we can tap into it when required, without the need to tie up a lot of capital for the occasional use of HPC resources."
Zvi Tannenbaum, owner of independent software vendor Advanced Cluster Systems, says, "OCF's enCORE HPC On-Demand service has acted as a platform enabling us to test and refine our SET (Supercomputing Engine Technology). SET, an MPI (Message Passing Interface)-based library, enables mainstream software writers to apply MPI parallelisation to their software quickly and cheaply, turning it into a high-performance version without code changes and making it suitable for running efficiently on multicore machines and clusters such as OCF's. SET is designed to take the complexity out of MPI parallel programming, making it more readily available to SMEs and other organisations with no in-depth knowledge of MPI.
"MPI is the de facto standard for communication among processes in supercomputing centres, and is fully supported by OCF. Now that SET is fully tested and operational, OCF is the first HPC On-Demand service provider in the UK to offer the SET run-time environment. OCF demonstrated exceptional technical support and responsiveness during the installation process and continues to deliver professional and timely advice and support."
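The pattern that MPI codes follow — scatter work to independent processes, compute in parallel, then gather and reduce the results — can be illustrated without an MPI runtime. SET itself is proprietary and not shown here; as a stand-in, this sketch uses Python's multiprocessing module in place of MPI ranks, so every name in it is illustrative rather than part of SET or OCF's service.

```python
# Illustrative scatter/compute/gather pattern, analogous to
# MPI_Scatter followed by MPI_Reduce. Worker processes stand in
# for MPI ranks; each sums its own chunk independently.
from multiprocessing import Pool

def partial_sum(chunk):
    # Work done independently by each "rank"
    return sum(chunk)

def parallel_sum(data, nworkers=4):
    data = list(data)
    # Scatter: split the data into one chunk per worker
    size = (len(data) + nworkers - 1) // nworkers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nworkers) as pool:
        partials = pool.map(partial_sum, chunks)  # compute in parallel
    return sum(partials)                          # gather and reduce

if __name__ == "__main__":
    print(parallel_sum(range(100)))  # 4950
```

In real MPI code the same structure appears as explicit rank-to-rank messages; libraries like SET aim to hide exactly this boilerplate from the application author.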
The enCORE service
· As part of the HPC on Demand service, OCF is responsible for pre-sales qualification with customers to determine the volume of processing power required, and for benchmarks demonstrating that enCORE can run specific HPC applications efficiently.
· The service is scalable and suitable for SMEs through to major corporate and academic/research users.
· By working with OCF, customers receive an SLA-driven service, commercial terms and account management, strong technical resources, and first-class technical support and assistance for maximum efficiency.
· OCF also holds a number of pre-installed and optimised application codes ready for use with the service. It can work with Independent Software Vendors to arrange application licensing for the term of a customer's contract, or can potentially access the end user's licences directly, thus ensuring adherence to the ISV's licensing terms.
· enCORE uses the latest Intel processors and NVIDIA GPU hardware for maximum performance.
· Data transfer between the customer and enCORE is handled by enCORE's simple secure web interface or, in the case of extremely large files, by a secure shuttle service.
· Contracts with OCF are flexible, and use of enCORE involves a small annual subscription plus a cost per core hour used; interested parties should contact OCF for pricing.
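The pricing model described above — an annual subscription plus a cost per core hour used — is straightforward to estimate. Since OCF asks interested parties to contact them for pricing, both rates in the sketch below are purely hypothetical placeholders, not OCF's actual figures.

```python
# Hypothetical cost model for a subscription + per-core-hour service.
# Both rates are made-up placeholders, NOT OCF's actual pricing.
ANNUAL_SUBSCRIPTION = 1000.00  # hypothetical flat fee per year
RATE_PER_CORE_HOUR = 0.05      # hypothetical cost per core-hour

def yearly_cost(core_hours):
    """Total annual cost for a given number of core-hours used."""
    return ANNUAL_SUBSCRIPTION + RATE_PER_CORE_HOUR * core_hours

# e.g. a burst job on 512 cores for 48 hours = 24,576 core-hours
print(yearly_cost(512 * 48))  # 2228.8
```

The appeal of this structure for "burst" users is visible in the arithmetic: in quiet years the cost collapses to the subscription alone, rather than the capital cost of an idle in-house cluster.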