September 21, 2012
Sept. 20 — Databarracks has been a managed Cloud Service Provider for almost a decade, implementing disaster recovery environments for companies of all sizes at a global level.
Commenting on the Certification, Peter Groucutt, managing director at Databarracks, says: "Being certified to the Cloud Service Provider Code of Practice is further indication that Databarracks is committed to delivering the highest standards of service.
"As members of the Cloud Industry Forum it is doubly important for us to meet the requirements set by the organisation. The Certification is recognition that, as a responsible Cloud Service Provider offering robust cloud solutions, we satisfy the Code's stipulations for accountability, security and transparency."
The Cloud Industry Forum established the Cloud Service Provider Code of Practice to promote trust, security and transparency within the sector. Managed by CIF's independent certification partner, APM Group, the Certification process allows vendors to demonstrate to end users appropriate transparency about their business and services, commitment to operational capabilities and practices, and executive accountability for the declarations made to achieve Certification.
This, in turn, offers security and assurance to end users, enabling them to make an informed and confident decision about a vendor's capabilities.
Andy Burton, chairman of CIF, says: "CIF has recognised, and has been campaigning for some time for, the introduction of industry-wide standardised definitions of cloud services to provide end users with much-needed clarity and to drive cloud adoption. CIF therefore established the only certifiable Code of Practice for Cloud Service Providers, which enables them to stand out from the crowd in a way that provides a degree of assurance and enables rational comparison between vendors.
"We are delighted that Databarracks, not only as a member but as an established Cloud Service Provider, has attained Certification and is leading by example."
Groucutt adds: "CIF's Cloud Service Provider Code of Practice requires Databarracks to be transparent to customers and prospective customers about certain aspects of our services. We have adopted these elements and clarified them as part of our business offering going forward. Specific company information is also now available on our website alongside the CIF certified logo as a mark of our commitment to quality, rigour and transparency."
Richard Pharro, CEO at APM Group, concludes: "An essential part of the value of the CSP Code of Practice is the process itself. Becoming self-certified to the Code of Practice is rigorous and requires time and effort. That said, we aim to make the process as clear and transparent as possible, and to provide guidance and care to applicants where needed, during and after Certification."
The CIF Certified logo will be visible on certified companies' websites and hyperlinked to a set of public declarations setting out basic information that any potential customer may wish to know.
About the Cloud Industry Forum (CIF)
The Cloud Industry Forum (CIF) was established in direct response to the evolving supply models for the delivery of software and IT services, which have expanded well beyond the traditional on-premise method to embrace hosted and/or pay-as-you-use Cloud solutions.
CIF's purpose is twofold. First, to drive a common and public level of transparency about the capability, substance and best practices of online Service Providers (SaaS, PaaS, IaaS, Web hosting providers, etc.) through a process of self-certification to a Code of Practice. Second, this Code of Practice, and the use of the related Certification Mark on participants' websites, provides comfort and promotes trust to businesses and individuals wishing to leverage the commercial, financial and agile operations capabilities that cloud-based and hosted solutions can offer. CIF ensures the integrity and governance of the self-certification process through regular random audits, as well as by investigating complaints from parties that challenge any specific participant's self-certification status.
Databarracks is the largest UK provider of managed cloud backup and DR services. It also offers private cloud and co-location services from its Tier 4, ex-nuclear bunker data centres.
The ever-growing complexity of scientific and engineering problems continues to pose new computational challenges. We therefore present a novel federation model that enables end users to aggregate heterogeneous resources for large-scale problems. The feasibility of this federation model has been proven, in the context of the UberCloud HPC Experiment, by gathering the most comprehensive information to date on the effects of pillars on microfluidic channel flow.
Large-scale, worldwide scientific initiatives rely on cloud-based systems both to coordinate efforts and to manage computational workloads at peak times that cannot be absorbed by their combined in-house HPC resources. Last week at Google I/O, Brookhaven National Lab's Sergey Panitkin discussed the role of Google Compute Engine in providing computational support to ATLAS, a high-energy particle detector at the Large Hadron Collider (LHC).
Frank Ding, engineering analysis & technical computing manager at Simpson Strong-Tie, discussed the advantages of utilizing the cloud for occasional scientific computing, identified the obstacles to doing so, and proposed workarounds to some of those obstacles.
May 23, 2013 |
The study of climate change is one of those scientific problems where it is almost essential to model the entire Earth to attain accurate results and make worthwhile predictions. In an attempt to make climate science more accessible to smaller research facilities, NASA introduced what it calls 'Climate in a Box,' a system it describes as a desktop supercomputer.
May 16, 2013 |
When it comes to cloud, long distances mean unacceptably high latencies. Researchers from the University of Bonn in Germany examined the latency issues of doing CFD modeling in the cloud, running a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.
05/10/2013 | Cleversafe, Cray, DDN, NetApp, & Panasas | From Wall Street to Hollywood, drug discovery to homeland security, companies and organizations of all sizes and stripes are coming face to face with the challenges – and opportunities – afforded by Big Data. Before anyone can utilize these extraordinary data repositories, however, they must first harness and manage their data stores, and do so utilizing technologies that underscore affordability, security, and scalability.
04/02/2012 | AMD | Developers today are just beginning to explore the potential of heterogeneous computing, but the potential for this new paradigm is huge. This brief article reviews how the technology might impact a range of application development areas, including client experiences and cloud-based data management. As platforms like OpenCL continue to evolve, the benefits of heterogeneous computing will become even more accessible. Use this quick article to jump-start your own thinking on heterogeneous computing.