September 29, 2010
“The validation of any cloud-based application involves additional considerations and risks that must be taken into account during the planning process. Any life science company looking to validate cloud-based systems must adjust its system qualification process to prove to an auditor that the application in question is installed and operating correctly while also meeting the user's requirements.”
Previous posts have provided a general overview of what it takes to validate applications to meet FDA 21 CFR Part 11 guidelines. These guidelines are put forth by the FDA and apply to any system maintaining electronic records related to the research, testing, manufacturing, and distribution of a medical device or drug.
In a prior post I mentioned three of the components of the Part 11 validation process: the Installation Qualification (IQ), the Operational Qualification (OQ), and the Performance Qualification (PQ). Several other critical documents make up the overall validation package that would be reviewed by the FDA, including:
- Validation Plan: the document that describes the software validation strategy, scope, execution process, roles, responsibilities, and general acceptance criteria for each system being validated
- Functional Requirements: these are based on the user requirements and define the processes and activities to be supported by the system
- Traceability Matrix: used to cross-reference the functional requirements to the actual validation test scripts, ensuring that every user requirement is tested and proven to be fulfilled
- Installation Qualification: a set of test scripts that provide verification that the hardware and software are properly installed in the environment of intended operation
- Operational Qualification: verification that the hardware and software are capable of consistently operating as intended
- Performance Qualification: proof that the hardware and software can consistently perform within predefined specifications and meet the requirements as defined
- Validation Summary Report: a report that summarizes the validation activities and results and provides the approving individuals with a recommendation that the software is acceptable or unacceptable for use
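To make the traceability matrix idea concrete, here is a minimal sketch in Python; the requirement IDs and test script names are hypothetical, and a real matrix would live in a controlled document, but the cross-referencing logic is the same: every requirement must map to at least one executed test script, and any gap must be flagged.

```python
# Hypothetical traceability matrix: each functional requirement
# maps to the validation test scripts (IQ/OQ/PQ) that cover it.
requirements = ["FR-001", "FR-002", "FR-003", "FR-004"]

trace_matrix = {
    "FR-001": ["IQ-01"],
    "FR-002": ["OQ-03", "PQ-02"],
    "FR-003": ["PQ-05"],
    # FR-004 intentionally has no covering test script yet
}

def untested(requirements, trace_matrix):
    """Return the requirements with no covering test script."""
    return [r for r in requirements if not trace_matrix.get(r)]

gaps = untested(requirements, trace_matrix)
if gaps:
    print("Coverage gaps:", ", ".join(gaps))
```

Running this flags FR-004 as untested; in practice that gap would have to be closed before the Validation Summary Report could recommend the system for use.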
Every life science company must have SOPs that spell out the validation process, roles, responsibilities, and what must be covered in the validation package itself. On top of these sit a number of associated procedures that provide additional guidance on topics such as change control, documentation practices, auditing, access controls, and development methodologies. There can also be a validation master plan that describes which systems within the company need to be validated (a company's HR system, for example, may not) and provides templates and additional details on how the validation process must occur.
As you can see, there is an entire ecosystem of procedures, templates, and documents that make up the Part 11 validation process within any company that must adhere to FDA regulatory guidelines. For many large-scale deployments, the validation process can take nearly as much time and effort as the system implementation itself. The question is: how does the advent of cloud computing impact all of these processes and protocols? What additional items will the validation process need to take into account when validating cloud-based applications? I will provide some basic information below and add more detail in a future blog entry.
- Validation Master Plan: the VMP will need to spell out what will or will not be added to the validation process due to the use of cloud computing, across all three cloud service models (IaaS, PaaS, and SaaS). This can include performing on-site audits, obtaining vendor employee training records, and auditing the vendor's change control, software development, and security access procedures
- System Validation Plan: this will depend on which cloud service model the system uses. For internal applications running on IaaS, the validation plan will need to address the additional requirements for validating the infrastructure; if PaaS is being leveraged, the plan will need to prove the installation and operation of the PaaS environment; and for SaaS, the plan will need to define the validation process for an externally provisioned application
- Functional Requirements: there may need to be additional functional requirements as part of the cloud validation process covering security, access, encryption, latency, and response times
- Installation Qualification: as described in the validation plan, the type of cloud service being utilized will have a significant impact on what must be validated. For IaaS, whether the cloud is public or private can make a big difference; for PaaS, validation of the supporting infrastructure can be provided by the vendor and evaluated for inclusion in the overall package; and for SaaS, vendor-supplied qualification evidence would be handled in much the same way
- Operational Qualification: once again depending on the type of cloud service being used, additional OQ testing will need to address the operational effectiveness of the environment and take into account any changes required to qualify the cloud environment
- Performance Qualification: the PQ may incorporate additional scripts to prove that the system is meeting its defined user requirements; other test scripts may be necessary to prove such things as response times, security, or backup/recovery of cloud-based applications
- Validation Summary Report: the thrust of the summary report will not change, as its purpose is to collate all of the information generated during the validation process and to ensure that it supports the recommendation that the system meets its initial specifications/requirements and is ready for use
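As a rough illustration of the kind of additional PQ test script mentioned above, the Python sketch below times repeated invocations of a transaction against a predefined response-time requirement. The transaction function and the 2-second threshold are hypothetical stand-ins; in a real PQ the callable would issue an actual request to the cloud-hosted application, and the threshold would come from the documented user requirements.

```python
import time

def check_response_times(call, runs=5, max_seconds=2.0):
    """Time repeated invocations of `call` and report whether the
    worst case stays within the predefined response-time limit."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        call()  # in a real PQ: a request to the cloud-hosted application
        timings.append(time.perf_counter() - start)
    worst = max(timings)
    return worst <= max_seconds, worst

# Hypothetical stand-in for a real transaction against the application
def sample_transaction():
    time.sleep(0.05)

passed, worst = check_response_times(sample_transaction)
print("PASS" if passed else "FAIL", f"- worst case {worst:.3f}s")
```

The pass/fail result and the recorded timings would then be captured as objective evidence in the PQ documentation, traceable back to the response-time requirement being verified.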
By taking into account the validation process changes required to support cloud-based applications, life science companies can provide the proper levels of assurance to the FDA that their applications running in the cloud meet the necessary Part 11 guidelines.
Posted by Bruce Maches - September 28, 2010 @ 10:37 PM, Pacific Daylight Time
Former Director of Information Technology for Pfizer's R&D division, current CIO for BRMaches & Associates.