By: Mark E. Newton

The MHRA Inspectorate’s first blog on data integrity (Ref 1) has a golden nugget that everyone should appreciate:

“.. manufacturers typically focus data integrity and validation resources on large and complex computerised systems, while paying less attention to other systems with apparent lower complexity. Whereas simple machines may only require calibration, the data integrity risk associated with systems linked to user configurable software (e.g. PLC-linked production equipment and infra-red / UV spectrophotometers) can be significant, especially where the output can be influenced (modified or discarded) by the user.”

Data integrity is often lost at the point of data collection: the source. And once lost, it is gone; no ELN, LIMS, or ERP control can recover it. Get this point: improving data integrity is largely a "bottom up" activity. To illustrate, let's explore two (of many) ways that integrity can be lost at the collection point, even with well-validated systems:

(1)  Some instruments create working files in a local directory. Unfortunately, they also store test result files in the same directory. As a result, users must have write access to that directory or the equipment will not function. The side effect of this design: the user can delete, move, rename, or copy the test result files outside the application, with no record of the action;

(2)  Standalone instruments are commonly connected to ELN, lab execution, or LIMS systems for direct transfer and parsing of test result files. This is great, but it has a gap: users are not forced to transfer ALL test results. It is possible to perform an assay several times, pick a favorite, and transfer only that one. Only by reviewing data files at the source and comparing them to the receiving system could this practice be discovered. Standalone instruments therefore require routine run management to verify that all data generated by the instrument are part of a testing (and review) record.
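The source-versus-receiving-system comparison described above can be automated. The sketch below is purely illustrative (the `.res` extension, directory layout, and the transferred-record list are assumptions, not any vendor's format): it lists result files present on the instrument workstation that were never transferred to the receiving system.

```python
# Illustrative reconciliation check: compare raw result files at the
# source (instrument PC directory) against the set of file names the
# receiving system (e.g. LIMS) actually ingested. All names and the
# ".res" extension are hypothetical stand-ins.
import tempfile
from pathlib import Path


def reconcile(instrument_dir: Path, transferred: set[str]) -> list[str]:
    """Return result files found at the source but absent from the
    receiving system -- candidates for an untransferred (hidden) run."""
    source_files = {p.name for p in instrument_dir.glob("*.res")}
    return sorted(source_files - transferred)


# Usage sketch: a temporary directory stands in for the instrument PC.
with tempfile.TemporaryDirectory() as d:
    for name in ("assay_001.res", "assay_002.res", "assay_003.res"):
        (Path(d) / name).write_text("raw result data")
    orphans = reconcile(Path(d), transferred={"assay_002.res"})
    print(orphans)  # -> ['assay_001.res', 'assay_003.res']
```

In practice the transferred set would be pulled from the receiving system's records, and any orphan file would trigger a review rather than automatic deletion.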

The combination of (1) and (2) is a "perfect storm" for data corruption. The user can run several product assays, pick and forward one, then delete all the assays not forwarded. If this happened in your company, or at your CRO, CMO, or CLO, how would you detect it?
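One hedged answer to "how would you detect it?" is a periodic integrity snapshot of the result directory: record each file name and a hash of its contents, then diff successive snapshots to surface deletions, renames, or edits made outside the application. This is a minimal sketch under those assumptions, not a substitute for instrument audit trails or technical access controls.

```python
# Illustrative integrity snapshot of a result directory. Diffing two
# snapshots reveals files that vanished (deleted or renamed) or whose
# contents changed outside the application. Paths are hypothetical.
import hashlib
from pathlib import Path


def snapshot(result_dir: Path) -> dict[str, str]:
    """Map each file name to the SHA-256 digest of its contents."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in result_dir.iterdir() if p.is_file()}


def diff_snapshots(before: dict[str, str],
                   after: dict[str, str]) -> dict[str, list[str]]:
    """Compare two snapshots taken at different times."""
    return {
        # Present earlier, gone now: deleted or renamed outside the app.
        "missing": sorted(before.keys() - after.keys()),
        # Same name, different hash: modified outside the app.
        "changed": sorted(n for n in before.keys() & after.keys()
                          if before[n] != after[n]),
    }
```

Run on a schedule (and with the snapshot store write-protected from lab users), a non-empty "missing" or "changed" list becomes a trigger for investigation rather than something discoverable only by a painstaking manual review.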

These points alone illustrate why it is so critical to review instrument systems and their configurations. You must know what data are managed, where they are stored, and what security vulnerabilities result from the system design or configuration. This knowledge empowers you to apply controls that mitigate your risks to reasonable or acceptable levels.

Organizational data integrity is only possible with data integrity at the source, and that requires an understanding of your business process and its critical data.

Ref 1 – "Good Manufacturing Practice (GMP) data integrity: a new look at an old topic, part 1" by David Churchward. MHRA Inspectorate Blog: https://mhrainspectorate.blog.gov.uk/2015/06/25/good-manufacturing-practice-gmp-data-integrity-a-new-look-at-an-old-topic-part-1/

Are you looking for a hands-on approach for identifying, mitigating, and remediating potential causes of breaches in data integrity?

Don’t miss this special half-day Data Integrity Workshop focused on key data integrity issues facing the pharmaceutical product lifecycle. This interactive workshop will identify important regulatory issues impacting data integrity, answer key questions surrounding current expectations, and provide an overview of the Application Integrity Policy.  Learn more about the Data Integrity Workshop and how to register.
