Case Study 1. Technical Data Errors During Data Capture
Company N., working with Contract Research Organization (CRO) C., outsourced the technical part of the clinical logistics and data capture for a clinical study. The main goal of the study was to obtain approval for a new product aimed at relieving asthma symptoms. The new medicine was designed on the basis of an already existing analog and was expected to have a longer working cycle than its predecessor. The clinical protocol was designed together with the CRO and included several visits, with ECG and spirometry measurements at each visit. The study began without serious trouble, with only a small delay of two weeks caused by difficulties in organizing the trainings. Sites were selected from all over the world and included several sites in China and Russia. From the very beginning, these sites showed strange behavior: it seemed that the study nurses did not really understand how to capture qualitative data. The measurements from these sites did not meet the quality standards. Sometimes the data arrived corrupted after a hard restart of the system, partially it contained missing values, and sometimes it looked very suspicious; for example, spirometry measurements contained many trials that had been switched off, as if somebody was trying to meet the ATS criteria (e.g. repeatability) and produce an acceptable measurement without sufficient repetition and patient effort. These sites delivered quite a large share of the measurements and in the end produced almost 15% of all measurements in the clinical trial.
The broken data was rapidly fixed by the organization responsible for the data capture, but the missing elements remained as defects. The study, which cost one-third of a billion dollars, was rejected by the FDA because the p-value did not reach the required significance level (0.05).
How could this risk have been identified earlier?
The “blurriness” and manipulative character of the captured data contributed to the failure of the whole study: the p-value was right at the limit of the acceptance criterion.
The technical defects could have been identified at the beginning of the study through simple statistical validity checks: checking the validity of outliers and the amount of missing data. The number of missing values per site would give a rapid picture of what is happening there. The sites producing blurry data could have received repeated training and more intensive monitoring, and the several sites that manipulated the spirometry measurements should have been penalized.
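A per-site missing-value check of this kind is straightforward to script. The sketch below is a minimal illustration, assuming records arrive as dictionaries with a site identifier and measurement fields; the field names (`site`, `fev1`, `fvc`) and the 10% threshold are hypothetical, not taken from the actual study.

```python
# Minimal sketch of a per-site missing-data check (hypothetical field
# names; real study data would come from the EDC export).
from collections import defaultdict

def missing_rate_per_site(records):
    """records: list of dicts with 'site' plus measurement fields.
    Returns {site: fraction of measurement values that are missing}."""
    total = defaultdict(int)
    missing = defaultdict(int)
    for rec in records:
        site = rec["site"]
        for key, value in rec.items():
            if key == "site":
                continue
            total[site] += 1
            if value is None:
                missing[site] += 1
    return {s: missing[s] / total[s] for s in total}

def flag_sites(records, threshold=0.10):
    """Sites whose missing-value rate exceeds the threshold become
    candidates for re-training and intensified monitoring."""
    rates = missing_rate_per_site(records)
    return sorted(s for s, r in rates.items() if r > threshold)
```

Run monthly on the accumulated data, such a report would have highlighted the problematic sites long before the final analysis.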
Case Study 2. Technical Error – Database Timeouts -> Mixing of Patient Information
The clinical study by pharmaceutical company P. was a real monster in terms of the clinical data captured: it included ca. 40 thousand patients worldwide, and the organization responsible for data capture was poorly prepared for such a challenge. The database server was running at its limits, importing the data and delivering it for reports, a web portal, and other services. The clinical data review service was configured to review ECG and spirometry data. For spirometry, a review of the best test was undertaken: each measurement had to be opened in a special program, reviewed by a specialist, and saved back into the database. During this procedure, the database server produced timeouts from time to time due to the high load. The review program did not expect such timeouts and contained a bug: after a timeout it took the context of the next measurement in the list and saved the reviewed parameters into that new context, i.e. into a new patient. Parameters from different patients were thus mixed up in the database. When this was identified, fixing the data would have been very costly, so the data was exported as it was.
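The underlying defect is a classic stale-context bug: a write that does not re-verify which record it is about to modify. A defensive save routine can rule this class of error out entirely. The sketch below is an assumption-laden illustration, not the actual review program's API; `Database`, `current_record_id`, and the retry loop are all hypothetical.

```python
# Hedged sketch: guarding a save against a stale context after a timeout.
# All names (db interface, record ids) are illustrative assumptions.
class StaleContextError(Exception):
    pass

def save_review(db, measurement_id, reviewed_params, retries=3):
    """Re-check the current record before every save attempt and refuse
    to write if the context no longer matches the reviewed measurement."""
    for attempt in range(retries):
        try:
            current = db.current_record_id()
            if current != measurement_id:
                # The cursor moved on (e.g. after a timeout) -- saving now
                # would write the parameters into another patient's record.
                raise StaleContextError(
                    f"expected {measurement_id}, got {current}")
            db.save(measurement_id, reviewed_params)
            return True
        except TimeoutError:
            continue  # retry; the context check above runs again
    return False
```

With such a guard, a timeout would at worst delay a save; it could never silently cross-contaminate patient records.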
The data defects were not noticed by the pharma company's review, although, had the problem been identified, it would have resulted in a pushback of the application and probably in a repetition of the clinical trial.
How could this risk have been identified earlier?
Early statistical identification: comparing the dispersion of parameters across the series of visits could easily have revealed that something was going wrong. Based on early detection of the data defects, the bug could have been found early in the study, before it seriously affected the clinical data or introduced a new risk.
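Mixed-up patient records typically inflate the within-visit variability of a physiological parameter, so an abrupt jump in dispersion between visits is a red flag. A minimal sketch of such a check, using only the standard library (the factor-of-two threshold and the data layout are illustrative assumptions):

```python
# Dispersion check across visits: flag visits whose standard deviation
# jumps far above the median dispersion of all visits.
from statistics import stdev

def visit_dispersion(values_by_visit):
    """values_by_visit: {visit_number: [parameter values]}.
    Returns {visit_number: standard deviation} (visits with >1 value)."""
    return {v: stdev(vals) for v, vals in values_by_visit.items()
            if len(vals) > 1}

def suspicious_visits(values_by_visit, factor=2.0):
    """Flag visits whose dispersion exceeds `factor` times the median
    dispersion over all visits."""
    disp = visit_dispersion(values_by_visit)
    ref = sorted(disp.values())[len(disp) // 2]  # median dispersion
    return sorted(v for v, s in disp.items() if s > factor * ref)
```

For a parameter such as FEV1, which is fairly stable per patient, a visit where records from different patients were mixed would stand out immediately in this report.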
Case Study 3. Technical Error – Export Error
The clinical trial included many patients and was innovative drug research for company S. The treatment effect was recognizable in the comparison of the randomized groups of patients and was so promising that the company expected to get FDA approval already in the next year. The innovative approach was of high interest to the scientific community, and the company had already prepared several publications about the new molecule, which bound to the target protein very well and produced a well-recognizable curative effect in patients. When all documents were submitted to the FDA, the data check revealed an astonishing deficit in the clinical data: about 40% of the clinical data was missing from the final export.
The company received a major finding from the FDA audit concerning this clinical study and had to retract all publications made in the recent years. The error in the export was corrected and the data was exported once again. The pharmaceutical company incurred major costs and a loss of reputation that echoed on the market for several years. The company responsible for the data capture and export paid penalties.
What could have been done to minimize and identify the risk?
The export risk was easy to prevent. The regular export information, which is available after the first month of the clinical trial, should have been monitored from the integrity point of view: how many patients are enrolled, and how many measurements are expected? The number of missing values should not exceed a certain percentage.