Success Comes with the Right Blend of Scientific Expertise
By: Dr Nimita Limaye, CEO, Nymro Clinical Consulting Services; Project Lead, KRI Wiki, Cyntegrity
Risk-based Monitoring (RBM) remains a troubled dream for a large portion of the industry today: something aspired to, but not easy to achieve. RBM implementation is a complex process and requires expertise. The challenges vary in nature: business challenges, such as vendor selection and the demonstration of ROI; operational challenges, related to the site, the on-site monitor and the central monitor; technology challenges, related to integrating data from multiple sources and providing customizable, scalable views; and challenges related to the core science of RBM – understanding how to identify and define Key Risk Indicators (KRIs), and the statistical science behind their derivation.
Survey (N=46): KRIs – The Three Biggest Challenges. A = Specifying a KRI; B = Selecting which one to use; C = Defining the right number of KRIs in a study; D = Defining the right actions; E = Interpreting a KRI; F = Finding the right calculation method; G = Connecting a KRI to risk; H = KRIs firing too often, leading to too many queries; I = None of them / I have not used KRIs yet.
Challenge Number 1: Finding the Right Method for the Calculation of KRIs
Not surprisingly, in a poll held during a recent webinar on ‘Best Practices in Defining KRIs for Clinical Trials’ by Dr Nimita Limaye (CEO, Nymro Clinical Consulting Services) and Dr Artem Andrianov (CEO, Cyntegrity), the most frequently reported challenge, cited by 22% of participants, was finding the right method for calculating KRIs. Addressing this requires the right combination of clinical and statistical expertise, and the latter is often lacking within organizations.
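To make the calculation question concrete, one common statistical approach – a minimal sketch here, not a method prescribed by the webinar – is to express a site-level KRI, such as the adverse-event reporting rate, as a z-score against the study-wide distribution, flagging sites that deviate beyond a threshold. The site data below is entirely hypothetical.

```python
from statistics import mean, stdev

def site_ae_rates(sites):
    """AEs reported per completed visit, for each site."""
    return {s: aes / visits for s, (aes, visits) in sites.items()}

def flag_outliers(rates, threshold=2.0):
    """Flag sites whose AE rate deviates from the study mean by more
    than `threshold` standard deviations (a simple z-score KRI)."""
    mu, sigma = mean(rates.values()), stdev(rates.values())
    return {s: (r - mu) / sigma for s, r in rates.items()
            if sigma > 0 and abs(r - mu) / sigma > threshold}

# Hypothetical study data: {site: (AE count, completed visits)}
sites = {"S01": (12, 100), "S02": (11, 100), "S03": (23, 200),
         "S04": (25, 200), "S05": (59, 500), "S06": (61, 500),
         "S07": (28, 250), "S08": (1, 100)}  # S08 under-reports

print(flag_outliers(site_ae_rates(sites)))  # only S08 is flagged
```

In practice the choice of threshold, and whether to use robust statistics rather than a plain z-score, is exactly the kind of decision that requires statistical expertise.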
Challenge Number 2: Selecting which KRIs to Use
Running neck and neck, coming in next, were the challenges related to the selection and the interpretation of KRIs (12–14% each). The right number of KRIs is often elusive. Go overboard, and the focus – and hence the efficiencies that RBM is expected to bring – is lost. On the other hand, missing critical KRIs can be significantly damaging, as the corresponding data points may never be assessed. ICH E6 (R2) stresses the importance of focusing on systematic errors. As per Pareto’s Law, 20% of errors could account for 80% of defects; by addressing the 20% of errors that consistently recur, one can have a significantly higher impact on the overall quality of the data.
- EarlyBird® platform displaying a site’s performance; colours indicate the severity level of an error, while longer intervals indicate systematic errors and short, intermittent intervals indicate random errors.
- The Type of Errors and Their Tolerance Levels Distribution
- The Process of Reduction of Systematic Errors
Challenge Number 3: The Interpretation of KRIs
Finally, you have your list of KRIs in place, your triggers are firing and you are reviewing a lot of interesting dashboards. The crucial piece here is knowing how to interpret them – how to correlate a firing KRI with actual risk impact. One needs to look at how a KRI has been configured, and which related KRIs need to be reviewed, before interpreting the outcome and taking any action.
For example, if no SAEs are reported at a site within the first month, as compared to other sites in the study, a trigger may be expected to fire. Prior to taking any action, one first needs to check whether any subjects were recruited at the site at all and, if so, how long ago. If only one subject had been recruited, say three days before the trigger fired, then it is only logical that no SAEs would have been reported at the site in such a short duration (depending, of course, on the therapeutic area). This should not result in a query or action regarding SAEs; one may, however, need to investigate why recruitment was delayed at the site.
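The guard logic in this example can be sketched as a precondition on the trigger: only fire the “no SAEs reported” KRI if the site has at least one subject with enough exposure time for SAEs to be plausible. The 30-day exposure window and the data layout are assumptions for illustration; in practice the window would be therapeutic-area specific.

```python
from datetime import date

def no_sae_trigger(site, today, min_exposure_days=30):
    """Fire the 'no SAEs reported' KRI only if at least one subject
    has been enrolled long enough for SAEs to be plausible.
    Illustrative guard logic; thresholds are assumptions."""
    exposed = [d for d in site["enrollment_dates"]
               if (today - d).days >= min_exposure_days]
    return site["sae_count"] == 0 and len(exposed) > 0

today = date(2018, 4, 1)
# One subject recruited three days ago: the trigger should NOT fire
site_a = {"sae_count": 0, "enrollment_dates": [date(2018, 3, 29)]}
# Subjects enrolled months ago, still no SAEs: the trigger SHOULD fire
site_b = {"sae_count": 0,
          "enrollment_dates": [date(2018, 1, 5), date(2018, 1, 20)]}
print(no_sae_trigger(site_a, today), no_sae_trigger(site_b, today))
```

Note that site_a suppressing the SAE trigger does not end the review: slow recruitment at that site may itself warrant a separate KRI.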
Another example could be that of the consistent underreporting of adverse events at a site as compared to other sites. Should this be considered as a leading indicator of fraud at the site (someone trying to suppress the data) or a lagging indicator, indicative of training issues at the site?
Thorough documentation of the correct way to calculate and interpret KRIs is very important; otherwise, different monitors may interpret the same signals differently. Adequate training on how to create KRI specifications accurately is equally important, and was reported as a common industry challenge by approximately 10% of those attending the webinar.
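As a sketch of what such documentation might capture, a KRI specification record could pair each indicator with its calculation, interpretation guidance and agreed actions, so every monitor reads the same signal the same way. All field names and values below are hypothetical, not an actual library schema.

```python
# A minimal, hypothetical KRI specification record. Fields illustrate
# what a shared specification library entry might document.
kri_spec = {
    "id": "KRI-AE-001",
    "name": "AE under-reporting",
    "calculation": "site AE-per-visit rate vs. study mean (z-score)",
    "threshold": "fire when z < -2 on two consecutive data loads",
    "interpretation": ("review site training records before treating "
                       "the signal as a potential fraud indicator"),
    "actions": ["verify site training", "schedule targeted review"],
}
print(kri_spec["id"], "-", kri_spec["name"])
```

Storing records like this in a shared library is what allows different central and on-site monitors to act on a firing trigger consistently.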
Technology solutions that can capture KRI libraries, with supporting calculations and interpretations, will ensure consistency of approach and hence enhance quality. Cyntegrity has launched a path-breaking KRI Wiki initiative, in which industry experts will share their views to help build a wiki of KRI specifications based on industry best practices, which will then be made accessible to the industry.
A blend of the right scientific expertise, along with the right systems and tools, will convert what is currently a pipe dream into the successful implementation of RBM.
Next, of course, is the ability to define the right actions when a trigger fires. These should be documented and captured within the system to avoid any ambiguity between different end users, such as on-site monitors and central monitors. Equally challenging is a scenario where the system does not recognize that a trigger which fired earlier has already been actioned, and keeps re-firing without reason with each data load. Upset investigators and wasted resources are often the outcome.
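The de-duplication problem described above can be sketched as simple set logic: suppress any alert whose (site, KRI) key has already been actioned, so repeated data loads do not re-raise it. The keys and state handling here are assumptions for illustration, not a vendor API.

```python
def alerts_to_raise(fired, actioned):
    """Return only alerts not yet actioned, so a fresh data load does
    not re-raise triggers a monitor has already dealt with.
    Illustrative sketch; (site, KRI) keys are an assumed convention."""
    return sorted(set(fired) - set(actioned))

# First data load fires two alerts; the monitor actions one of them.
fired = [("S01", "no_sae"), ("S02", "ae_underreporting")]
actioned = {("S01", "no_sae")}

# The next load re-fires both triggers, but only the unactioned
# alert should surface to the monitor.
print(alerts_to_raise(fired, actioned))
```

A production system would also need rules for re-opening an actioned alert when the underlying signal materially worsens, rather than suppressing it forever.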
With over 120 registrants and 46 attendees, our webinar on “KRIs in Clinical Trials – Best Practices” by Dr Nimita Limaye and Dr Artem Andrianov was an overwhelming success. We have scheduled a second iteration, with deeper insights, on Wednesday, April 25th, 2018.
- Furay Fay M, Eberhart C, Hinkley T, Blanchford M and Stevens E. A Structured Approach to Implementing a Risk-based Monitoring Model for Trial Conduct. Applied Clinical Trials, Dec 2014. http://www.appliedclinicaltrialsonline.com/structured-approach-implementing-risk-based-monitoring-model-trial-conduct
- Alsumidaie M, Widler B, Schenk J, Schiemann P and Andrianov A. RBM Guidance: Ten burning questions about risk based study management. Applied Clinical Trials, Jan 2015. http://www.appliedclinicaltrialsonline.com/rbm-guidance-document-ten-burning-questions-about-risk-based-study-management
- Limaye N and Andrianov A. The Emergence of a Risk Monitor: Preparing for the Future. 2016. https://cyntegrity.com/emergence-of-risk-monitor/
- Integrated Addendum To ICH E6 (R1): Guideline For Good Clinical Practice E6 (R2), Nov 2016.
- Zink R. Exploring the challenges, impacts and implications of risk-based monitoring. Clin. Invest. (Lond.), 2014; 4(9): 785–789. http://www.openaccessjournals.com/articles/exploring-the-challenges-impacts-and-implications-of-riskbased-monitoring.pdf