Musical Harmony in Clinical Trial Conduct
The discussion continues: our Chief Scientific Officer, Dr. Johann Proeve, responds to an interesting but head-scratching topic. In “What Freddie Mercury Can Teach Us About Clinical Trials”, Milton Packer shows how to harmonize two seemingly contradictory studies:
“The principles of harmony are especially applicable to the interpretation of clinical trials in medicine. When several clinical trials have evaluated the same intervention, their results are rarely perfectly aligned. Often, certain findings in one trial are entirely in conflict with observations of the same effect in other trials. When the results of trials collide, which trial is correct?”
Since 1962, manufacturers of drug products have been required to establish a drug’s effectiveness with “substantial evidence”. Even without it being mandated, health authorities such as the FDA (USA) and EMA (European Union) typically expect pharma and biotech companies to conduct two well-controlled, randomized (double-blind) Phase 3 trials demonstrating efficacy and safety before approving a new compound. A New Drug Application (NDA) should include enough information – a “totality of evidence” approach – for the regulatory bodies to determine whether a new drug is safe and effective. Even today, this requirement gives rise to discussions regarding the quantity and quality of the evidence needed to establish effectiveness.
The FDA considers the totality of evidence when evaluating the safety and effectiveness of new drugs. This phrase reflects the nature of drug development, with each successive piece of data building on prior data to provide the quantity and quality of evidence needed to adequately assess risks and benefits. Data from a study are always assessed within the context of other available data, never in isolation, and data from different studies are considered based on the reliability of a given study result.
3 Aspects of Clinical Trial Conduct that Affect the Totality of Evidence
The goal of a well-controlled experiment is that it can be repeated many times with the same or statistically similar results. However, even two well-controlled clinical trials can differ: slightly different patient populations are enrolled; slightly different inclusion and exclusion criteria are applied (because the company learned from the first trial that a certain parameter was skewed); different countries are included (e.g. because 300 patients from China and 200 from Russia are needed to obtain approval in those countries); different visit intervals are used; different methods and standards of care apply in the various countries; and so on.
The parameters listed above can be controlled to a certain extent, e.g. by involving Russia and China in both pivotal trials. The permitted concomitant medications and the accepted underlying diseases are controlled by a well-thought-through protocol, and the methods used are standardized as much as possible, e.g. by using a central lab or central ECG reading for both studies.
These measures already remove artifacts that would make life difficult and ensure – at least to a certain extent – that a common outcome of the two studies can be expected.
There are, however, other factors that could influence the outcome of such studies, one of them being the way data are captured in the trials. If, for example, adverse events in one trial are reported by patients in an unsolicited way (“Did you encounter any adverse events?”), while the other trial uses a checklist of adverse events that already surfaced in, e.g., the Phase 2 trials or the first Phase 3 trial, then many more adverse events are likely to be reported in the second trial, and the two studies are hardly comparable.
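To make the difference concrete, here is a minimal sketch in Python. The reporting probabilities and patient count are made-up illustrative numbers, not figures from any real trial; the point is simply that prompting with a checklist raises the per-patient chance of a report, and the counts diverge accordingly:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

N = 300  # hypothetical number of patients per trial

# Assumed per-patient probabilities of reporting at least one adverse
# event, purely illustrative: an open-ended question elicits fewer
# reports than a symptom checklist that prompts the patient.
P_REPORT_UNSOLICITED = 0.15
P_REPORT_CHECKLIST = 0.45

def count_reported(n_patients, p_report):
    """Count patients reporting at least one adverse event."""
    return sum(random.random() < p_report for _ in range(n_patients))

unsolicited = count_reported(N, P_REPORT_UNSOLICITED)
checklist = count_reported(N, P_REPORT_CHECKLIST)

print(f"Unsolicited reporting: {unsolicited}/{N} patients with AEs")
print(f"Checklist reporting:   {checklist}/{N} patients with AEs")
```

With two trials capturing adverse events this differently, the raw safety tables would look incompatible even if the drug behaved identically in both.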
Another example is the snapshot assessment of signs and symptoms of a disease, such as coughing, wheezing, dizziness, vertigo, runny nose, or shortness of breath, versus a comparison to baseline at the post-baseline visits. The two ways of capturing the data will yield different outcomes for the drug, even in the same patient population, due to recall bias.
A third example is testing potentially eligible patients for their response to the study drug. In a screening phase, patients are given the drug to see whether they respond: responders are randomized, while non-responders cannot be enrolled. If the two studies use different approaches for patient selection, the results will be very different, which should not be a surprise.
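A toy simulation illustrates why such a screening (enrichment) phase changes the result. All numbers here are hypothetical assumptions (a 40% responder fraction, made-up effect sizes, an arbitrary screening threshold), chosen only to show the mechanism:

```python
import random

random.seed(1)  # fixed seed for reproducibility

N = 1000  # hypothetical pool of candidate patients

def true_effect():
    """Assumed individual drug effect: 40% responders with a large
    effect, 60% with little effect (illustrative mixture)."""
    if random.random() < 0.4:
        return random.gauss(10.0, 2.0)  # responders
    return random.gauss(1.0, 2.0)       # non-responders

patients = [true_effect() for _ in range(N)]

# All-comers design: enroll everyone who is eligible.
all_comers_mean = sum(patients) / len(patients)

# Enrichment design: a screening phase keeps only patients who
# respond (here: observed effect above an arbitrary threshold of 5.0).
enriched = [e for e in patients if e > 5.0]
enriched_mean = sum(enriched) / len(enriched)

print(f"All-comers mean effect: {all_comers_mean:.1f}")
print(f"Enriched mean effect:   {enriched_mean:.1f}")
```

The enriched study reports a much larger average effect than the all-comers study, even though both drew from exactly the same patient population.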
There are many more examples of factors influencing the outcome of a trial. A sponsor company should therefore thoroughly evaluate how to conduct the two studies in order to avoid late-stage surprises or – as mentioned in Milton Packer’s discussion – endless attempts to “slice and dice” the data until (hopefully) a ‘useful’ outcome can be derived.
The above two categories relate to planning two clinical trials that are supposed to produce comparable outcomes. There is, however, a third category to consider: even when all the planning has been done properly, the trials can still produce different outcomes. That could be due to poor study conduct and a lack of data quality.
The worst case is probably a randomization that did not work, because the IVRS (interactive voice response system) failed or because the treating physicians randomized patients based on the severity of the disease to be treated. If a trial is randomized but not double-blind, one frequently sees such biased randomizations, with one treatment arm containing fewer severely ill patients than the other. Once such a skewed distribution occurs in a trial, it cannot be fixed afterwards. Therefore, measures must be implemented to avoid such randomization biases.
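One standard safeguard against this kind of imbalance is stratified permuted-block randomization, which keeps severely ill patients evenly split across arms regardless of enrollment order. A minimal sketch, with a hypothetical cohort and block size chosen for illustration:

```python
import random

random.seed(7)  # fixed seed for reproducibility

# Hypothetical cohort: each patient tagged with a disease severity.
patients = ["mild" if random.random() < 0.5 else "severe"
            for _ in range(200)]

def permuted_block(block_size=4):
    """One permuted block with equal A/B allocation."""
    block = ["A", "B"] * (block_size // 2)
    random.shuffle(block)
    return block

# Stratified permuted-block randomization: a separate block stream
# per severity stratum, so severe patients stay balanced across arms.
streams = {"mild": [], "severe": []}
assignment = {}
for i, severity in enumerate(patients):
    if not streams[severity]:
        streams[severity] = permuted_block()
    assignment[i] = streams[severity].pop()

severe_a = sum(1 for i, s in enumerate(patients)
               if s == "severe" and assignment[i] == "A")
severe_b = sum(1 for i, s in enumerate(patients)
               if s == "severe" and assignment[i] == "B")
print(f"Severe patients: arm A = {severe_a}, arm B = {severe_b}")
```

Because allocation is forced to balance within every block of four per stratum, the arms can never drift apart in severity by more than a partial block, whereas a physician picking arms by gut feeling can skew them arbitrarily.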
A second example is an investigator-driven assessment of, e.g., tumor size versus a tumor assessment by an independent, blinded adjudication committee. The latter will likely provide less biased results, and this must therefore be handled the same way in both studies.
Thirdly, if adherence to the protocol is stricter in one study than in the other, the outcomes of the two may be very different. For example, one study may have more patients lost to follow-up and/or withdrawing informed consent than the other, in which follow-up and the tracking down of outcomes – even for patients lost to follow-up – are performed more thoroughly.
As with the planning, so with the conduct: there are many more examples of things that could differ between two trials.