Leveraging Simulation to Patch a Clinical Trial Broken by COVID-19

SPRING 2020, THE EVIDENCE FORUM, WHITE PAPER

J. Jaime Caro, MDCM, FRCPC, FACP
Chief Scientist
Evidera

Introduction

The basic structure of a randomized comparative clinical trial is quite simple. A candidate population that meets the admissibility criteria is recruited. Each candidate who consents to participate is randomly allocated to an experimental treatment or to one or more “controls.” The participants are followed and data are collected until a pre-specified ending criterion is met. To ensure the integrity and usefulness of the trial, it is important that there be sufficient numbers of participants, that they be followed as specified without losses, and that all desired data be collected. Failure to meet these requirements increasingly threatens the informativeness of the trial, and if the losses are biased in some way, the validity of the trial can also be jeopardized. Medical research has become quite adept at meeting the design and operational challenges posed, but the COVID-19 pandemic has inflicted unexpected damage on ongoing trials. The enormous investments involved and the substantial adverse consequences of failing to gain the desired information from these trials put great pressure on our field to find ways of patching the broken trials. In this brief paper, we provide one novel solution to these problems: using simulation to rescue these studies and make up for the lost data.

What is the Problem?

COVID-19 impairs the process of randomizing people, following them over time, and collecting their data: participants find it more difficult to carry on with their study visits, study personnel grow more reluctant to carry out required activities, and, in the extreme, people are removed altogether if they become ill. For trials that are still recruiting, identification and enrollment of participants may be considerably impaired. Thus, trials are left with patients who exit early, incomplete scheduled data collection, and significant missing data; some trials are even losing the statistical power needed for the planned endpoints because there are fewer participants to randomize.

One Powerful Solution

The essence of the solution comes from understanding the purpose of the comparator arm in a clinical trial. The idea is that we want to compare what happens to participants in the experimental arm to what would have happened if they had been left alone without receiving the experimental intervention. The control group fulfills this purpose – it provides information on what happens to similar people who do not receive the intervention (but are otherwise observed in a similar way).

In the early days of clinical research, there was a need to actively collect these comparator data because that information was scarce or non-existent. Now, after many decades of research, a great deal is known about the course of most diseases given the standard of care interventions, and data continue to accumulate. While not as good as data obtained optimally in a contemporaneous clinical trial, the existing data can be leveraged to respond to the question: what would have happened to these patients if they had completed the trial on the given comparator arm?

One way to accomplish this, which has been gaining credibility, is to find a suitable dataset, identify patients who would have been admissible to the trial in question, extract their data, and analyze their recorded outcomes. Various statistical techniques are then used to improve the likelihood that the selected patients do indeed reflect those who would be in the trial’s control arm. This method, dubbed a “synthetic control arm,” increasingly leverages the real-world data collected for other purposes. Regardless of the data employed, however, these types of studies are restricted to the “matching” patients found in the dataset and are limited in terms of controlling for the differences between the dataset and the trial.
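
To make the matching step concrete, here is a minimal Python sketch of how a synthetic control arm might be assembled by propensity-score matching. The covariates (age, a biomarker), the fabricated datasets, and the greedy 1:1 matching rule are illustrative assumptions, not the method of any specific study.

```python
# Minimal sketch of synthetic-control matching on hypothetical data.
# The covariates, the fabricated datasets, and the greedy 1:1
# matching rule are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated baseline covariates (age, biomarker): trial participants
# vs. a larger real-world pool from which controls will be drawn.
trial = rng.normal([65.0, 1.2], [8.0, 0.3], size=(100, 2))
pool = rng.normal([70.0, 1.0], [12.0, 0.4], size=(2000, 2))

# Propensity model: probability of being a trial participant.
X = np.vstack([trial, pool])
y = np.r_[np.ones(len(trial)), np.zeros(len(pool))]
ps = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
ps_trial, ps_pool = ps[:len(trial)], ps[len(trial):]

# Greedy 1:1 nearest-neighbor matching on the propensity score.
available = np.ones(len(pool), dtype=bool)
matched = []
for p in ps_trial:
    dist = np.abs(ps_pool - p)
    dist[~available] = np.inf   # each pool patient used at most once
    j = int(np.argmin(dist))
    available[j] = False
    matched.append(j)

synthetic_control = pool[matched]  # stands in for the control arm
print("trial means:  ", trial.mean(axis=0))
print("control means:", synthetic_control.mean(axis=0))
```

Note how the matching is confined to patients actually present in the pool; covariates that differ systematically between the dataset and the trial can be balanced only imperfectly, which is precisely the limitation the simulation approach aims to overcome.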

A novel alternative that can overcome these limitations is to use a simulator to recreate the missing information.

What is a Simulator?

All of us, particularly younger generations, are very familiar with simulators, even if not specifically with a disease simulator. Many of the most popular video games, for example, are simulators. In the context of the problems experienced by Boeing, we have heard much about flight simulators and their use in training pilots. Even in medicine, much of the “hands-on” training has been shifted from having students and residents practice directly on patients to “dummies” that don’t feel pain and can be reused as many times as necessary.

These are all physical simulators – they try to replicate physical environments, even if they are imaginary ones as in the video games. To patch the broken trials, however, we need something a bit different – more like the weather simulators that predict the pathways a hurricane may take. These are mathematical models that compute the possible trajectories, along with their likelihoods. Although they make predictions about a natural phenomenon, they are not physical simulators – they do not create representations of the ocean, the shoreline, and so on, but rather use a large number of linked equations that can take inputs like water temperature, barometric pressure, and so on to derive predictions of the trajectory of the hurricane.

Our disease simulators, likewise, are mathematical structures that provide detailed predictions of the disease trajectories – of what will happen – for a particular patient profile under a given set of circumstances, including standard interventions, and of how these trajectories change over time. Interlinked equations are at the core of the simulation, and these are implemented in a framework that enables modifying the inputs and exploring their effects. With this tool, it is possible to simulate what would have happened to patients in the control arms had they completed the trial. Indeed, real patients enrolling in the trial can now be allocated preferentially to the experimental arm, maximizing the information obtained there (where simulation cannot reach), while the now “missing” control patients are generated via simulation – possibly even dispensing with further controls altogether, approaching a single-arm study.

How is the Simulator Constructed?

The key to building a good simulator is a detailed understanding of the disease trajectory and its predictors. This requires expert clinical knowledge and a good grasp of the literature but, most importantly, sufficient data to develop the core equations for that disease. The data sources can be many and varied: repositories of real-world evidence, previous clinical trials in the therapeutic area, registries and other cohort studies, and meta-analyses. There is no reason to limit the simulator to any particular type of data or single source – the more data the better.

These data are used to develop the predictive equations that capture the disease trajectory. These are typically parametric equations that describe what is happening over time in relation to the patient profiles, environment, behaviors, interventions, and anything else that may be predictive. The equations can be quite complex, and their development requires expert statisticians experienced in this type of work. It is very important to avoid simplification for its own sake.
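
To illustrate what one such core equation might look like, the following Python sketch implements a Weibull time-to-event equation whose scale depends log-linearly on the patient profile. The covariates and coefficient values are illustrative placeholders, not fitted estimates from any real dataset.

```python
# Sketch of one core predictive equation: a Weibull time-to-event
# model whose scale depends log-linearly on an (assumed) patient
# profile. Coefficients are illustrative placeholders, not fitted.
import numpy as np

SHAPE = 1.4                      # Weibull shape (assumed)
BETA = {"intercept": 8.0,        # log-scale intercept (assumed)
        "age": -0.02,            # older patients progress faster
        "biomarker": -0.50}      # higher biomarker, faster progression

def weibull_scale(profile):
    """Patient-specific Weibull scale from the log-linear equation."""
    lp = (BETA["intercept"]
          + BETA["age"] * profile["age"]
          + BETA["biomarker"] * profile["biomarker"])
    return np.exp(lp)

def sample_event_time(profile, rng):
    """Draw a time to disease progression for one simulated patient."""
    u = rng.uniform()
    # Invert the Weibull survival function S(t) = exp(-(t/scale)^shape).
    return weibull_scale(profile) * (-np.log(u)) ** (1.0 / SHAPE)

rng = np.random.default_rng(1)
patient = {"age": 68, "biomarker": 1.3}
print([round(sample_event_time(patient, rng), 1) for _ in range(5)])
```

In practice, many such interlinked equations – for progression, adverse events, dropout, mortality, and so on – would be estimated from the assembled data and allowed to feed into one another.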

Once the equations have been developed, they are deployed in a framework that integrates them into a computational structure with modifiable inputs and reporting of the required outputs. A flexible and easy-to-use approach is Discretely Integrated Condition Event (DICE) simulation. In such a model, the things that can happen are represented as tabulated Events, and all the information, including the equations, is stored in Conditions.1-3 Instructional materials and examples can be downloaded from https://www.evidera.com/dice.
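
The following toy Python sketch conveys the DICE idea – tabulated Events processed in time order, with everything else stored in Conditions – but it is not the Evidera implementation; the event types and follow-up schedule are assumptions for illustration.

```python
# Minimal sketch of the DICE idea: tabulated Events with times, and
# Conditions that store all values. A toy illustration only.
import heapq

def run_dice_patient(progression_time, max_followup, visit_interval):
    # Conditions: everything the model knows about the patient.
    conditions = {"time": 0.0, "progressed": False, "visits": 0}

    # Events: (time, name) tuples held in a time-ordered queue.
    events = [(progression_time, "progression"), (max_followup, "end")]
    t = visit_interval
    while t < max_followup:
        events.append((t, "visit"))
        t += visit_interval
    heapq.heapify(events)

    # Discretely integrate: jump from Event to Event, updating Conditions.
    while events:
        time, name = heapq.heappop(events)
        conditions["time"] = time
        if name == "progression":
            conditions["progressed"] = True
        elif name == "visit" and not conditions["progressed"]:
            conditions["visits"] += 1
        elif name == "end":
            break
    return conditions

print(run_dice_patient(progression_time=14.0, max_followup=24.0,
                       visit_interval=3.0))
```

In a full DICE model, event handlers would themselves reschedule Events and update many more Conditions, but the control flow – stepping from Event to Event while updating Conditions – is the same.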

The resulting disease simulator must be extensively validated. Just like the hurricane predictors, or any weather model, the simulator is valuable only if it makes reasonably accurate predictions. With the weather, the forecasts are validated soon enough, but with disease simulators it is necessary to actively validate the predictions because often the predicted course will never actually be observed. This is especially true when patching a broken trial, as the whole point is to recreate what would have happened but no longer will. Disease simulators are validated by seeking other studies and datasets and attempting to predict, on the basis only of the starting circumstances, what happened. Often this is done employing the same data that were used to develop the core equations, but this provides only partial, dependent validation. Ideally, the validation extends to studies that were not used in constructing the simulator. As the simulator’s predictions may drive serious, expensive decisions, it is crucial to ensure that it is predicting accurately.
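
As a concrete example of one such check, the sketch below compares predicted proportions event-free at a few landmark times against observed values from an external study. All numbers here are fabricated placeholders standing in for real simulator output and real study data.

```python
# Sketch of one validation check: compare predicted and observed
# proportions event-free at landmark times. All values fabricated.
import numpy as np

rng = np.random.default_rng(2)

# Placeholder draws standing in for simulated event times produced by
# the disease simulator for the external study's baseline profile.
simulated = rng.weibull(1.4, size=5000) * 20.0

# Observed proportions event-free at 6, 12, and 18 months in the
# external study (assumed values for illustration).
landmarks = np.array([6.0, 12.0, 18.0])
observed = np.array([0.85, 0.62, 0.44])

predicted = np.array([(simulated > t).mean() for t in landmarks])
for t, o, p in zip(landmarks, observed, predicted):
    print(f"month {t:4.0f}: observed {o:.2f} vs predicted {p:.2f}")
```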

How is the Simulator Used?

Once the simulator is validated, we can start repairing the gaps in the broken trial. Patients with missing data or shortened follow-up in the control arm and those who are still to be randomized can now be recreated in the simulator, rescuing much of the sample size and enabling conclusions to be drawn from the broken trial. To do this, the user does not need to be a simulation expert as the simulator is implemented in Microsoft Excel®. What is required is a good understanding of the disease, the broken trial, the product indication, and the patient profiles enrolled in the trial. The user works with the simulator through a graphical interface where they can enter their various inputs, specify scenarios, and incorporate uncertainty. The interface sends the entries to the DICE engine where all the logic, equations, and analytics take place. After executing a simulation, the results are output to the interface. There is no need for the user to understand the workings of the simulator, but the models are very transparent and can easily be examined if there is interest.
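
A minimal sketch of this user-facing pattern, under assumed scenario fields and parameter distributions: a scenario is simply a set of inputs, and uncertainty is propagated by re-running the simulation with equation parameters drawn from their estimated distributions.

```python
# Sketch of the user-facing loop: a scenario is a set of inputs, and
# parameter uncertainty is handled by replicate runs with parameters
# drawn from (assumed) estimation distributions. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)

scenario = {"n_patients": 200,       # control patients to recreate
            "mean_age": 67.0,
            "followup_months": 24.0}

def simulate_arm(params, scenario, rng):
    """One replicate: median time to progression in a simulated arm."""
    ages = rng.normal(scenario["mean_age"], 9.0, scenario["n_patients"])
    scale = np.exp(params["intercept"] + params["age_coef"] * ages)
    times = scale * rng.weibull(params["shape"], scenario["n_patients"])
    return np.median(np.minimum(times, scenario["followup_months"]))

# Draw equation parameters from their assumed estimation uncertainty.
replicates = []
for _ in range(500):
    params = {"shape": 1.4,
              "intercept": rng.normal(4.0, 0.10),
              "age_coef": rng.normal(-0.02, 0.005)}
    replicates.append(simulate_arm(params, scenario, rng))

lo, hi = np.percentile(replicates, [2.5, 97.5])
print(f"median progression time: 95% interval {lo:.1f}-{hi:.1f} months")
```

The graphical interface described above plays the role of the scenario dictionary here: it collects the inputs and passes them to the engine, which runs the replicates and returns the results.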

Although fixing broken trials has not been a major objective of disease simulators (mainly because our field tries very hard not to have broken trials), simulation has been used to create simulated control arms for single-arm studies and the results have been looked on favorably by regulatory agencies.4 In addition, these simulators are being used to design new trials and extend the results to other populations or contexts.

Advantages and Limitations

Compared to synthetic control arms, the simulator can leverage data from many sources, incorporating as many predictors of the trajectories as possible. These are not only patient characteristics, but also features of the study protocol, context, environment, country, and so on. Aspects particular to the broken trial, such as discontinuation, visit schedules, and testing frequency, can be simulated. This frees up the trial to redirect its efforts to the experimental arm and maximize power.

Although the focus here is on the clinical trial primary endpoint, and possibly some of the secondary ones, the simulator can produce any number of outputs, including other health aspects, economic predictions, quality of life outcomes, and so on, and over longer periods than may be necessary for the trial itself.

The simulator is entirely dependent on the quality of the linked equations, and, thus, on the data used to develop them. If those data are very messy and incomplete, then the simulator will not yield good predictions. Beyond the data, the construction of the simulator itself is straightforward and can happen very quickly.

One aspect that can be difficult to incorporate into a simulator is the placebo effect and the related Hawthorne effects. Humans respond differently when they know they are under observation or they think they are receiving effective treatment. These responses are unlikely to be reflected in data collected routinely for other purposes but can be incorporated using information from previous trials. In any case, the validation against other trials can assess the extent to which unexpected effects occur in prospective studies and whether the simulator is capturing these.

While the simulator can patch the control arms, it is not able to simulate the experimental arm. That is precisely the knowledge the trial is supposed to generate, and true in silico testing of products remains a remote hope.

There is also a psychological challenge to deploying disease simulators. While other fields have been using them for decades, ours has been slow to adopt simulation. For many people, there is a reluctance to jump into a new method; they worry that time and money invested in this approach may be wasted. Will anybody buy it? Will anybody believe it? The COVID-19 crisis, however, is forcing us to consider novel approaches to fix unexpected problems that have few other solutions.

Conclusion

Clearly, the COVID-19 era is threatening the conduct and completion of clinical trials. Simulation is a very powerful tool that can help overcome these difficulties – it helps fix the broken studies. Judicious leveraging of these novel approaches can answer the question: what do we predict would have happened to these patients if they had completed the standard of care or comparator arm? We need to accelerate the deployment of these unique – and possibly industry-changing – strategies.

References

1. Caro JJ. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics. Pharmacoeconomics. 2016;34(7):665-72. doi: 10.1007/s40273-016-0394-z.
2. Caro JJ, Moller J. Adding Events to a Markov Model Using DICE Simulation. Med Decis Making. 2018;38(2):235-245. doi: 10.1177/0272989X17715636.
3. Moller J, Davis S, Stevenson M, Caro JJ. Validation of a DICE Simulation Against a Discrete Event Simulation Implemented Entirely in Code. Pharmacoeconomics. 2017;35(10):1103-1109. doi: 10.1007/s40273-017-0534-0.
4. Caro JJ, Ishak KJ. No Head-to-Head Trial? Simulate the Missing Arms. Pharmacoeconomics. 2010;28(10):957-967.