News / TRIbune Newsletter

The Emergence of Artificial Intelligence/Machine Learning Tools to Enhance Risk Management in Clinical Trials

November 2021

Artificial Intelligence

In an era in which the size and complexity of clinical trials continue to increase, compounded by the unprecedented challenges of the COVID-19 pandemic, it has become essential to modernize risk management strategies to ensure patient safety and trial data accuracy. Risk management of clinical trials comprises a series of processes to optimize error detection, which can be incorporated into multiple aspects of a clinical trial, including study development, patient recruitment, site monitoring, and data analysis. A successful risk management implementation can predict, control, and prevent possible areas of risk that can impact a clinical trial.1 The emergence of technological innovations in clinical research, including artificial intelligence (AI), has enhanced risk management systems by deriving critical insights from vast clinical data, ultimately improving efficiency, accuracy, and patient safety.2

The key components of a risk management framework include:

  • Risk identification and assessment

  • Implementation of risk controls3

The cross-functional risk assessment needs to be holistic in approach, including information on potential issues and critical data points across the entire trial lifecycle. With increased emphasis on trial oversight in the clinical research landscape, there is a need to be able to proactively identify and assess risks, trends, and outliers in a timely fashion.4

Understanding AI

AI refers to systems that can receive inputs from the environment, interpret and learn from those inputs, and perform tasks based on what they have learned, improving through experience. While AI is a machine simulation of human intelligence processes, it can reveal complex patterns beyond human ability. AI encompasses a variety of applications including machine learning (ML), deep learning (DL), and natural language processing (NLP).2

  • Machine learning. Data analytical algorithms and statistical models are used to find patterns in data, with the goal of making predictions. ML is broadly categorized into supervised ML, in which the algorithm is trained on a labeled dataset so its accuracy can be evaluated, and unsupervised ML, in which the algorithm tries to derive insights from unlabeled data by extracting features and patterns on its own.5

  • Deep learning. A sub-field of ML based on artificial multi-layer neural networks that analyze and extract information from large data sets to predict outcomes. DL automates feature extraction, allowing for larger, unstructured data sets. The algorithm then finds patterns, “learns” what information produces ideal results, and optimizes future searches.

  • Natural language processing. Technology that relies on ML to extract meaning from textual information or natural language data. Optical character recognition (OCR) enables automatic text extraction by recognizing patterns and converting images of printed or handwritten text into a machine-readable format.5
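The supervised vs. unsupervised distinction above can be illustrated with a minimal scikit-learn sketch. The dataset and model choices here are purely illustrative, not drawn from the article:

```python
# Minimal sketch of supervised vs. unsupervised ML with synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Labeled data: features X plus known outcomes y.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Supervised ML: the model is trained against the labels y,
# so its accuracy can be evaluated directly.
clf = LogisticRegression(max_iter=1000).fit(X, y)
train_accuracy = clf.score(X, y)

# Unsupervised ML: the same features, but no labels -- the algorithm
# groups records purely from structure it finds in the data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(f"supervised training accuracy: {train_accuracy:.2f}")
print("unsupervised cluster sizes:", [int((clusters == c).sum()) for c in (0, 1)])
```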

Applications of AI to Reduce Risk and Increase Patient Safety in Clinical Trials

AI tools are delivering a wide range of applications to reduce risk and improve the quality of clinical trials. These applications include optimizing clinical study designs, patient recruitment and enrollment, site selection, patient monitoring, and signal detection. The AI initiatives in risk management systems deliver significant time and cost savings by providing rapid insights that facilitate human decision making.6

Predictive Analytics in Assessing Protocols and Clinical Trial Sites

Predictive analytics is the practice of analyzing and extracting information from existing data sets to discover patterns and predict outcomes and trends. Specifically for risk management, it involves utilizing ML technology to forecast trends and generate predictive scores for clinical trial metrics. Advanced analytics provide site rankings for a holistic risk assessment of how sites are performing against the metrics that matter to the study. The composite site rankings reveal high-risk sites, Key Risk Indicators (KRIs), and potential outliers. This can be used to assess site performance and predict which sites may have recruitment issues.7 The statistical algorithms are also used to identify subjects at risk by proactively forecasting KRIs and creating alerts when they cross specified thresholds. With insight into which patients may be at increased risk for a serious adverse event (SAE), issues can be avoided by determining an appropriate course of action faster. By performing a risk assessment through ML, it is possible to identify clinical trial sites with the highest recruitment potential by mapping patient populations, which can reduce the risk of under-enrollment.8
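The threshold-and-alert pattern described above can be sketched in a few lines of pandas. Site names, KRI values, and thresholds below are all invented for illustration:

```python
# Hedged sketch: flagging sites whose Key Risk Indicators (KRIs) cross
# predefined thresholds. All values here are assumed, not real trial data.
import pandas as pd

kris = pd.DataFrame({
    "site": ["Site A", "Site B", "Site C", "Site D"],
    "early_discontinuation_rate": [0.05, 0.32, 0.10, 0.41],
    "query_rate_per_subject": [1.2, 4.8, 0.9, 5.5],
})

# Thresholds above which a KRI generates an alert (assumed values).
thresholds = {"early_discontinuation_rate": 0.25, "query_rate_per_subject": 4.0}

# A simple composite risk score: number of KRIs in breach per site.
for kri, limit in thresholds.items():
    kris[f"{kri}_alert"] = kris[kri] > limit
kris["alerts"] = kris[[f"{k}_alert" for k in thresholds]].sum(axis=1)

# Rank sites by number of breached KRIs, highest risk first.
ranked = kris.sort_values("alerts", ascending=False)
high_risk = ranked.loc[ranked["alerts"] > 0, "site"].tolist()
print(sorted(high_risk))  # → ['Site B', 'Site D']
```

In practice the composite score would weight KRIs and incorporate forecasts rather than a simple breach count, but the flag-and-rank flow is the same.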

Early iterations of risk management strategies focused on a single parameter to identify issues, which led to false positive results. If a risk indicator shows that 80% of a site's subjects discontinued trial treatment prematurely but the site has only 5 patients, it should not be treated as the same level of risk (outlier) as a site with 100 patients and the same 80% discontinuation rate. Advanced analytical models with ML take a holistic view of site data and consider multiple inputs to differentiate true risk from false positives that may be due to factors such as enrollment numbers. Because ML uses models to “learn” from each review and is not limited to the current trial, it can significantly reduce false positives and produce higher quality data.9
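One simple way to see why the two sites above should not be treated alike is to put a confidence interval around each rate. This sketch uses a standard Wilson score interval (not a method the article names) to show how much wider the plausible range is at n=5:

```python
# Why a raw rate misleads at small n: a 95% Wilson score interval for the
# 5-patient site spans far more than the 100-patient site's, even though
# both report an 80% discontinuation rate.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

small_site = wilson_interval(4, 5)     # 80% of 5 patients
large_site = wilson_interval(80, 100)  # 80% of 100 patients

print(f"5-patient site:   {small_site[0]:.2f} - {small_site[1]:.2f}")
print(f"100-patient site: {large_site[0]:.2f} - {large_site[1]:.2f}")
```

The small site's interval reaches well below 50%, so its 80% figure is weak evidence of a true outlier; the large site's interval stays near 80%.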

TRI’s data analytics team used ML tools, including the Boruta feature selection package in R with a Random Forest model, to forecast whether specific protocols would meet various milestones (e.g., Protocol in Development, Protocol Finalized, Study Activation by Site). The team built an analytics dashboard (Milestone Analytics for Protocols) that examined over 350 clinical trials and identified bottlenecks occurring within intervals (Figure 1). For example, the ML models were used to predict the “Study Activation” to “Enrollment Complete” interval and provided insight into potential protocol delays. The Random Forest classifier ranked the most important features in the dataset for each interval. For the enrollment interval example, the “Number of Protocol Version” and “Biodefense” features were found to be most relevant (Figure 2). Because the model relies on random sampling, the team reran it 10 times and used the consistently flagged features for a final run, which predicted the enrollment interval of the protocols with 92.9% accuracy. Accurately predicting which protocols will meet milestones can reduce risk and significantly improve operational effectiveness.
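The core idea behind Boruta-style selection is comparing real features against shuffled “shadow” copies that carry no signal. The team's actual pipeline is in R; the sketch below is a simplified Python analogue on synthetic data, not TRI's implementation:

```python
# Simplified Boruta-style feature selection: a feature is "accepted"
# only if its Random Forest importance beats the best shadow feature.
# Dataset and acceptance rule are illustrative simplifications.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 6 features, of which only the first 3 carry signal (shuffle=False
# keeps the informative columns first).
X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

# Shadow features: independent column-wise shuffles of X (no signal).
shadows = rng.permuted(X, axis=0)
X_aug = np.hstack([X, shadows])

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_aug, y)
importances = rf.feature_importances_
real, shadow = importances[:6], importances[6:]

# Accepted (green) vs rejected (red), mirroring Figure 2's split.
accepted = [i for i, imp in enumerate(real) if imp > shadow.max()]
print("accepted feature indices:", accepted)
```

Real Boruta repeats this comparison over many iterations with a statistical test rather than a single max-importance cutoff, which is why the team's 10 reruns matter.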

Figure 1. TRI’s Milestone Analytics for Protocols Dashboard filtered for First Site Activated to Active-Enrollment Complete interval.

Figure 2. Random Forest classifier ranked the most important features for First Site Activated to Active-Enrollment Complete Interval (green = accepted feature, red = rejected feature).

Optimizing Patient Selection in Clinical Trials

ML predictive models have also been used in population-based studies to optimize recruitment and thereby boost the power of clinical trials. Many Alzheimer’s Disease (AD) trials have failed over the past decade due to recruiting the wrong population, specifically an inability to identify and include fast decliners. To enrich the selection strategy for an AD clinical trial, it is important to identify participants who show cognitive decline in the absence of treatment and those who will be responsive to the investigational agent.10 In a recent study, information was gathered from a cohort of 202 participants with AD at baseline, and ML models were trained to differentiate between participants who had declining vs. stable cognitive function. The researchers found that the ML predictive analytical models were more effective at identifying individuals likely to show cognitive decline (positive predictive value of 88.5% at 23 months) than the observed base rate of cognitive decline (PPV of 63.6%) in the same sample.11
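The study's headline comparison is model PPV versus the base rate. A small sketch of that metric, with synthetic labels (not the study's data):

```python
# Positive predictive value (PPV) of a model vs. the base rate of
# decline -- the comparison reported in the AD study. Labels invented.
from sklearn.metrics import precision_score

# 1 = cognitive decline observed, 0 = stable.
y_true = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1, 0, 0]  # model's predicted decliners

ppv = precision_score(y_true, y_pred)  # TP / (TP + FP)
base_rate = sum(y_true) / len(y_true)  # PPV of simply enrolling everyone

print(f"model PPV: {ppv:.2f} vs base rate: {base_rate:.2f}")
```

Enrolling only model-predicted decliners concentrates true decliners in the trial arm, which is what raises power per enrolled participant.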

A single site retrospective study analyzed the use of ML to identify high-risk surgical patients from complex electronic health record (EHR) data. A total of 194 clinical features were constructed and aggregated from a data repository of surgical outcomes, following which ML models were developed to predict risk of post-operative complications from a dataset of 66,370 patients. The 42 ML models demonstrated “strong predictive performance” and were superior to heuristics that identify high-risk patients through manual datasets and the established National Surgical Quality Improvement Program (NSQIP) calculator.12

Automated De-Identification and Review of Source Documents

With the increasing use of EHR systems, massive amounts of structured and unstructured data must be integrated and reviewed. Even as data becomes more accessible, it remains essential to protect data privacy and the confidentiality of research subjects. The HIPAA “Safe Harbor” method requires 18 data elements, called Protected Health Information (PHI), to be removed from source documents in order for the documents to be considered de-identified. De-identification removes from a dataset any data that could identify a subject. It has conventionally been performed manually, which is often a time-consuming process. The challenge in applying automated de-identification tools to unstructured data is that PHI must be reliably identified before it can be removed; furthermore, non-PHI data should not be erroneously removed.13 With the emergence of AI, automated de-identification systems have shown promising results and high accuracy even in analyzing unstructured narrative texts that lack a standard layout or enumerated sets of fields. Recent innovations in de-identification techniques use NLP with optical character recognition and deep neural networks to automate this manual processing task. These applications find patient-related identifiable information in unstructured text, then anonymize and redact it. NLP extracts information whenever text is transformed, and this data is fed into ML algorithms that improve over time at identifying where text should be redacted.14 Specific NLP techniques commonly applied to de-identification include:

  • Rule-based extraction. Lookup lists are formed to identify PHI, and text is de-identified using rules and pattern matching. Rule-based extraction methods provide a strong baseline, but it is not always feasible to build comprehensive lists and dictionaries or to anticipate spelling variations.

  • Feature-based ML. The system maps features, or data inputs, to target variables, and the ML model assigns labels indicating the presence of PHI. Each word is labeled as PHI or not. Example features include whether a token is capitalized or preceded by a title. As the model maps the data inputs, it learns patterns between the input and the target variable, so that when new data is presented, the model can accurately make a prediction.

  • Neural methods. In these systems, the features are automatically learned by training on a labeled dataset.15
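A toy rule-based pass, in the spirit of the first bullet above, can be written with regex patterns plus a lookup list. The patterns and PHI categories here are illustrative only and nowhere near a complete HIPAA Safe Harbor filter:

```python
# Toy rule-based de-identification: regex patterns for structured PHI
# plus a lookup list for names. Illustrative, not production-grade.
import re

PHI_PATTERNS = {
    "DATE": r"\b\d{1,2}/\d{1,2}/\d{2,4}\b",
    "PHONE": r"\b\d{3}-\d{3}-\d{4}\b",
    "MRN": r"\bMRN[:\s]*\d+\b",
}
NAME_LOOKUP = ["John Smith", "Jane Doe"]  # assumed lookup list

def deidentify(text):
    for label, pattern in PHI_PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text)
    for name in NAME_LOOKUP:
        text = text.replace(name, "[NAME]")
    return text

note = "John Smith (MRN: 48213) seen 03/14/2021; call 301-555-0142."
print(deidentify(note))  # → [NAME] ([MRN]) seen [DATE]; call [PHONE].
```

The brittleness the bullet mentions shows up immediately: "Jon Smyth" or "14 March 2021" would slip through, which is the gap feature-based and neural methods aim to close.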

Recent advances in NLP with neural networks (NN) have enabled automated systems to achieve better results in named entity recognition. A recent study comparing de-identification methods revealed that ML methods (92%) outperformed the rule-based system (81%) by achieving higher precision, and the deep neural method performed strongest, providing a 10% improvement in recall over the traditional feature-based ML method.16 Not only are NLP systems capable of de-identifying source documents, they can also translate large amounts of information from one language to another and perform rapid literature searches for relevant information. These tools help accelerate the process of analyzing data, determining risks, and promoting patient safety.

Role of AI in Signal Detection

A signal refers to information that suggests a new causal association between an intervention and an adverse event. Signal detection has conventionally been a reactive process implemented for post-marketing drug safety surveillance. In pharmacovigilance, signal detection involves utilizing data-mining techniques based on disproportionality analysis (DPA) methods to look for patterns that could potentially change the benefit-risk ratio associated with the use of a drug. However, DPA measures, including the reporting odds ratio (ROR) and information component (IC), are not adjusted for confounding and are subject to severe bias. To overcome these limitations, ML and NLP tools are being applied to large datasets to rapidly search for signals based on key words or phrases.17 A recent study evaluated the predictive capabilities of ML algorithms (gradient boosted machine and random forest) in detecting unknown safety signals compared to traditional disproportionality methods. They reported that the gradient boosted machine learning tool (AUC: 0.97) far outperformed the ROR (AUC: 0.55) and IC (AUC: 0.49) in predicting signals and detected a greater number of unknown signals than the DPA methods.18 As signals are often hidden within vast data, ML tools can be used to optimize the signal detection process by proactively monitoring risk across multiple unstructured data sets.
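The ROR baseline that the ML tools are compared against is a simple 2x2 contingency-table statistic. The report counts below are invented for illustration:

```python
# Reporting odds ratio (ROR) from a 2x2 table of spontaneous reports --
# the classic disproportionality measure named above. Counts invented.
import math

# Reports:       event of interest | all other events
a, b = 40, 160   # with the drug of interest
c, d = 20, 780   # with all other drugs

ror = (a / b) / (c / d)  # equivalent to (a*d) / (b*c)

# Conventional 95% CI on the log scale.
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(ror) - 1.96 * se_log)
ci_high = math.exp(math.log(ror) + 1.96 * se_log)

# A signal is conventionally flagged when the lower 95% bound exceeds 1.
print(f"ROR = {ror:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```

Note what this measure cannot do: it knows nothing about patient covariates, so confounders (e.g., indication or age) can inflate or mask the ratio, which is exactly the limitation the ML approaches target.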

TRI is a full-service CRO that provides key services in support of risk management processes to meet client needs, including protocol milestone analytics and data visualization packages. TRI leverages AI/ML tools to analyze large data sets and proactively assess and prevent risks. The real-time analytics and recommendations generated from predictive intelligence provide critical oversight for the improvement of data and study quality.

About the Author

Dr. Dayu Srinivasan is a Safety and Pharmacovigilance Specialist at TRI with extensive experience in oncology research and neuroimaging. He holds an MD in Radiology and recently earned an MPH degree from George Washington University.


  1. Hawwash, B. (2019). Using AI & Machine Learning to Better Understand Data and Manage Risk. Applied Clinical Trials.

  2. Bhatt A. (2021). Artificial intelligence in managing clinical trial design and conduct: Man and machine still on the learning curve. Perspectives in clinical research, 12(1), 1–3.

  3. TransCelerate BioPharma Inc. (2021). Risk Based Monitoring.

  4. Barnes, B., Stansbury, N., Brown, D., Garson, L., Gerard, G., Piccoli, N., Jendrasek, D., May, N., Castillo, V., Adelfio, A., Ramirez, N., McSweeney, A., Berlien, R., & Butler, P. J. (2021). Risk-Based Monitoring in Clinical Trials: Past, Present, and Future. Therapeutic innovation & regulatory science, 55(4), 899–906.

  5. IBM Cloud Education. (2020, July 15). Machine Learning. IBM Cloud Learn Hub.

  6. Lorhan Corporation. (2021, April 6). How AI Can Enhance the Clinical Data Review and Cleaning Process.

  7. Andrianov A. (2017, August 8). Predictive Analytics in Risk-Based Monitoring - Part II. Cyntegrity.

  8. Di Salvo, C. (2021, May 26). Innovations in AI Risk-Based Monitoring in Clinical Research. Pharma Features.

  9. Patil, R. (2017, January 23). Predictive Analytics and the Future of RBM: How advances in Risk-Based Monitoring will enable a more proactive approach to identify and mitigate potential risks. Contract Pharma.

  10. Mehta, D., Jackson, R., Paul, G., Shi, J., & Sabbagh, M. (2017). Why do trials for Alzheimer's disease drugs keep failing? A discontinued drug perspective for 2010-2015. Expert opinion on investigational drugs, 26(6), 735–739.

  11. Ezzati, A., Lipton, R. B., & Alzheimer’s Disease Neuroimaging Initiative (2020). Machine Learning Predictive Models Can Improve Efficacy of Clinical Trials for Alzheimer's Disease. Journal of Alzheimer's disease: JAD, 74(1), 55–63.

  12. Corey, K. M., Kashyap, S., Lorenzi, E., Lagoo-Deenadayalan, S. A., Heller, K., Whalen, K., Balu, S., Heflin, M. T., McDonald, S. R., Swaminathan, M., & Sendak, M. (2018). Development and validation of machine learning models to identify high-risk surgical patients using automatically curated electronic health record data (Pythia): A retrospective, single-site study. PLoS medicine, 15(11), e1002701.

  13. Meystre, S. M., Friedlin, F. J., South, B. R., Shen, S., & Samore, M. H. (2010). Automatic de-identification of textual documents in the electronic health record: a review of recent research. BMC medical research methodology, 10, 70.

  14. Liu, Z., Chen, Y., Tang, B., Wang, X., Chen, Q., Li, H., Wang, J., Deng, Q., & Zhu, S. (2015). Automatic de-identification of electronic medical records using token-level and character-level conditional random fields. Journal of biomedical informatics, 58 Suppl (Suppl), S47–S52.

  15. Ahmed, T., Aziz, M., & Mohammed, N. (2020). De-identification of electronic health record using neural network. Scientific reports, 10 (1), 18600.

  16. Trienes, J., Trieschnigg, D., Seifert, C., Hiemstra, D., (2020, January). Comparing Rule-based, Feature-based and Deep Neural Methods for De-identification of Dutch Medical Records. ResearchGate.

  17. Greenhalgh, M. (2020, January 6). The Role of AI in Signal Detection. Informa Pharma Intelligence.

  18. Bae, J. H., Baek, Y. H., Lee, J. E., Song, I., Lee, J. H., & Shin, J. Y. (2021). Machine Learning for Detection of Safety Signals From Spontaneous Reporting System Data: Example of Nivolumab and Docetaxel. Frontiers in pharmacology, 11, 602365.

© Technical Resources International, Inc.  •  Phone: 301-564-6400