All in on AI, Transparent & Explainable

Published on: November 26, 2024


AI is an incredibly powerful tool for complex data analysis and decision-making. To fully leverage its capabilities, transparency and explainability are required, so that AI outputs come with understandable and trustworthy explanations.

Understanding Transparency & Explainability of AI

Transparency and explainability refer to the idea that all stakeholders affected by the outcome of an AI system should fully understand the inner workings of that system, from how it is developed, trained, and deployed to the factors that affect how it arrives at a decision.

Transparency and explainability play a vital role in fostering trust between AI systems and users. AI systems often employ "black box" algorithms that are complex and opaque. Transparency and explainability measures help users understand why an AI system generated a specific result and reject its output if necessary, which ultimately helps them make informed decisions.

Shining a Light Into the “Black Box” of AI

In terms of explainability, machine learning models can be divided into two general classes: black-box and white-box.

Black-Box Models

Black-box models are highly complex models, such as deep neural networks and gradient boosting models. While these models can accurately analyze a high volume of complex data, their inner workings can be difficult to understand, even for domain experts and developers.

White-Box Models

In contrast, white-box models, such as linear regressions and decision trees, are inherently easier to understand. Typically, these models clearly show the relationships between influencing variables and output predictions. However, they generally provide lower predictive performance and may not always be capable of modelling large, complex datasets.
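To see why white-box models are considered self-explanatory, consider a minimal sketch: an ordinary least squares fit with a single feature, where the learned slope and intercept can be read directly from the model. The data here are invented purely for illustration.

```python
def fit_simple_linear(xs, ys):
    """Ordinary least squares for y ~ slope * x + intercept (one feature)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data generated from y = 2x + 1, so the fitted parameters
# are readable at a glance: the model IS its explanation.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_simple_linear(xs, ys)
print(f"slope={slope:.2f}, intercept={intercept:.2f}")  # slope=2.00, intercept=1.00
```

A black-box model fit to the same data might predict equally well, but would offer no comparably direct readout of how each input influences the output.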

Importance of Transparent and Explainable AI

One of the primary challenges faced by AI developers is the trade-off between model accuracy and explainability. On one hand, the predictive accuracy of AI models should be a priority, so that they can identify complex, nonlinear relationships between variables and provide valuable insights that drive informed decision-making.

However, the more sophisticated an AI system is, the harder it becomes to explain how it operates, which can negatively affect the integrity of its outputs.

For example, AI models are vulnerable to biases stemming from unrepresentative data, leading to outputs that can perpetuate inequitable outcomes. Furthermore, AI models can experience “model drift,” a phenomenon in which model performance degrades over time because real-world data differs from the data the model was trained on. A lack of explainability can hinder human operators from monitoring model outputs, lead to poorly informed decision-making, and undermine trust in AI systems.
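One common way to detect the kind of drift described above is to compare a feature's distribution at training time against its distribution in live data. The sketch below uses the Population Stability Index (PSI), a standard drift statistic; the feature values and the rule-of-thumb threshold of 0.2 are illustrative, not drawn from any Sanofi system.

```python
import math

def population_stability_index(baseline, current, n_bins=4):
    """Compare a feature's distribution at training time vs. in production.
    A PSI above ~0.2 is a common rule of thumb for significant drift."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]

    def proportions(values):
        counts = [0] * n_bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        # Small epsilon avoids division by zero for empty bins.
        return [(c + 1e-6) / (len(values) + n_bins * 1e-6) for c in counts]

    p = proportions(baseline)
    q = proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

train_ages = [30, 35, 40, 45, 50, 55, 60, 65]
live_ages = [55, 60, 62, 64, 65, 66, 68, 70]  # population has shifted older
print(population_stability_index(train_ages, live_ages))
```

When the live population shifts away from the training population, the PSI grows large, flagging that the model's outputs may no longer be reliable and that human review or retraining is warranted.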

Explainability lets developers tackle these issues, shining a light into the “black box” of AI. Depending on the use case, many strategies to improve the explainability of AI models have been explored, including the use of comprehensible text, approximations, or visualizations. At Sanofi, all AI systems comply with our documentation standards during the development phase.

Transparency & Explainability Use Case

Adverse events are a major concern in many clinical trials. They are a common contributor to clinical trial failures, and their causes are often complicated and difficult to untangle.

At Sanofi, we are using AI to predict which trial participants are at high or low risk of an adverse event. Our developers are prioritizing transparency and explainability by using glass-box models, which are models with interpretability built in. They pair the model with a model card that captures information such as how the model was trained, its characteristics, its performance, and its inferred outputs.
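As a rough sketch of the kind of information a model card can capture, the structure below groups training provenance, model characteristics, performance metrics, and feature importances into one record. The field names and values are purely illustrative, not Sanofi's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Illustrative model card structure; fields are hypothetical."""
    model_name: str
    training_data: str          # provenance of the training dataset
    characteristics: dict       # model type, version, intended use
    performance: dict           # evaluation metrics on the test cohort
    feature_importances: dict = field(default_factory=dict)

card = ModelCard(
    model_name="adverse-event-risk (illustrative)",
    training_data="de-identified trial cohort; train/test split documented",
    characteristics={"model_type": "glass-box additive model"},
    performance={"accuracy": 0.88},  # example value only
    feature_importances={"age": 0.42, "lab_result": 0.31},
)
print(card.model_name)
```

Keeping these facts in one structured artifact is what lets reviewers and operators later check how the model was built and how it is behaving.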

Usage of Explainable Models

The analysis of clinical trial data is a challenging use case. When selecting a machine learning model, developers must balance the need to analyze a vast amount of complex data with the need for explainable model outputs, as these outputs can directly impact patients’ lives. Our developers used Explainable Boosting Machines (EBMs), a glass-box model designed to achieve accuracy comparable to that of state-of-the-art black-box models while remaining completely interpretable. This algorithm is explainable because it reports feature importance, a set of numerical values that show how much each feature (e.g., age, lab results, medical history) contributed to the model’s decision on whether individuals were at high or low risk of adverse events. Our developers reported these values in model cards for the entire testing cohort and provided a way to interrogate the model’s prediction for each individual trial participant within the cohorts.
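The reason an additive glass-box model such as an EBM is interpretable is that it scores each feature independently and sums the contributions, so every individual prediction decomposes feature by feature. The toy sketch below illustrates that decomposition; the shape functions, feature names, and thresholds are invented for illustration and are not the actual clinical model.

```python
# Hand-set per-feature shape functions (in a real EBM these are learned).
def shape_age(age):
    return 0.03 * (age - 50)

def shape_lab_result(value):
    return 0.5 if value > 2.0 else -0.2

def explain_prediction(participant):
    """Return a risk label plus the exact contribution of each feature."""
    contributions = {
        "age": shape_age(participant["age"]),
        "lab_result": shape_lab_result(participant["lab_result"]),
    }
    score = sum(contributions.values())
    label = "high risk" if score > 0 else "low risk"
    return label, contributions

label, contributions = explain_prediction({"age": 70, "lab_result": 2.5})
print(label, contributions)  # every feature's contribution is directly reported
```

Because the prediction is just the sum of the reported contributions, the same mechanism supports both cohort-level feature importance and per-participant interrogation of a single prediction.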

Reporting Dataset Characteristics

Our model cards contain information about the study populations used to create our dataset, as well as detailed descriptions of the portions of the dataset used to train and test our machine learning model. This allows developers to identify if the dataset is imbalanced, or if there are any major differences in the population between the training and testing datasets, both of which could reduce model performance or introduce bias into our results. 
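A minimal sketch of such a dataset-characteristics check might summarize class balance and a feature's mean for the training and testing splits, then flag a large gap between them. The numbers and the threshold are illustrative assumptions, not real study data.

```python
def dataset_summary(labels, feature_values):
    """Summarize class balance and a feature's mean, as a model card might."""
    return {
        "positive_rate": sum(labels) / len(labels),
        "feature_mean": sum(feature_values) / len(feature_values),
    }

# Toy splits: labels are adverse-event outcomes, feature is (say) age.
train = dataset_summary([1, 0, 0, 0], [52.0, 48.0, 50.0, 46.0])
test = dataset_summary([1, 1, 0, 0], [61.0, 63.0, 60.0, 64.0])

# Flag a large train/test shift in the feature's mean (threshold illustrative).
shifted = abs(train["feature_mean"] - test["feature_mean"]) > 5.0
print(train, test, "shifted:", shifted)
```

Recording summaries like these in the model card makes imbalance or train/test population mismatch visible before it silently degrades performance or biases results.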

Model Performance Monitoring

The developers also regularly capture numerous metrics related to the model’s predictive accuracy, and continually update the model card over time. This allows them to determine if the model is experiencing model drift. They can use this information to determine when it is appropriate to retrain the model or update the datasets.
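A simple sketch of this monitoring loop: keep a dated history of an accuracy metric in the model card and flag the model for retraining when it falls more than a tolerance below its baseline. The dates, values, and tolerance are illustrative.

```python
def needs_retraining(metric_history, tolerance=0.05):
    """metric_history: list of (date, accuracy) entries, oldest first.
    Returns True when accuracy has degraded beyond the tolerance."""
    baseline = metric_history[0][1]
    latest = metric_history[-1][1]
    return (baseline - latest) > tolerance

history = [
    ("2024-01", 0.91),
    ("2024-04", 0.90),
    ("2024-07", 0.84),  # performance has degraded
]
print(needs_retraining(history))  # True
```

Production monitoring would track several metrics and subgroups, but the principle is the same: the model card's running history is what makes drift detectable and the retraining decision auditable.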

At Sanofi, we understand that transparency and explainability are vital in ensuring that our AI systems are trustworthy, accountable, and aligned with emerging AI regulations. Ensuring that the outputs and decisions of our AI systems are understandable allows us to navigate the future of AI responsibly.
