Annexe 1: Example of building and presenting an explanation of a cancer diagnosis

Bringing together our guidance, the following example shows how a healthcare organisation could use the steps we have outlined to structure the process of building and presenting its explanation to an affected patient.

Task 1: Select priority explanation types by considering the domain, use case and impact on the individual

First, the healthcare organisation familiarises itself with the explanation types in this guidance. Based on the healthcare setting and the impact of the cancer diagnosis on the patient’s life, the healthcare organisation selects the explanation types that it determines are a priority to provide to patients subject to its AI-assisted decisions. It documents its justification for these choices:

Priority explanation types:

Rationale – Justifying the reasoning behind the outcome of the AI system maintains accountability; also useful for patients if visualisation techniques are available that make the AI explanation accessible to non-experts…

Impact – Because this is a high-impact (life or death) situation, it is important for patients to understand the effects of the decision and the next steps…

Responsibility – Non-expert audience likely to want to know who to query the AI system’s output with…

Safety and performance – Given the complexity of the data and the domain, this may help reassure patients about the accuracy, safety and reliability of the AI system’s output…

Other explanation types:

Data – Simple detail on the input data, as well as on the original training/validation dataset and any external validation data…

Fairness – Because this is likely to be biophysical data, as opposed to social or demographic data, fairness issues are more likely to arise in areas such as data representativeness and selection bias, so providing information about bias-mitigation efforts relevant to these may be necessary…

The healthcare organisation formalises these explanation types in the relevant part of its policy on information governance:

Information governance policy

Use of AI

Explaining AI decisions to patients

Types of explanations:

  • Rationale
  • Impact
  • Responsibility
  • Safety and Performance
  • Data
  • Fairness

Task 2: Collect and pre-process your data in an explanation-aware manner

The data the healthcare organisation uses has a bearing on its impact and risk assessment. The healthcare organisation therefore chooses the data carefully and considers the impact of pre-processing, to ensure it is able to provide an adequate explanation to the decision recipient.

Rationale

Information on how the data has been labelled, and how that labelling supports the reasons for classifying certain images as, for example, tumours.

Responsibility

Information on who or which part of the healthcare organisation (or, if the system is procured, who or which part of the third-party vendor organisation) is responsible for collecting and pre-processing the patient’s data. Being transparent about the process from end to end can help the healthcare organisation to build trust and confidence in its use of AI.

Data

Information about the data that has been used, how it was collected and cleaned, and why it was chosen to train the model. Details about the steps taken to ensure the data was accurate, consistent, up to date, balanced and complete.

Safety and performance

Information on the model’s selected performance metrics and on how, given the data available for training the model, the healthcare organisation or third-party vendor chose the accuracy-related measures it did. Also, information about the measures taken to ensure that the preparation and pre-processing of the data supports the system’s robustness and reliability under harsh, uncertain or adversarial run-time conditions.

Task 3: Build your system to ensure you are able to extract relevant information for a range of explanation types

The healthcare organisation, or third-party vendor, decides to use an artificial neural network to sequence and extract information from radiological images. While this model is able to predict the existence and types of tumours, the high-dimensional character of its processing makes it opaque.

The model’s design team has chosen supplementary ‘saliency mapping’ and ‘class activation mapping’ tools to help them visualise the critical regions of the images that are indicative of malignant tumours. These tools make the areas of concern visible by highlighting the abnormal regions. Such mapping-enhanced images then allow technicians and radiologists to gain a clearer understanding of the clinical basis of the AI model’s cancer prediction.

This enables them to ensure that the model’s output supports evidence-based medical practice, and that its results are integrated with the other clinical evidence that informs these technicians’ and radiologists’ professional judgement.
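
To illustrate how a class activation map of this kind might be produced, the following is a minimal Grad-CAM-style sketch in PyTorch. It is not drawn from the guidance: the ResNet backbone, the hooked layer and the random input tensor are stand-in assumptions for whatever tumour-classification model and imaging pipeline a design team actually uses.

```python
# Minimal Grad-CAM-style class activation mapping sketch in PyTorch.
# The ResNet backbone, the hooked layer and the random input tensor are
# illustrative stand-ins, not part of the guidance or any clinical system.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # stand-in for the tumour classifier
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block so we can see which regions drive the score.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)        # placeholder for a pre-processed scan
scores = model(image)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()            # gradient of the predicted class score

# Weight each activation map by its average gradient, then build the heatmap.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]

# `cam` can now be overlaid on the original image to highlight the regions
# that most influenced the model's prediction.
```

In practice the resulting heatmap would be overlaid on the scan and reviewed by the technicians and radiologists before being used in any explanation.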

Task 4: Translate the rationale of your system’s results into usable and easily understandable reasons

The AI system the hospital uses to detect cancer produces a result: a prediction that a particular area on an MRI scan contains a cancerous growth. The prediction is expressed as a probability, with a particular level of confidence, measured as a percentage. The supplementary mapping tools then provide the radiologist with a visual representation of the suspected cancerous region.

The radiologist shares this information with the oncologist and other doctors on the medical team along with other detailed information about the performance measures of the system and its certainty levels.

For the patient, the oncologist or other members of the medical team then put this into language, or another format, that the patient can understand. One way the doctors choose to do this is by showing the patient the scan with the supplementary visualisations, to help explain the model’s result. Highlighting the areas that the AI system has flagged is an intuitive way to help the patient understand what is happening. The doctors also indicate how much confidence they have in the AI system’s result, based on its performance and uncertainty metrics as well as their weighing of other clinical evidence against these measures.
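
As a purely illustrative sketch of this translation step, the plain-language summary might be generated along the following lines. The thresholds, performance figures and wording below are assumptions rather than anything prescribed by the guidance; in practice the clinical team decides how results are phrased.

```python
# Purely illustrative sketch of turning a model output into patient-friendly
# wording. The thresholds, performance figures and phrasing are assumptions;
# in practice the clinical team decides how results are communicated.

def describe_prediction(probability: float, sensitivity: float, specificity: float) -> str:
    """Translate a tumour probability and the model's test performance into plain language."""
    if probability >= 0.9:
        finding = "strongly suggests an area that needs further investigation"
    elif probability >= 0.5:
        finding = "suggests a possible area of concern"
    else:
        finding = "did not identify a clear area of concern"
    return (
        f"The computer system {finding} "
        f"(it scored this region at {probability:.0%}). "
        f"In testing, it correctly flagged {sensitivity:.0%} of known cancers "
        f"and correctly cleared {specificity:.0%} of scans without cancer. "
        "Your doctor has reviewed this result alongside your other test results."
    )

print(describe_prediction(probability=0.93, sensitivity=0.91, specificity=0.88))
```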

Task 5: Prepare implementers to deploy your AI system

Because the technician and the oncologist both use the AI system in their work, the hospital decides they need training in how to use it.

Implementer training covers:

  • how they should interpret the results that the AI system generates, based on understanding how it has been designed and the data it has been trained on;
  • how they should understand and weigh the performance and certainty limitations of the system (ie how they view and interpret confusion matrices, confidence intervals, error bars, etc – a brief sketch follows this list);
  • that they should use the result as one part of their decision-making, as a complement to their existing domain knowledge;
  • that they should critically examine whether the AI system’s result is based on appropriate logic and rationale; and
  • that in each case they should prepare a plan for communicating the AI system’s result to the patient, and the role that result has played in the doctor’s judgement.

The training also covers any limitations in using the system.
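
For illustration, the sketch below shows the kind of performance summary an implementer might be trained to read: a confusion matrix, with sensitivity and specificity and simple confidence intervals. It assumes scikit-learn and uses made-up validation labels; it is not part of the guidance.

```python
# Small sketch, assuming scikit-learn, of the performance summary an implementer
# might review: a confusion matrix plus sensitivity and specificity with simple
# confidence intervals. The validation labels below are made up for illustration.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])   # 1 = tumour present
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0])   # model's predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of real tumours that were flagged
specificity = tn / (tn + fp)   # proportion of clear scans that were cleared

def wald_interval(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    margin = z * np.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

sens_lo, sens_hi = wald_interval(sensitivity, tp + fn)
spec_lo, spec_hi = wald_interval(specificity, tn + fp)
print(f"Sensitivity: {sensitivity:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"Specificity: {specificity:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")
```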

Task 6: Consider how to build and present your explanation

Each of the explanation types the healthcare organisation has chosen has a process-based and an outcome-based explanation. The quality of each explanation is also influenced by how the organisation collects and prepares the training and test data for the AI model it chooses. The organisation therefore collects the following information for each explanation type:

Rationale

Process-based explanation: information to show that the AI system has been set up in a way that enables explanations of its underlying logic to be extracted (directly or using supplementary tools); and that these explanations are meaningful for the patients concerned.

Outcome-based explanation: information on the logic behind the model’s results and on how implementers have incorporated that logic into their decision-making. This includes how the system transforms input data into outputs, how this is translated into language that is understandable to patients, and how the medical team uses the model’s results in reaching a diagnosis for a particular case.

Responsibility

Process-based explanation: information on those responsible within the healthcare organisation, or third-party provider, for managing the design and use of the AI model, and on how they ensured the model was responsibly managed throughout its design and use.

Outcome-based explanation: information on those responsible for using the AI system’s output as evidence to support the diagnosis, for reviewing it, and for providing explanations for how the diagnosis came about (ie who the patient can go to in order to query the diagnosis).

Safety and performance

Process-based explanation: information on the measures taken to ensure the overall safety and technical performance (security, accuracy, reliability, and robustness) of the AI model—including information about the testing, verification, and validation done to certify these.

Outcome-based explanation: Information on the safety and technical performance (security, accuracy, reliability, and robustness) of the AI model in its actual operation, eg information confirming that the model operated securely and according to its intended design in the specific patient’s case. This could include the safety and performance measures used.

Impact

Process-based explanation: information on the measures taken across the AI model’s design and use to ensure that it does not negatively impact the wellbeing of the patient.

Outcome-based explanation: information on the actual impacts of the AI system on the patient.

The healthcare organisation considers what contextual factors are likely to have an effect on what patients want to know about the AI-assisted decisions it plans to make on a cancer diagnosis. It draws up a list of the relevant factors:

Contextual factors

Domain – regulated, safety testing…
Data – biophysical…
Urgency – if cancer, urgent…
Impact – high, safety-critical…
Audience – mostly non-expert…

The healthcare organisation develops a template for delivering its explanation of AI decisions about cancer diagnosis in a layered way:

Layer 1

  • Rationale explanation
  • Impact explanation
  • Responsibility explanation
  • Safety and Performance explanation

Delivery – eg the clinician provides the explanation face to face with the patient, supplemented by hard copy/email information.

Layer 2

  • Data explanation
  • Fairness explanation

Delivery – eg the clinician gives the patient this additional information in hard copy/email or via an app.