
What are the contextual factors?


At a glance

Five contextual factors affect the purposes an individual wants to use an explanation for, and how you should deliver your explanation:

  • domain you work in;
  • impact on the individual;
  • data used;
  • urgency of the decision; and
  • audience it is being presented to.

In more detail

Introduction

When constructing an explanation for an individual, several factors about the context in which an AI-assisted decision is made come into play. These affect the type of explanation people will find useful and the purposes they wish to use it for.

From the primary research we carried out, particularly with members of the public, we identified five key contextual factors affecting why people want explanations of AI-assisted decisions. These contextual factors are set out below, along with suggestions for which explanations to prioritise when delivering an explanation of an AI-assisted decision. You should consider these factors at all stages of the process outlined in Part 2 of this guidance.

When considering these contextual factors, keep in mind that providing explanations to decision recipients will also educate them about AI systems. It may therefore be worth thinking about the information you can provide in advance of processing in order to help develop knowledge and understanding of AI use among the general public.

Domain factor

What is this factor?

By ‘domain’, we mean the setting or the sector you deploy your AI model in to help you make decisions about people. This can affect the explanations people want. For instance, what people want to know about AI-assisted decisions made in the criminal justice domain can differ significantly from other domains such as healthcare.

Likewise, domain or sector specific explanation standards can affect what people expect from an explanation. For example, a person receiving an AI-assisted mortgage decision will expect to learn about the reasoning behind the determination in a way that matches established lending standards and practices.

Which explanations should we prioritise?

The domain factor is perhaps the most crucial determinant of which explanations you should include and prioritise when communicating with affected individuals. If your AI system is operating in a safety-critical setting, decision recipients will understandably want appropriate safety and performance explanations. Similarly, if your system is operating in a domain where bias and discrimination concerns are prevalent, they are likely to want you to provide a fairness explanation.

In lower-stakes domains such as e-commerce, it is unlikely that people will want or expect extensive explanations of the safety and performance of the outputs of recommender systems. Even so, in these lower impact domains, you should explain the basic rationale and responsibility components (as well as all other relevant explanation types) of any decision system that affects people.

For example, ‘low’ impact applications such as product recommendations and personalisation (eg of advertising or content) may give rise to sensitivities around targeting particular demographics, or ignoring others (eg advertising leadership roles targeted at men). These raise obvious issues of fairness and impact on society, increasing the importance of explanations addressing these issues.

Impact factor

What is this factor?

The ‘impact’ factor is about the effect an AI-assisted decision can have on an individual and wider society. Varying levels of severity and different types of impact can change what explanations people will find useful, and the purpose the explanation serves.

Are the decisions safety-critical, relating to life or death situations (most often in the healthcare domain)? Do the decisions affect someone’s liberty or legal status? Is the impact of the decision less severe but still significant (eg denial of a utility or targeting of a political message)? Or is the impact more trivial (eg being directed to a specific ticket counter by an AI system that sorts queues in an airport)?

Which explanations should we prioritise?

In general, where an AI-assisted decision has a high impact on an individual, explanations such as fairness, safety and performance, and impact are often important, because individuals want to be reassured about the safety of the decision, to trust that they are being treated fairly, and to understand the consequences.

However, the rationale and responsibility explanations can be equally important, depending on the other contextual factors; for example, where the features of the data used by the AI model are changeable, or where the inferences drawn are open to interpretation and can be challenged.

Considering impact as a contextual factor is not straightforward, and there is no hard and fast rule. You should assess it on a case-by-case basis, and in combination with all the other contextual factors. It should also involve inclusive dialogue across the fields of expertise involved in the design, development, and deployment of the AI system. Bringing together team members with technical, policy, compliance, and domain expertise can provide a more informed view of the impact of an AI model.

Data factor

What is this factor?

‘Data’ as a contextual factor relates both to the data used to train and test your AI model and to the input data at the point of the decision. The type of data used by your AI model can influence an individual’s willingness to accept or contest an AI-assisted decision, and the actions they take as a result.

This factor suggests that you should think about the nature of the data your model is trained on, and of the input data it uses once deployed. You should consider whether the data is biological or physical (eg biomedical data used for research and diagnostics), or whether it is social data about demographic characteristics or measurements of human behaviour.

You should also consider whether an individual can change the outcome of a decision. If the factors that go into your decision can be influenced by changes to someone’s behaviour or lifestyle, individuals are more likely to want to make those changes if they don’t agree with the outcome.

For example, if a bank loan decision was made based on a customer’s financial activity, the customer may want to alter their spending behaviour to change that decision in the future. This will affect the type of explanation an individual wants. However, if the data is less flexible, such as biophysical data, it is less likely that an individual will disagree with the output of the AI system. For example, in healthcare, a suggested diagnosis produced by an AI system from genetic data about a patient is more ‘fixed’: this is not something the patient can easily change.

Which explanations should we prioritise?

It will often be useful to prioritise the rationale explanation, for both social data and biophysical data. Where social data is used, individuals receiving an unfavourable decision can understand the reasoning and learn from this to appropriately adapt their behaviour for future decisions. For biophysical data, this can help people understand why a decision was made about them.

However, where biophysical data is used, such as in medical diagnoses, individuals may prefer to simply know what the decision outcome means for them, and to be reassured about the safety and reliability of the decision. In these cases it makes sense to prioritise the impact and safety and performance explanations to meet these needs.

On the other hand, where the nature of the data is social, or subjective, individuals are more likely to have concerns about what data was taken into account for the decision, and the suitability or fairness of this in influencing an AI-assisted decision about them. In these circumstances, the data and fairness explanations will help address these concerns by telling people what the input data was, where it was from, and what measures you put in place to ensure that using this data to make AI-assisted decisions does not result in bias or discrimination.

Urgency factor

What is this factor?

The ‘urgency’ factor concerns the importance of receiving, or acting upon, the outcome of an AI-assisted decision within a short timeframe. What people want to know about a decision can change depending on how much or how little time they have to reflect on it.

The urgency factor recommends that you give thought to how urgent the AI-assisted decision is. Think about whether a particular course of action is often necessary after the kinds of decisions you make, and how quickly you need to take that action.

Which explanations should we prioritise?

Where urgency is a key factor, it is more likely that individuals will want to know what the consequences are for them, and to be reassured that the AI model helping to make the decision is safe and reliable. Therefore, the impact and safety and performance explanations are suitable in these cases. This is because these explanations will help individuals to understand how the decision affects them, what happens next, and what measures and testing were implemented to maximise and monitor the safety and performance of the AI model.

Audience factor

What is this factor?

‘Audience’ as a contextual factor is about the individuals you are explaining an AI-assisted decision to. The groups of people you make decisions about, and the individuals within those groups, affect what types of explanation are meaningful or useful to them.

What level of expertise do they have about the subject matter of the decision, or about AI? Are a broad range of people subject to the decisions you make (eg the UK general public), indicating that there may also be a broad range of knowledge or expertise? Or are the people you make decisions about limited to a smaller subset (eg your employees), suggesting they may be more informed about the things you are making decisions about? Also consider whether the decision recipients require any reasonable adjustments in how they receive the explanation (Equality Act 2010).

As a general rule, it is a good idea to accommodate the explanation needs of the most vulnerable individuals. You should ensure that these decision recipients are able to clearly understand the information that you are giving them. Using plain, non-technical language and visualisation tools, where possible, may often help.

Note as well that, while we are focusing on the decision recipient, you are also likely to have to put significant forethought into how you will provide other audiences with appropriate information about the outputs of your AI model. For instance, where the models are supporting decision-making, you will have to provide the end-users or implementers of these models with a depth and level of explanation that is appropriate to assist them in carrying out evidence-based reasoning in a way that is context-sensitive and aware of the model’s limitations. Likewise, where models and their results are being reviewed by auditors, you will have to provide information about these systems at a level and depth that is fit for the purpose of the relevant review.

Which explanations should we prioritise?

If the people you are making AI-assisted decisions about are likely to have some domain expertise, you might consider using the rationale explanation. This is because you can be more confident that they can understand the reasoning and logic of an AI model, or a particular decision, as they are more familiar with the topic of the decisions. Additionally, if people subject to your AI-assisted decisions have some technical expertise, or are likely to be interested in the technical detail underpinning the decision, the safety and performance explanation will help.

Alternatively, where you think it is likely that people will not have any specific expertise or knowledge about either the topic of the decision or its technical aspects, other explanation types such as responsibility, or particular aspects of the safety and performance explanation, may be more helpful. This is so that people can be reassured about the safety of the system, and know who to contact to ask about an AI-assisted decision.

Of course, even for those with little knowledge of an area, the rationale explanation can still be useful to explain the reasons why a decision was made in plain and simple terms. But there may also be occasions where the data used and inferences drawn by an AI model are particularly complex (see the ‘data’ factor above), and individuals would rather delegate the rationale explanation to a relevant domain expert. The expert can then review and come to their own informed conclusions about the validity or suitability of the reasons for the decision (eg a doctor in a healthcare setting).
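Taken together, the five contextual factors sketch a rough mapping from the context of a decision to the explanation types worth prioritising. As a purely illustrative aid, not part of the guidance itself, the short Python sketch below encodes one possible version of that mapping. The names (DecisionContext, prioritise_explanations) and the specific rules are hypothetical assumptions based on the suggestions in this section, and would need to be adapted and agreed within your own organisation and domain.

```python
# Illustrative sketch only: a toy mapping from the five contextual factors to the
# explanation types this section suggests prioritising. All names and rules here
# are hypothetical assumptions, not a definitive or authoritative implementation.

from dataclasses import dataclass


@dataclass
class DecisionContext:
    safety_critical_domain: bool       # eg healthcare, criminal justice
    bias_concerns_in_domain: bool      # eg recruitment, lending
    high_impact: bool                  # severe effect on the individual or society
    data_is_social: bool               # social/behavioural data vs biophysical data
    data_is_changeable: bool           # can the individual influence future outcomes?
    urgent: bool                       # must the recipient act within a short timeframe?
    audience_has_domain_expertise: bool


def prioritise_explanations(ctx: DecisionContext) -> list[str]:
    """Return a rough, ordered list of explanation types to prioritise."""
    priorities: list[str] = []

    def add(*types: str) -> None:
        # Preserve order of first mention and avoid duplicates.
        for t in types:
            if t not in priorities:
                priorities.append(t)

    if ctx.safety_critical_domain or ctx.urgent:
        add("safety and performance", "impact")
    if ctx.bias_concerns_in_domain or ctx.data_is_social:
        add("fairness", "data")
    if ctx.high_impact:
        add("fairness", "safety and performance", "impact")
    if ctx.data_is_changeable or ctx.audience_has_domain_expertise:
        add("rationale")

    # The section suggests rationale and responsibility remain relevant even in
    # lower-stakes domains such as e-commerce.
    add("rationale", "responsibility")
    return priorities


if __name__ == "__main__":
    # Hypothetical example: an AI-assisted bank loan decision based on social,
    # changeable data, with bias concerns and a high impact on the individual.
    loan_decision = DecisionContext(
        safety_critical_domain=False,
        bias_concerns_in_domain=True,
        high_impact=True,
        data_is_social=True,
        data_is_changeable=True,
        urgent=False,
        audience_has_domain_expertise=False,
    )
    print(prioritise_explanations(loan_decision))
    # ['fairness', 'data', 'safety and performance', 'impact', 'rationale', 'responsibility']
```

In practice, such a mapping could not replace the case-by-case judgement this section calls for; the sketch is only meant to show how the five factors might jointly shape which explanations you present first.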