At a glance

To ensure that the decisions you make using AI are explainable, you should follow four principles:

  • be transparent;
  • be accountable;
  • consider the context you are operating in; and
  • reflect on the impact of your AI system on the individuals affected, as well as wider society.

In more detail

Why are principles important?

AI-assisted decisions are not unique to one sector, or to one type of organisation. They are increasingly used in all areas of life. This guidance recognises this, so you can use it no matter what your organisation does. The principles-based approach of this guidance gives you a broad steer on what to think about when explaining AI-assisted decisions to individuals. Please note that these principles relate to providing explanations of AI-assisted decisions to individuals, and complement the data protection principles in the GDPR.

The first two principles – be transparent and be accountable – share their names with GDPR principles, as they are an extension of the requirements under the GDPR. This means that you are required to comply with your obligations under the GDPR, but we provide further guidance that should enable you to follow ‘best practice’ when explaining AI decisions.

What are the principles?

Each principle has two key aspects: what the principle is about, and what it means in practice. We also signpost the parts of the guidance that support you in acting in accordance with the different aspects of each principle.

Be transparent

What is this principle about?

The principle of being transparent is an extension of the transparency aspect of principle (a) in the GDPR (lawfulness, fairness and transparency).

In data protection terms, transparency means being open and honest about who you are, and how and why you use personal data.

Being transparent about AI-assisted decisions builds on these requirements. It is about making your use of AI for decision-making obvious and appropriately explaining the decisions you make to individuals in a meaningful way.

What are the key aspects of being transparent?

Raise awareness:

  • Be open and candid about:
    • your use of AI-enabled decisions;
    • when you use them; and
    • why you choose to do this.
  • Proactively make people aware of a specific AI-enabled decision concerning them, in advance of making the decision.

Meaningfully explain decisions:

Don’t just give people any explanation of AI-enabled decisions. Give them:

  • a truthful and meaningful explanation;
  • written or presented appropriately; and
  • delivered at the right time.

(This is closely linked with the context principle.)

How can this guidance help us be transparent?

To help with raising awareness about your use of AI-enabled decisions read:

  • the Policies and procedures section of ‘What explaining AI means for your organisation’; and
  • the Proactive engagement section in Task 6 of ‘Explaining AI in practice’.

To support you with meaningfully explaining AI decisions read:

  • the Policies and procedures section of ‘What explaining AI means for your organisation’;
  • Building your system to aid in a range of explanation types in Task 3 of ‘Explaining AI in practice’;
  • Selecting your priority explanations in Task 1 of ‘Explaining AI in practice’; and
  • Explanation timing in Task 6 of ‘Explaining AI in practice’.

Be accountable

What is this principle about?

The principle of being accountable is derived from the accountability principle in the GDPR.

In data protection terms, accountability means taking responsibility for complying with the other data protection principles, and being able to demonstrate that compliance. It also means implementing appropriate technical and organisational measures, and data protection by design and default.

Being accountable for explaining AI-assisted decisions concentrates these dual requirements on the processes and actions you carry out when designing (or procuring/outsourcing) and deploying AI models.

It is about ensuring appropriate oversight of your AI decision systems, and being answerable to others in your organisation, to external bodies such as regulators, and to the individuals you make AI-assisted decisions about.

What are the key aspects of being accountable?

Assign responsibility:

  • Identify those within your organisation who manage and oversee the ‘explainability’ requirements of an AI decision system, and assign ultimate responsibility for this.
  • Ensure you have a designated and capable human point of contact for individuals to query or contest a decision.

Justify and evidence:

  • Actively consider and make justified choices about how to design and deploy AI models that are appropriately explainable to individuals.
  • Take steps to prove that you made these considerations, and that they are present in the design and deployment of the models themselves.
  • Show that you provided explanations to individuals.

How can this guidance help us be accountable?

To help with assigning responsibility for explaining AI decisions read:

  • the Organisational roles and Policies and procedures sections of ‘What explaining AI means for your organisation’.

To support you with justifying the choices you make about your approach to explaining AI decisions read:

  • all the tasks in ‘Explaining AI in practice’.

To help you evidence this read:

  • the Policies and procedures and Documentation sections of ‘What explaining AI means for your organisation’.

Consider context

What is this principle about?

There is no one-size-fits-all approach to explaining AI-assisted decisions. The principle of considering context underlines this.

It is about paying attention to several different, but interrelated, elements that can have an effect on explaining AI-assisted decisions, and managing the overall process.

This is not a one-off consideration. It’s something you should think about at all stages of the process, from concept to deployment and presentation of the explanation to the decision recipient.

There are therefore several types of context that we address in this guidance. These are outlined in more detail in the ‘contextual factors’ section above.

What are the key aspects of considering context?

Choose appropriate models and explanations:

When planning on using AI to help make decisions about people, you should consider:

  • the setting in which you will do this;
  • the potential impact of the decisions you make;
  • what an individual should know about a decision, so you can choose an appropriately explainable AI model; and
  • prioritising delivery of the relevant explanation types.

Tailor governance and explanation:

Your governance of the ‘explainability’ of AI models should be:

  • robust and reflective of best practice; and
  • tailored to your organisation and the particular circumstances and needs of each decision recipient.

How can this guidance help us consider context?

To support your choice of appropriate models and explanations for the AI decisions you make read:

  • ‘Explaining AI in practice’; and
  • the Contextual factors section of this document.

To help you tailor your governance of the explainability of AI decision systems you use read:

  • the Organisational roles and Policies and procedures sections of ‘What explaining AI means for your organisation’.

Reflect on impacts

What is this principle about?

In making decisions and performing tasks that have previously required the thinking and reasoning of responsible humans, AI systems are increasingly serving as trustees of human decision-making. However, individuals cannot hold these systems directly accountable for the consequences of their outcomes and behaviours.

Reflecting on the impacts of your AI system helps you explain to individuals affected by its decisions that the use of AI will not harm or impair their wellbeing.

This means asking and answering questions about the ethical purposes and objectives of your AI project at the initial stages.

You should then revisit and reflect on the impacts identified in the initial stages of the AI project throughout the development and implementation stages. If any new impacts are identified, you should document them, alongside any mitigating factors you implement where relevant. This will help you explain to decision recipients what impacts you have identified and how you have reduced any potentially harmful effects as far as possible.

What are the key aspects of reflecting on impacts?

Individual wellbeing:

Think about how to build and implement your AI system in a way that:

  • fosters the physical, emotional and mental integrity of affected individuals;
  • ensures their abilities to make free and informed decisions about their own lives;
  • safeguards their autonomy and their power to express themselves;
  • supports their abilities to flourish, to fully develop themselves, and to pursue their interests according to their own freely determined life plans;
  • preserves their ability to maintain a private life independent from the transformative effects of technology; and
  • secures their capacities to make well-considered, positive and independent contributions to their social groups and to the shared life of the community, more generally.

Wellbeing of wider society:

Think about how to build and implement your AI system in a way that:

  • safeguards meaningful human connection and social cohesion;
  • prioritises diversity, participation and inclusion;
  • encourages all voices to be heard and all opinions to be weighed seriously and sincerely;
  • treats all individuals equally and protects social equity;
  • uses AI technologies as an essential support for the protection of fair and equal treatment under the law;
  • utilises innovation to empower and to advance the interests and well-being of as many individuals as possible; and
  • anticipates the wider impacts of the AI technologies you are developing by thinking about their ramifications for others around the globe, for the biosphere as a whole and for future generations.

How can this guidance help us reflect on impacts?

For help with reflecting on impacts read:

  • the different types of explanation above; and
  • ‘Explaining AI in practice’.

How do the principles relate to the explanation types?

The principles are important because they underpin how you should explain AI-assisted decisions to individuals. Here we set out how you can put them into practice through the explanation types you use:

Be transparent

Rationale

  • What is the technical logic or reasoning behind the model’s output?
  • Which input features, parameters and correlations played a significant role in the calculation of the model’s result, and how? (A minimal sketch of surfacing these follows this list.)
  • How can you explain the technical rationale underlying the model’s output in easily understandable terms that can be rationally evaluated by affected individuals or their representatives?
  • How can you apply the statistical results to the specific circumstances of the individual receiving the decision?
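
For illustration, where the underlying model is inherently interpretable, the features that played a significant role in one individual’s result can be read off directly. The sketch below assumes a hypothetical credit-scoring model built with scikit-learn; the feature names, figures and model choice are illustrative assumptions, not part of this guidance.

    # A minimal sketch, assuming a hypothetical credit-scoring model.
    # With an inherently interpretable model such as logistic regression,
    # per-feature contributions to one individual's result are direct.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income_10k", "years_at_address", "missed_payments"]
    X_train = np.array([[3.0, 2, 1], [5.5, 8, 0], [2.2, 1, 3], [4.8, 5, 0]])
    y_train = np.array([0, 1, 0, 1])  # 1 = application approved

    model = LogisticRegression().fit(X_train, y_train)

    applicant = np.array([2.7, 1, 2])  # the individual receiving the decision
    # Contribution of each input feature to the log-odds of this decision.
    contributions = model.coef_[0] * applicant
    for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
        print(f"{name}: {c:+.3f}")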

Data

  • What data did you use to train the model?
  • Where did the data come from?
  • How did you ensure the quality of the data you used? (See the illustrative record after this list.)
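
One way to keep the answers to these questions ready to hand is to record them alongside the model. The sketch below is a hypothetical record; the fields and values are illustrative assumptions, not a schema this guidance prescribes.

    # An illustrative training-data record; field names and values are
    # hypothetical, not a format prescribed by this guidance.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TrainingDataRecord:
        source: str                # where the data came from
        collection_period: str     # when it was collected
        quality_checks: List[str] = field(default_factory=list)

    record = TrainingDataRecord(
        source="internal loan-application database",
        collection_period="2018-01 to 2020-12",
        quality_checks=[
            "duplicate applications removed",
            "missing-value rate per field below 2%",
            "labels spot-checked against case files",
        ],
    )
    print(record)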

Be accountable

Responsibility

  • Who is accountable at each stage of the AI system’s design and deployment, from the initial phase of defining outcomes to the concluding phase of providing explanations?
  • What are the mechanisms by which they will be held accountable?
  • How have you made design and implementation processes traceable and auditable across the entire project? (A sketch of a simple audit record follows this list.)
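
For illustration, traceability can start with an append-only record that ties each decision to a model version and an accountable owner. The sketch below is hypothetical; the fields are examples, not a format required by this guidance.

    # An illustrative per-decision audit record; the fields are examples,
    # not a format required by this guidance.
    import json
    from datetime import datetime, timezone

    def log_decision(decision_id, model_version, outcome, owner, path="audit_log.jsonl"):
        entry = {
            "decision_id": decision_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,  # ties the decision to a specific model build
            "outcome": outcome,
            "accountable_owner": owner,      # the designated human point of contact
        }
        with open(path, "a") as f:           # append-only, so the trail is auditable
            f.write(json.dumps(entry) + "\n")

    log_decision("APP-1042", "credit-model-v3.2", "referred to human review", "lending-ops")
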
Consider context

See Task 1 of ‘Explaining AI in practice’ for more information on how context matters when choosing which explanation type to use, and which AI model.

See the section above on contextual factors to see how these can help you choose which explanation types to prioritise in presenting your explanation to the decision recipient.

Reflect on impacts

Fairness

  • Do the AI system’s outputs have discriminatory effects?
  • Have you sufficiently integrated the objectives of preventing discrimination and of mitigating bias into the design and implementation of the system?
  • Have you incorporated formal criteria of fairness that determine the distribution of outcomes into the system, and made these explicit to decision recipients in advance? (One such criterion is sketched after this list.)
  • Has the model prevented discriminatory harm?
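
As one illustration of a formal fairness criterion, the sketch below compares favourable-outcome rates across two groups (demographic parity). The outcomes and group labels are hypothetical, and demographic parity is only one of several criteria you might adopt and disclose.

    # An illustrative check of one formal fairness criterion: demographic
    # parity, i.e. comparing favourable-outcome rates across groups.
    # The outcomes and group labels below are hypothetical.
    import numpy as np

    outcomes = np.array([1, 1, 1, 1, 0, 1, 0, 0])  # 1 = favourable decision
    group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

    rates = {g: outcomes[group == g].mean() for g in np.unique(group)}
    print("selection rate by group:", rates)
    print("demographic parity difference:", abs(rates["A"] - rates["B"]))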

Safety and performance

  • Is the AI system safe and technically sustainable when operating in practice?
  • Is the system’s operational integrity worthy of public trust?
  • Have you designed, verified, and validated the model in a way that sufficiently ensures that it is secure, accurate, reliable, and robust?
  • Have you taken sufficient measures to ensure that the system dependably operates in accordance with its designers’ expectations when confronted with unexpected changes, anomalies, and perturbations? (A simple stability check is sketched after this list.)
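
For illustration, one simple validation step is to check how often predictions change when inputs are slightly perturbed. The sketch below uses a hypothetical model, data and noise scale; real verification and validation would be far more extensive.

    # An illustrative stability check: how often do predictions flip when
    # inputs are slightly perturbed? Model, data and noise scale are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    model = LogisticRegression().fit(X, y)

    X_test = rng.normal(size=(50, 3))
    baseline = model.predict(X_test)
    flip_rates = []
    for _ in range(100):
        noise = rng.normal(scale=0.05, size=X_test.shape)  # small perturbation
        flip_rates.append((model.predict(X_test + noise) != baseline).mean())

    print(f"mean prediction flip rate under perturbation: {np.mean(flip_rates):.2%}")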

Impact

  • Have you sufficiently considered impacts on the wellbeing of affected individuals and communities from start to finish of the AI model’s design and deployment?