Guidance on AI and data protection

This guidance was updated on 15 March 2023.

The Guidance on AI and Data Protection has been updated after requests from UK industry to clarify requirements for fairness in AI. It also delivers on a key ICO25 commitment, which is to help organisations adopt new technologies while protecting people and vulnerable groups.

This update supports the UK government’s vision of a pro-innovation approach to AI regulation and, more specifically, its intention to embed considerations of fairness into AI.

We continue to engage with the UK government, along with our partners within the Digital Regulation Cooperation Forum (DRCF), on its broader proposals on regulatory reform.

The ICO supports the government’s mission to ensure that the UK’s regulatory regime keeps pace with and responds to new challenges and opportunities presented by AI. We look forward to supporting the implementation of its forthcoming White Paper on AI Regulation.

We will continue to ensure the ICO’s AI guidance is user-friendly, reduces the burden of compliance for organisations, and reflects upcoming changes in AI regulation and data protection.

For ease of use, and given the foundational nature of data protection principles, we decided to restructure the guidance, moving some of the existing content into new chapters. Acknowledging the fast pace of technological development, the ICO expects that further updates will be required in the future, so using data protection principles as the core of this expanding work makes editorial and operational sense.

We have outlined below where the new content resides, so past readers of the Guidance on AI and Data Protection can navigate the changes at speed.

What are the accountability and governance implications of AI?

Change overview: This is an old chapter with new additions.

What you need to know:

How do we ensure transparency in AI?

Change overview: This is a new chapter with new content.

What you need to know:

  • We have created a standalone chapter with new high-level content on the transparency principle as it applies to AI. The main guidance on transparency and explainability resides within our existing Explaining Decisions Made with AI product.

How do we ensure lawfulness in AI?

Change overview: This is a new chapter with old content, moved from the previous chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’, plus two new sections.

What you need to know:

What do we need to know about accuracy and statistical accuracy?

Change overview: This is a new chapter with old content.

What you need to know:

  • Following the restructuring under the data protection principles, the statistical accuracy content, which used to reside in the chapter ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’, has moved into a new chapter focused on the accuracy principle. Statistical accuracy remains key to fairness, but we felt it was more appropriate to host it under the chapter dedicated to the accuracy principle.

Fairness in AI

Change overview: This is a new chapter with new and old content.

What you need to know:

The old content was extracted from the former chapter titled ‘What do we need to do to ensure lawfulness, fairness, and transparency in AI systems?’. The new content includes information on:

  • Data protection’s approach to fairness, how it applies to AI and a non-exhaustive list of legal provisions to consider.
  • The difference between fairness, algorithmic fairness, bias and discrimination.
  • High-level considerations when evaluating fairness and the inherent trade-offs.
  • Processing personal data for bias mitigation.
  • Technical approaches to mitigate algorithmic bias.
  • How solely automated decision-making and the relevant safeguards are linked to fairness, and key questions to ask when considering Article 22 of the UK GDPR.

Annex A: Fairness in the AI lifecycle

Change overview: This is a new chapter with new content.

  • This section covers data protection fairness considerations across the AI lifecycle, from problem formulation to decommissioning. It sets out why fundamental aspects of building AI, such as underlying assumptions, the abstractions used to model a problem, the selection of target variables, or the tendency to over-rely on quantifiable proxies, may have an impact on fairness. This chapter also explains the different sources of bias that can lead to unfairness, and possible mitigation measures. Technical terms are explained in the updated glossary.

Glossary

Change overview: This is an old chapter with old and new content.

What you need to know:

New additions include definitions of:

  • Affinity groups, algorithmic fairness, algorithmic fairness constraints, bias mitigation algorithm, causality, confidence interval, correlation, cost function, dataset labellers, decision, construct and observed space, decision boundary, decision tree, downstream effects, ground truth, inductive bias, in-processing, hyperparameters, multi-criteria optimisation, objective function, post-processing bias mitigation, regularisation, redundant encodings, reward function, use case, target variable, variance.