How do we ensure transparency in AI?

Latest updates

15 March 2023 - This is a new chapter with new high-level content on the transparency principle as it applies to AI. The main guidance on transparency and explainability resides within our existing Explaining decisions made with AI product.

What are our transparency obligations towards people?

To comply with the transparency principle, you must be transparent about how you process personal data in an AI system.

Before you begin your processing, you must consider your transparency obligations towards the individuals whose personal data you plan to process. The core issues about AI and the transparency principle are addressed in the ‘Explaining decisions made with AI’ guidance, so they are not discussed in detail here.

At a high level, you need to include the following in the privacy information:

  • your purposes for processing their personal data;
  • your retention periods for that personal data; and
  • who you will share it with.

If you collect data directly from individuals, you must provide that privacy information to them at the time you collect it, before you use it to train a model or apply that model to those individuals. If you collect it from other sources, you must provide this information within a reasonable period, and no later than one month; you must provide it earlier if you contact that person or disclose their data to someone else.

We have published guidance, Explaining decisions made with AI, that sets out how you can let people know how you use their data in your AI lifecycle. It offers good practice that can help you comply with the transparency principle.