
How do we use profiling tools fairly and transparently?


How do we ensure our use of profiling tools is fair? 

The fairness principle in data protection law means you must only process personal information in ways that:

  • people would reasonably expect; and
  • could not have an unjustified adverse effect on them.

Fairness in a data protection context is about how you go about processing personal information, as well as the outcome of the processing (ie the impact on people).

When carrying out profiling in your trust and safety systems, you must not process people’s personal information in ways that they might find unexpected or misleading.

Fairness is closely linked to transparency. Explaining to your users how you use their personal information is part of ensuring your processing is fair. (See the section on How should we tell people what we’re doing? for more information.) 

If you use profiling tools that predict or infer data about users’ characteristics, behaviours or other attributes, you must ensure your systems are sufficiently statistically accurate and avoid discrimination. (See the section on What about statistical accuracy? for more information.) Your profiling tools are less likely to be fair if they consistently make the wrong judgments about people or lead to unjust discrimination.

This does not mean that every inference made by your profiling tools has to be correct. But you do need to factor in the possibility of incorrect inferences, the effect these may have on any moderation decisions you take on the basis of your profiling, and the ultimate impact those decisions may have on users.

We recognise that AI-based tools often involve making trade-offs related to accuracy (eg between precision and recall). You should ensure that the trade-offs you make are appropriate, taking into account the outcomes of your profiling and its consequences for users. (See the section on How do we ensure the accuracy of personal information in our profiling tools? for further discussion.)
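
As a purely illustrative sketch of this kind of trade-off (the scores, labels and threshold values below are made-up assumptions, not figures from this guidance), the following Python snippet shows how moving a profiling classifier’s decision threshold shifts the balance between precision and recall:

```python
# Illustrative sketch only: how a decision threshold in a hypothetical profiling
# classifier trades precision against recall. Scores and labels are invented.

def precision_recall(scores, labels, threshold):
    """Precision and recall when flagging users whose score is at or above the threshold."""
    flagged = [label for score, label in zip(scores, labels) if score >= threshold]
    true_positives = sum(flagged)                   # flagged users who really are harmful
    false_negatives = sum(labels) - true_positives  # harmful users the tool missed
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical model scores (0-1) and ground-truth labels (1 = harmful behaviour).
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

for threshold in (0.75, 0.50, 0.25):
    p, r = precision_recall(scores, labels, threshold)
    print(f"threshold={threshold:.2f}  precision={p:.2f}  recall={r:.2f}")
```

In this sketch, lowering the threshold increases recall (fewer harmful users are missed) but reduces precision (more legitimate users are wrongly flagged); which balance is appropriate depends on the consequences each type of error has for users.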

You should regularly review your profiling tools to minimise the risk of unfair outcomes for users and check your tools are delivering the intended benefits. 

Part of processing users’ personal information fairly also includes providing them with a means of redress where appropriate, for example if you have taken a decision about them based on profiling that they think is incorrect. Depending on the purpose of the profiling tool and the impact of the profiling on users, you should provide a mechanism for users to contest decisions made about them based on your profiling tools.

This will also support you in upholding a user’s right to contest decisions that fall under the scope of article 22. Article 22 applies if you are making solely automated decisions about users based on profiling that have a legal or similarly significant effect on them. Where this is the case, you must provide a way for users to contest decisions made about them. (See the section on Does article 22 apply to our use of profiling tools? for more information.)

Putting in place appeals mechanisms can also support you in upholding your users’ right to rectification. (See the section on What data protection rights do people have? for more information.) 
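
As a minimal sketch of what such a redress mechanism might record (the data structure, field names and workflow below are illustrative assumptions, not anything prescribed by data protection law), an appeal against a profiling-based decision could capture the decision being challenged, the user’s grounds and the outcome of a human review:

```python
# Illustrative sketch of a minimal appeals record for profiling-based decisions.
# Field names and workflow are assumptions for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ProfilingDecisionAppeal:
    user_id: str
    decision_id: str                     # the moderation decision being contested
    grounds: str                         # why the user thinks the decision is incorrect
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    reviewed_by: Optional[str] = None    # human reviewer, supporting article 22 safeguards
    outcome: Optional[str] = None        # eg "upheld" or "overturned"

def resolve_appeal(appeal: ProfilingDecisionAppeal, reviewer: str, outcome: str) -> ProfilingDecisionAppeal:
    """Record the result of a human review of a contested profiling decision."""
    appeal.reviewed_by = reviewer
    appeal.outcome = outcome
    return appeal
```

If the review finds that the underlying profile information was inaccurate, correcting it also supports the user’s right to rectification.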

Our guidance on AI and data protection provides further information about fairness, bias and discrimination in AI.   

How should we tell people what we’re doing?

Data protection law requires you to inform people that you are processing their personal information. 

This is particularly important when you use profiling tools, because users may not appreciate the scale of the information you hold and analyse about them, or the insights you can draw from that information.

Transparency is closely linked to fairness. You are unlikely to be treating users fairly if you do not tell them about how you use their personal information.

Note that this section focuses on your transparency requirements where you are collecting personal information directly from the user (outlined in article 13 of the UK GDPR). This is likely to be the case for your use of personal information in your trust and safety profiling tools. However, if you use information that is not obtained directly from the user (eg information about a user you obtained from another service or organisation), your transparency requirements differ slightly. (You can find more information on your transparency requirements under data protection law in our guidance on the right to be informed.)

If you are using profiling tools in your trust and safety systems, article 13 of the UK GDPR means you must tell users about the data processing these tools involve, including:

  • that you process their personal information in your profiling tools;
  • why you are processing their personal information in your tools;
  • what lawful basis you are relying on;
  • whether you keep personal information used or generated by your systems and for how long;
  • whether you share any of their personal information with other organisations (such as third-party providers); 
  • information about the user’s data protection rights; and
  • information about the existence of automated decisions that have a legal or similarly significant effect on users, including those based on profiling. 

You must provide this information:

  • when you first start using personal information in your tools; and
  • in a way that is accessible and easy to understand (this is particularly important if your service is likely to be accessed by children - see our children’s code for more information).
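
As an illustrative aid only (the structure and field names below are assumptions, not terms taken from the UK GDPR), a team might represent these disclosure points as a simple checklist that can be reviewed whenever the tool or its privacy notice changes:

```python
# Illustrative checklist of the article 13 information listed above for a
# profiling tool. Field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ProfilingToolPrivacyInfo:
    personal_information_used: str   # that, and what, personal information the tool processes
    purpose: str                     # why the information is processed
    lawful_basis: str                # eg "legitimate interests"
    retention: str                   # whether and how long inputs and outputs are kept
    sharing: str                     # any sharing with other organisations, eg third-party providers
    rights_information: str          # the user's data protection rights
    automated_decision_making: str   # any solely automated decisions with legal or similarly significant effects

    def missing(self) -> list[str]:
        """Return any disclosure points that are still empty."""
        return [name for name, value in vars(self).items() if not value.strip()]
```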

The information outlined above is just a starting point. Given that profiling tools are often highly intrusive in nature, you should consider whether it is appropriate for you to provide further transparency information about the processing involved in your tools, beyond the minimum requirements set out in article 13 of the UK GDPR. This helps you to align your processing with the principles of transparency and fairness.

For example, you could consider whether you tell users about the types of decisions you make with your tools, or what types of automated tools you are using.

If you are making solely automated decisions about users based on profiling and these decisions have a legal or similarly significant effect on them, you must provide additional transparency information related to those decisions. (See the section on Does article 22 apply to our use of profiling tools? for more information.) 

How do we consider what additional transparency may be appropriate? 

Deciding what additional transparency is appropriate for profiling tools in a trust and safety setting requires careful consideration. This is because you are likely to need to strike a balance between the level of additional transparency you provide to users about your profiling tools and the risk of malicious users circumventing your tools and evading detection.

You should take a proportionate approach to considering the additional transparency you provide, taking into account your specific circumstances and the context you are using profiling tools in. As part of this, you should consider: 

  • the nature and purpose of your processing, including whether people are likely to expect it or find it intrusive;
  • the types of harms that could occur as a result of your processing, as well as any benefits; and
  • the impacts and risks associated with the transparency information you provide in the trust and safety context, including any risks of malicious users circumventing your tools.  

How do we provide our privacy information? 

You should use a variety of techniques to communicate privacy information.

For example, you could provide:

  • information on a dedicated area of your website or app;
  • information to users when they sign up for your service, to help them make an informed decision about whether they want to use your service;
  • a dashboard where users can access information about how you use their information and can manage access to it;
  • reminders to users about how your tools use their personal information at different stages of the user journey, such as when users access certain features of your service;
  • pop-up notices to users when they update their details on your service; and
  • notifications to users to explain any moderation action you may have taken based on profiling, and why.
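
As a purely illustrative sketch of the last point (the wording, fields and settings path are assumptions, not requirements of data protection law), a notification explaining profiling-based moderation action might be generated from a simple template:

```python
# Illustrative sketch of a notification explaining a profiling-based moderation
# decision. The template wording and fields are assumptions for illustration only.
def moderation_notice(action: str, reason: str, how_to_appeal: str) -> str:
    """Build a plain-language notification explaining a profiling-based decision."""
    return (
        f"We have {action} because our automated systems detected {reason}. "
        f"This decision was informed by analysis of your activity on the service. "
        f"If you think we have made a mistake, you can challenge it: {how_to_appeal}"
    )

print(moderation_notice(
    action="temporarily restricted your account",
    reason="activity that matches patterns of scamming behaviour",
    how_to_appeal="go to Settings, then Account status, then Request a review",
))
```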

You should review your privacy information regularly. If you intend to further process users’ personal information for a new purpose, you must update your privacy information to inform users of this before starting any new processing. (See the section on How do we define our purposes? for more information.)

Example

A video gaming service implements a bot detection tool that carries out profiling. The tool analyses users’ content and activity on the service. Users identified as bots are banned from the service.

The service does not tell users about the bot detection tool or its purpose. It does not tell them that their personal information is processed in the tool, or about the tool’s potential consequences.

In this case, the use of this tool is unlikely to comply with the fairness and transparency principles of data protection law. This is because the service is not informing users about its collection and use of their personal information, and is processing users’ information in a way that they would not reasonably expect and that may have unjustified adverse effects on them.

Compliance with the fairness and transparency principles is more likely if the service informs users about the tool and the personal information processing it involves. For example, it might do this by updating its privacy policy to provide, among other things:

  • information about the purpose of the tool;
  • the consequences of users being flagged as a bot;
  • the lawful basis it is relying on for the processing; and
  • the data protection rights users have in relation to the processing.

Example

A social media company deploys a scamming behaviour detection tool. 

In its privacy notice, the service explains that it uses a profiling tool to detect scamming behaviours and that the tool uses users’ personal information as input data. 

The service also tells users: 

  • what lawful basis it is relying on for the processing; 
  • how long it keeps the personal information used and generated by the tool; and
  • how users can exercise their data protection rights. 

The service acknowledges the intrusive nature of the processing undertaken by the tool and considers what additional transparency might be appropriate to provide, beyond the minimum requirements outlined in data protection law. 

To inform this decision, the service considers: 

  • its existing trust and safety measures;
  • the expectations of its users;
  • the benefits and harms of the processing undertaken by its tool; and
  • the risks associated with providing additional transparency information. 

Given its circumstances, the service decides to provide information about the decisions it makes about users with its scamming detection tool, and about the type of automated technology the tool uses.

In this case, the service is more likely to meet the transparency and fairness requirements of data protection law.