Does article 22 apply to our use of profiling tools?
Because the Data (Use and Access) Act came into law on 19 June 2025, this guidance is under review and may be subject to change. The Plans for new and updated guidance page sets out which guidance will be updated and when.
In detail
- Why is this important?
- When are we taking solely automated decisions about users?
- When does our use of profiling have a legal or similarly significant effect?
- What do we need to do when article 22 applies to our use of profiling?
- What about special category information?
- What if article 22 does not apply?
Why is this important?
The UK GDPR restricts the circumstances in which you can take solely automated decisions that have a legal or similarly significant effect on users, including those based on profiling.
You must determine whether your profiling tools involve these types of decisions.
When are we taking solely automated decisions about users?
‘Solely automated decisions’ are those taken without any meaningful human involvement.
For example, using a profiling tool to infer things about a user’s behaviour on your service and automatically carrying out a moderation action on that user based on those inferences.
Example
A social media service deploys a behaviour identification tool that detects whether a user is likely to be a bot and automatically disables the account of any user that scores above a certain confidence threshold.
The system decides whether the user should be classified as a bot account and, if so, disables the account.
As there is no human involvement, this is solely automated.
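To make the example concrete, here is a minimal, hypothetical Python sketch of that kind of workflow. The function names, the scoring logic and the threshold are illustrative assumptions rather than a description of any real tool; the point is that nothing sits between the profiling score and the account action.

```python
# Hypothetical sketch of a solely automated decision: a profiling score
# leads directly to an account action with no human involvement.

BOT_CONFIDENCE_THRESHOLD = 0.9  # illustrative value only


def score_bot_likelihood(account_activity: dict) -> float:
    """Stand-in for a profiling model that scores how bot-like an account is."""
    # A real system would use a trained classifier; this placeholder just
    # averages a few crude signals.
    signals = [
        account_activity.get("posts_per_minute", 0) > 10,
        account_activity.get("account_age_days", 365) < 1,
        not account_activity.get("completed_captcha", True),
    ]
    return sum(signals) / len(signals)


def disable_account(account_id: str) -> None:
    """Stand-in for the moderation action taken against the account."""
    print(f"Account {account_id} disabled automatically")


def automated_bot_check(account_id: str, account_activity: dict) -> None:
    score = score_bot_likelihood(account_activity)
    if score >= BOT_CONFIDENCE_THRESHOLD:
        # No person reviews the score before the action is taken, so this is
        # a solely automated decision about the user.
        disable_account(account_id)


automated_bot_check("u_123", {"posts_per_minute": 40,
                              "account_age_days": 0,
                              "completed_captcha": False})
```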
Not all decisions your trust and safety systems make are solely automated. For example, a decision is not solely automated if a human meaningfully weighs up a combination of outputs (including from profiling tools and content moderation systems) before taking an action.
You should:
- map out the workflows across your trust and safety tools and establish where they involve decision-making processes; and
- understand which of these involve solely automated decisions (see the illustrative sketch below).
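One way to approach this mapping, sketched below in hypothetical Python, is to record each decision point alongside its inputs, the action it triggers and whether a person meaningfully reviews the inputs before the action is taken. The structure and the example decision points are illustrative assumptions only.

```python
# Hypothetical sketch of mapping decision points across trust and safety
# workflows and recording whether each involves meaningful human review.

from dataclasses import dataclass
from typing import List


@dataclass
class DecisionPoint:
    name: str
    inputs: List[str]     # outputs of profiling tools, classifiers, reports, etc.
    action: str           # what happens to the user or their content
    human_review: bool    # does a person meaningfully weigh up the inputs?

    @property
    def solely_automated(self) -> bool:
        return not self.human_review


workflow_map = [
    DecisionPoint(
        name="bot account check",
        inputs=["behaviour profiling score"],
        action="disable account",
        human_review=False,   # the score leads straight to the action
    ),
    DecisionPoint(
        name="harassment report triage",
        inputs=["profiling score", "content classifier output", "user report"],
        action="warn or suspend user",
        human_review=True,    # a moderator weighs up the combined outputs
    ),
]

for point in workflow_map:
    status = "solely automated" if point.solely_automated else "human in the loop"
    print(f"{point.name}: {status} -> {point.action}")
```

A record like this can make it easier to show which decision points may fall within article 22 and which do not.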
When does our use of profiling have a legal or similarly significant effect?
You must determine whether the solely automated decisions made by your profiling tools will have a legal or similarly significant effect on your users.
A ‘legal effect’ is something that affects someone’s legal status or their legal rights. A ‘similarly significant effect’ is something that has an equivalent impact on someone’s circumstances, behaviour or choices.
Examples of legal and similarly significant effects include decisions that:
- affect someone’s financial circumstances;
- lead to someone being excluded or discriminated against; or
- significantly impact a user’s behaviour or choices.
The impact your solely automated decisions have can depend on:
- the nature of your service;
- the user you make a decision about; and
- how they use your service.
For some users, an automated decision such as a service ban may not have a legal or similarly significant effect. For others it may, for example if it results in financial loss or discrimination. Understanding the full context in which automated decisions take place helps you identify whether article 22 applies.
While the context is key to determining the impact on users, there are certain actions that generally have less of an impact on users than others. For example, profiling tools that you use to ‘nudge’ users or prompt them to re-consider their behaviour are less likely to have a legal or similarly significant effect than banning a user from your service or reporting them to a law enforcement organisation.
See our guidance on automated decision-making and profiling for more information about what types of decision have a legal or similarly significant effect. Where your decisions have this effect, article 22 applies. This means you must consider which exception will apply, as otherwise you cannot carry out the decisions lawfully.
What do we need to do when article 22 applies to our use of profiling?
If article 22 applies to your profiling tools, you must only undertake the processing if you can identify an exception that applies to you. You must also provide users with transparency about the decisions you make, and apply appropriate safeguards.
The following sections discuss these requirements in more detail.
Consider what exceptions apply
Article 22 means you must only take solely automated decisions that have legal or similarly significant effects if they are:
- authorised by domestic law;
- necessary for a contract; or
- based on a person’s explicit consent.
When the decision is authorised by domestic law
This exception applies where domestic law (including the OSA and accompanying codes of practice) authorises solely automated decision-making with legal or similarly significant effects. It is your responsibility to determine whether this exception applies.
The use of automated decision-making does not need to be explicitly referenced in a piece of legislation for it to be authorised by law. If you have a statutory or common law power to do something and solely automated decision-making is the most appropriate way to do so, you may be able to rely on the authorised by law exception. However, you must be able to show that it’s reasonable and proportionate to use automated decision-making in your circumstances.
You should document and justify which part of the legislation authorises your use of solely automated decision-making.
If you're carrying out solely automated decision-making under this exception, you must also comply with the requirements of section 14 of the DPA 2018. This means you must:
- tell people as soon as reasonably practicable that they have been subject to a solely automated decision; and
- be prepared to act on any request they may make for you to reconsider the decision, or take a new one that's not solely automated.
If someone does request that you reconsider, you must also:
- consider the request and any other relevant information the person provides;
- comply with the request; and
- inform the person, in writing, about the steps you've taken and the outcome.
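As an illustration of the notification and reconsideration steps above, the hypothetical Python sketch below records a solely automated decision, notifies the person, and routes a reconsideration request to a human reviewer before informing them in writing of the outcome. All names and the recorded outcome are assumptions for illustration, not a statement of what section 14 requires in any particular system.

```python
# Hypothetical sketch of the section 14 DPA 2018 steps: notify the person
# that a solely automated decision has been taken, then handle a request
# to reconsider it with human involvement and confirm the outcome in writing.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AutomatedDecision:
    user_id: str
    decision: str                                   # e.g. "account disabled"
    user_notified: bool = False
    reconsideration_notes: List[str] = field(default_factory=list)
    outcome: Optional[str] = None


def notify_user(decision: AutomatedDecision) -> None:
    """Tell the person, as soon as reasonably practicable, that they have
    been subject to a solely automated decision."""
    print(f"Notice to {decision.user_id}: {decision.decision} (solely automated)")
    decision.user_notified = True


def handle_reconsideration_request(decision: AutomatedDecision,
                                   user_submission: str,
                                   human_reviewer: str) -> None:
    """Consider the request and the information the person provides, involve
    a human reviewer, and inform the person in writing of the outcome."""
    decision.reconsideration_notes.append(f"User's view: {user_submission}")
    decision.reconsideration_notes.append(f"Reviewed by: {human_reviewer}")
    # Illustrative outcome only; the reviewer's conclusion could equally
    # confirm the original decision.
    decision.outcome = "original decision reversed after human review"
    print(f"Written outcome to {decision.user_id}: {decision.outcome}")


record = AutomatedDecision(user_id="u_123", decision="account disabled")
notify_user(record)
handle_reconsideration_request(record, "My account is run by a person, not a bot",
                               "reviewer_42")
```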
When the decision is necessary for a contract
This exception may apply if you’re carrying out solely automated decision-making that’s necessary to perform the contract between you and your users. For example, if you’re using solely automated decision-making to enforce the terms of service that your users sign up for.
You must ensure that your processing is objectively necessary for the performance of the contract. This doesn’t mean it must be absolutely essential, but it must be more than just useful.
When the decision is based on someone’s explicit consent
This exception applies if you have explicit consent from someone to carry out solely automated decision-making based on their personal information.
It is unlikely that this exception applies to the profiling activities you undertake in your trust and safety systems because consent is unlikely to be freely given in this context.
For example, you’re unlikely to be offering people a genuine free choice about whether you undertake solely automated decision-making about them in your profiling tools if refusing consent means that they are not permitted to access your service.
In addition, it may be impractical for you to gather explicit consent from users.
Provide users with transparency about decisions
If your profiling involves solely automated decision-making with legal or similarly significant effects, you must proactively tell your users about this at the point at which you collect their personal information. You must:
- say that you’re making these types of decisions;
- give them meaningful information about the logic involved in any decisions your system makes; and
- tell them about the significance and envisaged consequences the decisions may have.
Although the processing undertaken to reach the decision may be complex, you must ensure the information you provide to users is meaningful. You should use simple, understandable terms to explain the rationale behind your decisions.
The OSA requires regulated services to set out in their terms of service if they are using ‘proactive technology’ to comply with their online safety duties. Services are also required to explain the kind of proactive technology they use, when they use it, and how it works. Complying with this duty may help you provide the transparency to users that UK GDPR requires. However, you must also provide the necessary transparency for data protection law. For example, where article 22 applies, you must provide users with information about the significance and envisaged consequences of the solely automated decisions you take about them.
Implement appropriate safeguards
If you're relying on the authorised by law exception, you must implement the safeguards set out in the earlier section on the ‘authorised by law’ exception.
If you’re relying on the contract or explicit consent exceptions, you must implement appropriate safeguards to protect people’s rights, freedoms and legitimate interests. At a minimum, you must ensure this includes enabling people to:
- obtain human intervention;
- express their point of view; and
- contest the decision.
Under the OSA, all user-to-user services have a duty to operate complaints processes, including for users who have generated, uploaded or shared content on the service. Among other things, users should be able to complain if:
- their content is taken down (or if they are given a warning, suspended or banned from using the service) on the basis that their content is illegal; and
- regardless of whether the service deems the content to be illegal, the use of proactive technology results in content being taken down, restricted or deprioritised in a way that is inconsistent with the service’s published terms of service.
There are also complaints obligations for services likely to be accessed by children and category 1 services.
Where you are complying with your OSA complaints duties, this may help you provide the safeguards that article 22 of the UK GDPR requires (eg the right for users to contest the decisions). However, you must make sure that you implement all the appropriate safeguards for data protection law.
What about special category information?
You must also consider whether your solely automated decision-making is likely to involve special category information.
You must not base your decisions on special category information unless you:
- have explicit consent; or
- can meet the substantial public interest condition in article 9.
In addition, you must implement safeguards to protect users’ rights, freedoms and legitimate interests.
As noted above, you are unlikely to seek explicit consent for the profiling you undertake in your trust and safety systems.
This means that you must be able to meet the substantial public interest condition if you are processing special category information under article 22, based on either the necessary for contract or authorised by domestic law exceptions. (See the section on What if our profiling involves special category information? for more information about the substantial public interest condition.)
What if article 22 does not apply?
If article 22 doesn’t apply to your profiling tools, you can continue to carry out profiling and automated decision-making. However, you must still comply with data protection law and ensure that users can exercise their rights.
This means you must identify a lawful basis for your processing and ensure your use of data is fair and transparent. (See the sections on How do we use profiling tools lawfully? and How do we use profiling tools fairly and transparently? for more information.)
Depending on your circumstances, you could tell people about any automated decision-making your content moderation involves, even if it has meaningful human involvement.