How do we demonstrate our compliance with our data protection obligations?

At a glance

  • You must comply with data protection law when you use biometric data because it is a type of personal information.
  • You must take a data protection by design approach when putting in place biometric recognition systems.
  • You must complete a DPIA before you use a biometric recognition system.
  • You must assess what impact your use of a biometric recognition system will have on the people whose information it will process (and wider society in some cases).
  • You must be clear when you are a controller for biometric data, and if you are a joint controller with others.
  • You must have a written contract in place with all processors.

In detail

You must comply with data protection law when you use biometric data as it is a type of personal information. This means you must demonstrate how you comply with the data protection principles.

If you are using a biometric recognition system, you are processing biometric data to uniquely identify someone. This means you are processing special category biometric data.

Processing special category information requires greater care and consideration because it is more sensitive or private than other kinds of personal information. This requirement is reflected in data protection law, which prohibits you from using special category information without a valid condition for processing it.

Who is the controller for our biometric recognition system?

Your use of a biometric system may involve several different organisations.

You must be clear about:

  • when you are a controller (with the system provider as your processor); or
  • whether at any stage you might be a joint controller with another organisation.

To assess whether you are a controller or a processor, you must consider who is determining the purposes and means of the specific processing.

If you and another organisation are joint controllers, both of you must ensure that people are able to exercise their rights and understand what you are doing with their information.

Whenever you, as a controller, use a processor to process personal information on your behalf, the processor must only process the personal information in line with your written instruction.

You must have a contract or other legal act in place. You must specify in your contract that the processor should only use the collected biometric data under your instruction.

If the processor uses this biometric data outside your instruction, it is using this information for its own purposes and is therefore considered a controller for these purposes.

If you use a provider that wants to use personal information for their own purposes, then this processing activity won't be something they're doing on your behalf or under your instruction. This means that they are likely to be a controller for this processing.

If you decide to share personal information with your provider, then you must consider and justify your reasons. You must also be clear about what data you intend to share with your provider, and what status this has under data protection law.

As a provider, you must also ensure that your own processing complies with the law.

Data protection by design and default

Your data protection obligations start at the point you decide to use personal information, before you begin to process it.

You must adopt a data protection by design and default approach. This means you must consider data protection and privacy issues upfront at the design stage and throughout the lifecycle of your system.

Data protection by design and default means that you must put in place appropriate technical and organisational measures to implement the data protection principles effectively and safeguard people's rights.

You must only use providers that can give sufficient guarantees of the measures they will use for data protection by design. For example, some biometric recognition systems are designed in ways that reduce the risks associated with personal data breaches of biometric data. See Risks resulting from personal data breaches for more information.

Your choice of biometric recognition system can help to demonstrate your compliance with the data protection by design and default principle. You should document your choice and the rationale for it in a DPIA.

When thinking about how your plans to use biometric data will comply with data protection law, you must ask yourself the following questions at the initial planning stage:

  • Is it necessary for us to use biometric data (ie can we prove that using biometric data is a targeted and proportionate way to meet our needs)?
  • What alternatives to biometric data have we considered?
  • Could any of these reasonably meet our needs in a less intrusive way?

Do we need to do a DPIA?

You must consider whether your processing is likely to result in high risk to people’s rights and freedoms. If you identify such risks, you must complete a DPIA.

A DPIA will help you to work out the likelihood and impact of those risks, and to demonstrate your compliance with data protection law.

We have published a list of examples of high-risk processing operations.

It is highly likely that you will trigger the requirement to complete a DPIA if you are using biometric recognition systems.

This is because data protection law says that you must do a DPIA if you plan to:

  • process special category information on a large scale; or
  • undertake systematic monitoring of a publicly accessible area on a large scale.

Most uses of biometric recognition systems meet at least one of these criteria.

And, even if you don’t use special category biometric data, you may assess that your proposal to use biometric data is still likely to result in high risk, due to the context and purpose.

If you do not do a DPIA, you must still consider the likelihood and potential impact of specific risks that may occur, and the potential for harm that may result.

To do this effectively, you should understand how your chosen system works and what its capabilities are. You may need specialist expertise, including from any providers you’re considering.

You must also understand the AI supply chain involved in the biometric system you are using or developing, including potential third parties, and consider what risks may be involved.

You should also consider whether the system you intend to use involves privacy enhancing technologies (PETs), or whether you can deploy these alongside it. PETs can support you in meeting your data protection obligations, for example by limiting the amount of personal information you use, or by providing appropriate technical and organisational measures to protect it.

See Can PETs help us comply with our data protection obligations? for more information.

If you are a provider of biometric systems, as part of your obligations to assist a controller, you should explain how your system works. You must also ensure that users understand the relevant data protection requirements associated with using your system. For example, how the system observes the security, data minimisation and storage limitation principles.

What risks to rights and freedoms should we consider?

In the context of biometric recognition, there are specific risks to people’s rights and freedoms that you should consider. This section is not intended to be a comprehensive guide to all of these risks. See the further reading box at the end of this section for more resources.

Risks resulting from personal data breaches

A personal data breach is a security incident that affects the confidentiality, integrity or availability of personal information. If you don’t address a data breach appropriately, it can lead to several kinds of harm, including harms resulting from someone losing control of their personal information, psychological harms and financial harms.

The severity of harms caused by biometric data breaches may be greater than with other types of personal information, due to its sensitive nature. Biometric data represents key features of a person’s physical identity that can’t easily be changed (eg facial features, eye shape, the sound of their voice). If biometric data is not appropriately protected, a breach can result in a loss of control of personal information that is effectively indefinite, depending on which characteristics are processed. See Can PETs help us comply with our data protection obligations? for further information about appropriate protection of biometric data.

In addition, the properties of biometric data mean that it can pose specific risks to people.

The first is linkage. Biometric data is a unique identifier. It can, in theory, act as a link across multiple databases on which people’s biometric data are stored. This could allow anyone with unauthorised access to someone’s biometric data to learn more about them, risking serious intrusion into their privacy and further loss of control of their personal information.

The second is reverse engineering. Biometric data can sometimes be reverse engineered to work out what the original biometric sample (and the person it belongs to) looks like. This process could be made easier if the original biometric samples (in addition to templates) are retained and also compromised.

You should assess the likelihood and potential impact of reverse engineering and linkage for your use case, and address these risks through system design and approaches like privacy-enhancing technologies.
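
One design approach of this kind is sometimes called ‘cancelable biometrics’: the biometric features are transformed with a revocable, user-specific key before storage, so a compromised template can be revoked and replaced rather than exposing the underlying characteristics. The sketch below illustrates the idea in Python using a seeded random projection; the feature vectors, dimensions and key handling are illustrative assumptions, not a production-grade template protection scheme.

```python
# Illustrative sketch of a "cancelable biometrics" transform: features
# are passed through a key-derived random projection and binarised
# before storage. If the stored template leaks, the key can be revoked
# and a new template issued from a fresh capture. All dimensions and
# vectors here are hypothetical.

import numpy as np

def protect(features: np.ndarray, key: int, out_bits: int = 32) -> np.ndarray:
    """Project features through a key-seeded random matrix and binarise."""
    rng = np.random.default_rng(key)
    projection = rng.standard_normal((out_bits, features.size))
    return (projection @ features > 0).astype(np.uint8)

# Enrolment: store only the protected template, never the raw features.
rng = np.random.default_rng(0)
enrolled = rng.standard_normal(128)   # stand-in for real biometric features
stored_template = protect(enrolled, key=1234)

# Verification: a fresh, slightly noisy capture of the same person,
# protected with the same key, agrees with the stored template on most bits.
probe = enrolled + 0.1 * rng.standard_normal(128)
agreement = (protect(probe, key=1234) == stored_template).mean()
print(f"bit agreement: {agreement:.2f}")
```

Because the stored template depends on both the biometric features and the key, linkage across databases is also harder: the same person enrolled with different keys produces unrelated templates.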

You must also decide how long it is necessary to keep biometric data and set clear retention periods. You should consider whether it is necessary to keep original biometric samples once you have generated templates. If you do decide to keep these samples, you must protect and store them appropriately.
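
As a simple sketch of how such decisions might be enforced in practice, the code below discards original samples once a template exists and deletes records whose retention period has elapsed. The record structure and the retention period are hypothetical; the appropriate period depends on your own necessity assessment.

```python
# Sketch: enforcing hypothetical retention rules for biometric records.
# The record structure and the retention period are illustrative only.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

TEMPLATE_RETENTION = timedelta(days=365)  # hypothetical; must be justified

@dataclass
class BiometricRecord:
    template: bytes
    enrolled_at: datetime
    raw_sample: bytes | None = None  # original sample, if kept at all

def apply_retention(records: list[BiometricRecord]) -> list[BiometricRecord]:
    now = datetime.now(timezone.utc)
    for record in records:
        # Discard original samples once a template has been generated,
        # unless there is a documented justification for keeping them.
        record.raw_sample = None
    # Delete whole records once their retention period has elapsed.
    return [r for r in records if now - r.enrolled_at < TEMPLATE_RETENTION]
```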

Biometric data breaches can lead to identity theft that is very hard to detect as fraudulent. This may result in financial harms, as well as continued fraudulent access to services or further sensitive information – anything that is protected by someone’s biometric data.

Risks resulting from biometric false acceptance or rejection

Biometric recognition systems make a statistically informed judgement about whether a new input (a biometric probe) and a biometric reference are sufficiently similar to each other. The degree of similarity determines how likely the system is to suggest that the two belong to the same person.

See How do biometric recognition systems work? for more information about why biometric recognition relies on probability.

The fact that the comparison process relies on probability introduces the potential for the following types of errors:

  • false biometric acceptance is a false positive (or type I) error specific to biometric recognition systems. The rate at which these errors occur is sometimes described as the false match rate (FMR). False biometric acceptance occurs when a system incorrectly observes a case as positive when it should be negative (ie a match is suggested, but the biometric probe and biometric reference do not belong to the same person).
  • false biometric rejection is a false negative (or type II) error specific to biometric recognition systems. The rate at which these errors occur is sometimes described as the false non-match rate (FNMR). False biometric rejection occurs when a system incorrectly observes a case as negative when it should be positive (ie the biometric probe and the biometric reference do belong to the same person, but the system did not recognise them).
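
To make these definitions concrete, the sketch below applies a decision threshold to similarity scores and computes the FMR and FNMR over a set of labelled comparison trials. The scores and thresholds are made-up illustrations; in practice the scores come from the system’s own comparison algorithm.

```python
# Minimal sketch: computing the FMR and FNMR from labelled comparison
# trials. Scores and thresholds are made-up illustrations; real systems
# obtain similarity scores from their own comparison algorithm.

def evaluate(trials, threshold):
    """trials: iterable of (similarity_score, same_person: bool) pairs."""
    false_matches = 0      # accepted, but different people (type I)
    false_non_matches = 0  # rejected, but the same person (type II)
    impostor_trials = 0    # comparisons between different people
    genuine_trials = 0     # comparisons of the same person

    for score, same_person in trials:
        accepted = score >= threshold  # the probabilistic judgement
        if same_person:
            genuine_trials += 1
            if not accepted:
                false_non_matches += 1
        else:
            impostor_trials += 1
            if accepted:
                false_matches += 1

    fmr = false_matches / impostor_trials if impostor_trials else 0.0
    fnmr = false_non_matches / genuine_trials if genuine_trials else 0.0
    return fmr, fnmr

# A stricter threshold lowers the FMR but raises the FNMR.
trials = [(0.91, True), (0.62, True), (0.88, False), (0.30, False)]
for threshold in (0.5, 0.9):
    fmr, fnmr = evaluate(trials, threshold)
    print(f"threshold={threshold}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

Note how raising the threshold lowers the FMR but raises the FNMR: the two error rates trade off against each other, and the right operating point depends on the consequences of each type of error in your context.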

Some degree of error is unavoidable in biometric recognition systems. However, both false acceptance and rejection can result in harms to people. For example, biometric recognition systems may control access to important resources like bank accounts. False acceptance could give unauthorised people access to sensitive information. False rejection could deny people access to services or opportunities, which could result in financial harms (eg gig workers unable to access the apps that allow them to earn money).

For these reasons, you should monitor the performance of your biometric system and its rates of false acceptance and rejection. If either type of error happens too frequently, people may lose confidence in your system. You may also face problems complying with the fairness principle if your use of personal information results in people experiencing unjustified adverse effects.

See How do we process biometric data fairly? for further information.

Risk of discrimination

False acceptance and rejection are also central to the issue of bias in biometric recognition.

When algorithmic errors are systematic and repeatable, they become biases. Bias is not unique to automated processes; human decision-making is also prone to bias.

Biometric algorithms are trained on biometric data, which is derived from people’s characteristics. These characteristics differ between people and groups. If training datasets are not representative of the context in which biometric recognition technologies are going to be used, the resulting systems are less likely to interpret the characteristics of certain groups effectively. This is likely to subject those groups to higher error rates, resulting in a bias in the biometric algorithm.

Biases may also be present in a biometric algorithm due to human biases impacting how variables are measured, labelled or aggregated.

Certain biometric technologies have previously been shown to be biased according to race, gender and age.

Bias in a biometric recognition system can give rise to discrimination. Discrimination is where people or groups are treated unjustly on the grounds of protected characteristics.

Greater rates of false positive or false negative errors when processing biometric data relating to a specific group could result in discrimination against that group. This is particularly problematic if the system is making fully automated decisions about people. The form that the discrimination takes depends on the context in which biometric recognition is deployed. You must ensure your use of biometric data does not result in discrimination.
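
One practical way to surface this kind of bias is to break error rates down by demographic group and compare them. The sketch below assumes similarity scores labelled with an illustrative group attribute; the trial data and group labels are hypothetical.

```python
# Sketch: comparing hypothetical error rates across groups to surface
# possible bias. Group labels and trial data are illustrative only.

from collections import defaultdict

def error_rates(trials):
    """trials: iterable of (accepted: bool, same_person: bool) pairs."""
    fm = fnm = impostors = genuines = 0
    for accepted, same_person in trials:
        if same_person:
            genuines += 1
            fnm += not accepted
        else:
            impostors += 1
            fm += accepted
    return (fm / impostors if impostors else 0.0,   # false match rate
            fnm / genuines if genuines else 0.0)    # false non-match rate

def rates_by_group(scored_trials, threshold):
    """scored_trials: iterable of (score, same_person, group) triples."""
    groups = defaultdict(list)
    for score, same_person, group in scored_trials:
        groups[group].append((score >= threshold, same_person))
    return {g: error_rates(t) for g, t in groups.items()}

trials = [
    (0.95, True, "group_a"), (0.40, False, "group_a"),
    (0.70, True, "group_b"), (0.85, False, "group_b"),
]
for group, (fmr, fnmr) in rates_by_group(trials, threshold=0.8).items():
    print(f"{group}: FMR={fmr:.2f}, FNMR={fnmr:.2f}")
```

A markedly higher FMR or FNMR for one group is a signal that the system performs worse for that group and needs investigation before, and monitoring after, deployment.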

Biometric recognition systems also have the potential to discriminate in other ways.

Some people may be unable to interact with a biometric recognition system (eg a fingerprint scanner) due to physical disability. This could mean that they are at a significant disadvantage when accessing a specific service, benefit or product, compared to those who can use the biometric solution.

See How do we process biometric data fairly? for more information.

Risks resulting from systematic monitoring of public spaces

Certain biometric recognition systems are capable of monitoring publicly accessible spaces. These systems often use facial recognition.

The use of this technology for overt surveillance introduces the potential for someone’s identity to be determined whenever they enter the field of view of an enabled camera system.

In a practical sense, there is a risk that someone could enter the field of view without being aware that their special category biometric data is being processed.

Without being aware of the processing taking place, people are unable to exercise their right to be informed and their other data protection rights. Even when made aware of the processing, it may be difficult for someone to opt out, if they need to walk through the area under surveillance.

More generally, there are wider concerns that the use of biometric recognition systems in public spaces could result in a ‘chilling effect’. This means people are less likely to exercise rights such as freedom of expression or freedom of assembly.

While some of these risks may be mitigated, many are unavoidable aspects of the technologies themselves. Therefore, you should consider these seriously and weigh them up against any perceived benefits. The specific circumstances of the deployment will determine whether that interference is lawful and justifiable. Unnecessary deployment of these technologies constitutes another type of harm: unwarranted intrusion.