
Facial Recognition Technology (FRT) and surveillance


Checklist

☐ We have conducted a Data Protection Impact Assessment (DPIA) that fully addresses our need to use Facial Recognition Technology (FRT) and the lawful basis for its use, and that explores the impacts on the rights and freedoms of individuals whose personal data are captured, for every deployment.

☐ We fully document our justification for the use of FRT and the decision-making behind it, and these records are available on request.

☐ We have ensured that a sufficient volume and variety of training data have been included to assist accurate performance.

☐ We have chosen an appropriate resolution for the cameras we use, and we have carried out full testing of the equipment.

☐ We have positioned our cameras in areas with sufficient lighting, to ensure good quality images are taken to assist accurate recognition.

☐ We are able to clearly identify false matches and true matches.

☐ We are able to record false positive or false negative rates where appropriate.

☐ We are able to amend the system to correct false positive or false negative rates that are considered to be too high.

☐ We ensure any watchlists we use are constructed in a way that is compliant with data protection law.

☐ We have considered whether an Equality Impact Assessment (EIA) is required to fulfil our obligations under the Equality Act 2010.

☐ We comply with the Surveillance Camera Code of Practice where required.

What is facial recognition technology?

Facial recognition technology identifies or otherwise recognises a person from a digital facial image. Cameras are used to capture these images, and facial recognition software measures and analyses facial features to produce a biometric template. This typically enables the user to identify, authenticate or verify, or categorise individuals. The software, which often incorporates elements of artificial intelligence (AI), algorithms and machine learning, estimates the degree of similarity between two facial templates to identify a match. For example, this could be to verify someone’s identity, or to place a template in a particular category (eg age group).
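
As a rough illustration of this matching step, the sketch below compares two facial templates (represented as numeric embeddings) using cosine similarity and a decision threshold. The embedding size, threshold value and function names are assumptions made for the example and do not describe any particular FRT product.

  import numpy as np

  def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
      # Degree of similarity between two facial templates (numeric embeddings).
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  def is_match(template_a: np.ndarray, template_b: np.ndarray, threshold: float = 0.6) -> bool:
      # The threshold here is illustrative only; in practice it is tuned on test
      # data to balance false positives against false negatives.
      return cosine_similarity(template_a, template_b) >= threshold

  # One-to-one verification, eg unlocking a device (random numbers stand in for real embeddings).
  enrolled = np.random.rand(128)                      # template captured at enrolment
  probe = enrolled + np.random.normal(0, 0.01, 128)   # template captured at the point of use
  print(is_match(enrolled, probe))

In a deployed system the templates would typically come from a trained face-embedding model, and the threshold would be validated against measured error rates.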

FRT can be used in a variety of contexts, from unlocking our mobile phones to setting up a bank account online or passing through passport control. It can help make aspects of our lives easier, more efficient and more secure.

The concept may also be referred to using terms such as automatic or automated facial recognition (AFR) or live facial recognition (LFR), which is a type of FRT often used in public spaces in real time.

Depending on the use, FRT involves processing personal data, biometric data and, in the vast majority of cases seen by the ICO, special category personal data. Biometric data is a particular type of data that has a specific definition in data protection law.

Biometric data is defined at Article 4(14) of the UK GDPR as “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic (fingerprint) data”.

Under the UK GDPR, processing biometric data for the purpose(s) of uniquely identifying an individual is prohibited unless a lawful basis under Article 6 and a condition in Article 9 can be satisfied. Five of the conditions for processing are provided solely in Article 9 of the UK GDPR. The other five require authorisation or a basis in UK law. This means you need to meet additional conditions set out in section 10 and Schedule 1 of the DPA 2018, depending on the Article 9 condition relied upon. Read further guidance about special category data.

Further detailed information can be found in the Information Commissioner’s published Opinion about the use of facial recognition technology in public spaces.

How does facial recognition in public spaces work?

Common uses of FRT, such as unlocking our mobile phones, typically involve a “one-to-one” process. This means that the individual participates directly and is aware of why and how you are using their data. Live facial recognition in public spaces is different, and is typically deployed in a similar way to traditional CCTV. This means it is directed towards whole spaces rather than specific individuals. In its simplest form, the face of an individual is scanned and cross-referenced with images from a ‘watchlist’ in order for you to determine a match. A watchlist is a bespoke gallery of individuals that could include authorised visitors, people banned from particular premises or, in some cases, wanted criminals.
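
To illustrate the one-to-many search described above, the following sketch compares a single probe template against a small watchlist gallery and returns any candidates whose similarity exceeds a threshold. The watchlist entries, threshold and function names are illustrative assumptions rather than features of any specific system, and any suggested match would still need human review.

  import numpy as np

  def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
      return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

  def search_watchlist(probe: np.ndarray,
                       watchlist: dict[str, np.ndarray],
                       threshold: float = 0.6) -> list[tuple[str, float]]:
      # One-to-many search: the probe template is compared against every template
      # on the watchlist, and candidates above the threshold are returned for
      # human review rather than acted on automatically.
      scores = ((name, cosine_similarity(probe, template))
                for name, template in watchlist.items())
      return sorted((s for s in scores if s[1] >= threshold),
                    key=lambda s: s[1], reverse=True)

  # Illustrative watchlist (random numbers stand in for real facial templates).
  watchlist = {"authorised visitor A": np.random.rand(128),
               "banned individual B": np.random.rand(128)}
  probe = np.random.rand(128)  # template extracted from a live camera frame
  print(search_watchlist(probe, watchlist))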

After a facial match is suggested by the system, human intervention is commonly required to assess whether the match is correct. This enables you to determine the appropriate response. The level of human intervention required can vary based on the use of the system and the risk of harm to individuals. For example, meaningful human intervention could involve deciding whether to stop an individual in a public space. In contrast, for organisations granting physical access to premises or a secure facility, human intervention may only be required to ensure the system works correctly, or to allow for a second opinion.

It is likely that most systems will have an element of human decision-making built into the process. But Article 22 of the UK GDPR establishes stricter conditions for systems that make solely automated decisions (ie those without any human input). Systems that only support or enhance human decision-making are not subject to these conditions. But you must ensure that any human input to your processing is active, and has a meaningful influence on the outcomes. See our guidance on automated decision making.
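
As a minimal sketch of how meaningful human input might be built into a deployment, the example below routes suggested matches to a review queue rather than acting on them automatically. The structure, class and function names are assumptions for illustration only, not a description of any particular product.

  from dataclasses import dataclass
  from queue import Queue

  @dataclass
  class SuggestedMatch:
      frame_id: str            # reference to the captured image
      watchlist_identity: str  # identity suggested by the system
      similarity: float        # similarity score from the matching algorithm

  # Suggested matches are queued for a trained reviewer instead of triggering any
  # action automatically, so decisions are not based solely on automated processing.
  review_queue: "Queue[SuggestedMatch]" = Queue()

  def on_system_match(match: SuggestedMatch) -> None:
      review_queue.put(match)  # no automatic stop, alert or refusal of entry here

  def record_human_decision(match: SuggestedMatch, reviewer_confirms: bool) -> str:
      # The reviewer's judgement, not the similarity score alone, determines the outcome.
      return "act on suggested match" if reviewer_confirms else "discard suggestion"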

Example

A business wishes to trial the use of live facial recognition on a large crowd of people at the entrance to a concert, in order to improve security. The faces of individuals would be scanned at the entrance, and then cross-referenced with a watchlist of persons of interest. A staff member or officer would review and scrutinise the suggested matches from the system, prior to stopping or questioning any individuals.

FRT can also be used retrospectively, in order to identify an individual from old footage or photographs. This is in contrast to using FRT in real time to locate an individual in a live public setting. It is still very important to take the principles of data protection law into consideration. For example, you should ensure that the images you use for such retrospective processing are:

  • obtained lawfully;
  • used for defined and limited purposes;
  • accurate; and
  • not retained for longer than is necessary.

When using FRT and considering your compliance with the data protection principles, it is particularly important that you recognise and understand the potentially intrusive nature of the technology.

In terms of accountability, when using FRT you must be able to provide a clear explanation of:

  • the lawful basis you are relying on;
  • why you consider the use of FRT necessary in the circumstances or in the public interest;
  • why you have ruled out less intrusive options;
  • your assessment of the likelihood that the objectives of FRT (and associated processing) will be met; and
  • how you have measured its effectiveness.

In all sectors, any processing you undertake as a result of deploying FRT is likely to result in a high risk to individuals’ information rights. You should see a DPIA as a living document that you complete, update or review prior to every deployment. This means you are able to demonstrate that you have considered the risks to the rights and freedoms of individuals.

You may also wish to consider whether an Equality Impact Assessment (EIA) is required to fulfil your obligations under the Equality Act 2010.

How do we mitigate bias or demographic differentials?

FRT will typically incorporate machine learning and artificial intelligence. These types of systems learn from data, but this does not guarantee that their outputs will be free of discriminatory outcomes. Both developers and controllers should be mindful of the data used to train and test these systems, as well as the way they are designed and used. This is because these factors may cause them to treat certain demographics less favourably, or put them at a relative disadvantage. For example, this may be based on characteristics such as gender, race or ethnicity.

As a controller you should determine and document your approach to bias and demographic differentials from the very beginning of any use of FRT. This means that you can put appropriate safeguards and technical measures in place during the design of the FRT system.

You should also establish clear policies and procedures surrounding the data which you use to train or pilot systems. You should ensure that the data is sufficiently diverse to represent the population the FRT system will be used on. For an FRT system to process personal data accurately, the output of the system should be the best possible match to the facial image in question. However, this can be a significant challenge when we consider:

  • the varying quality of images that can be captured;
  • the capabilities of the algorithm used; and
  • the ways that faces can be obscured or changed.

Your DPIA should explain how you have implemented effective mitigating measures, including matters relating to bias.
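
One practical way to evidence this kind of testing is to compare false positive and false negative rates across demographic groups. The sketch below is a minimal illustration that assumes you hold labelled test outcomes; the group labels and figures shown are hypothetical.

  from collections import defaultdict

  # Hypothetical labelled test outcomes: (demographic_group, is_true_match, system_said_match).
  test_results = [
      ("group_a", False, True),    # false positive
      ("group_a", True, True),
      ("group_b", False, False),
      ("group_b", True, False),    # false negative
  ]

  def error_rates_by_group(results):
      # Compare false positive and false negative rates across demographic groups.
      counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
      for group, is_true_match, predicted_match in results:
          c = counts[group]
          if is_true_match:
              c["pos"] += 1
              c["fn"] += 0 if predicted_match else 1
          else:
              c["neg"] += 1
              c["fp"] += 1 if predicted_match else 0
      return {group: {"false_positive_rate": c["fp"] / c["neg"] if c["neg"] else None,
                      "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else None}
              for group, c in counts.items()}

  print(error_rates_by_group(test_results))

Marked differences in these rates between groups would be one indicator that mitigating measures, and the explanation of them in your DPIA, need further attention.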

Further, before you procure an FRT system, you should engage with manufacturers, vendors and software developers to explore how they have prevented technical bias within their systems. This will help ensure that their products will allow you to comply with the requirements of data protection law.

How do we determine if FRT is necessary and proportionate?

It is not possible to provide an exhaustive list of all scenarios where the processing of personal data by FRT could be regarded as necessary and proportionate. You need to be able to demonstrate and document each case on its own merits. You are expected to clearly articulate the lawful use of FRT systems as part of a DPIA and “appropriate policy document”, where required by the UK GDPR and DPA 2018. This applies to the use of FRT in both public and private sectors.

Further detailed information, particularly about necessity and proportionality, can be found in the Information Commissioner’s published Opinion about the use of facial recognition technology in public spaces.

The context in which you are using such systems is a key consideration when you determine whether your use of FRT is appropriate. For example, in shopping centres, you still need to be able to strongly justify that your use of FRT is lawful and necessary to achieve your outcome, and that you could not do so using less privacy-intrusive methods. It may be more difficult for you to justify processing images of large numbers of individuals in order to identify only a few, where the need to do so, or the public interest in doing so, is not justifiable or realistic.

If you are relying on consent to use FRT, for that consent to be valid you must ensure that you give individuals a fully informed and freely given choice about whether or not to be subject to such processing. In practice, consent could prove very difficult to obtain, especially in circumstances where you are using FRT on multiple individuals in public spaces. In cases where you cannot obtain appropriate consent, you must identify an alternative lawful basis to use the system on individuals.

You must also ensure that the use of FRT does not lead to individuals suffering detriment. For example, where you use FRT for authentication purposes, you could provide an alternative way for individuals to use a service if they do not wish to participate in or consent to facial recognition processing. This could involve individuals using a unique key code or an alternative route to enter the premises.

Example

A gym introduces a facial recognition system to allow members access to the facilities. It requires all members to agree to facial recognition as a condition of entry – there is no other way to access the gym. This is not valid consent as the members are not being given a real choice – if they do not consent, they cannot access the gym. Although facial recognition might have some security and convenience benefits, it is not objectively necessary in order to provide access to gym facilities, so consent is not freely given.

However, if the gym provides an alternative, such as a choice between access by facial recognition and access by a membership card, consent could be considered freely given. The gym could rely on explicit consent for processing the biometric facial scans of the members who indicate that they prefer that option.