How do we process biometric data fairly?

At a glance

  • Biometric recognition systems with poor statistical accuracy can make it difficult to process biometric data fairly.
  • You must ensure any system you use is sufficiently accurate for your purposes.
  • You should establish a threshold that is appropriate for your circumstances.
  • You should test for and mitigate biases in any system you use.
  • If you are a provider, you should ensure users are aware of any biases.
  • You should consider the real-world risks of false acceptance and rejection rates and put in safeguards to mitigate these risks accordingly.
  • You must ensure that biases do not result in discrimination.

In detail

Identifying a lawful basis for processing does not mean your processing is lawful by default. You must also ensure that your use of biometric systems is lawful more generally.

For processing to be fair, you must use information in ways that people would reasonably expect and that do not have unjustified adverse effects on them. Also, you must not mislead anyone when obtaining their biometric data.

How does the statistical accuracy of biometric algorithms relate to the fairness principle?

The effectiveness of a biometric recognition system (its ability to identify people correctly) is a measure of its statistical accuracy. If systems routinely fail to identify people correctly, these errors could have a real-world impact.

If you don’t address this risk of inaccuracy, you could contravene the fairness principle and breach equalities legislation. This may leave you exposed to legal claims, as well as regulatory action.

How do we deal with risks resulting from errors?

Because the comparison process relies on probability, you should not interpret the decision of a biometric recognition system as an objective fact about someone’s identity. Instead, you should see it as an indicator of the confidence you can have in asserting someone’s identity.

Example

Traditional verification methods (eg a password) make a simple comparison between an input value (what you type) and a stored value (the password). If the input exactly matches the stored value, then access is granted.

This is a binary outcome – either the input value will match the stored password, or it won’t.

Biometric recognition systems work in a different way.

Although their objective is the same, a range of factors mean that no two captures of biometric data are ever truly identical, unlike two entries of the same password.

Differences in environmental conditions (such as the amount of light or glare) may mean that an input value (eg a newly captured image of someone’s face) doesn’t precisely match the stored value (ie the image of their face captured when they first enrolled on the system).

Other issues can complicate the matching process, such as the passage of time between the original enrolment and the later re-presentation.
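
To illustrate the difference, here is a minimal sketch (in Python, with made-up function names and a made-up threshold) contrasting an exact password comparison with a threshold-based biometric comparison:

```
# Illustrative sketch only - the function names and threshold value
# are hypothetical, not taken from any real system.

def password_check(entered: str, stored: str) -> bool:
    # Binary outcome: the input either matches the stored value or it doesn't.
    return entered == stored

def biometric_check(similarity_score: float, threshold: float = 0.8) -> bool:
    # Probabilistic outcome: the score indicates confidence, not certainty.
    # Two captures of the same face rarely produce identical data, so the
    # system accepts any score at or above the configured threshold.
    return similarity_score >= threshold

print(password_check("hunter2", "hunter2"))  # True - exact match
print(biometric_check(0.83))                 # True - above threshold
print(biometric_check(0.74))                 # False - below threshold
```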

You can never entirely eliminate errors from biometric recognition systems. You must consider the potential impact such errors could have on people in real life. Your context also determines your tolerance for these errors, and what trade-offs in performance you can justify.

In a biometric recognition system, you can reflect these trade-offs in your choice for the system’s threshold – the point at which the result of a comparison is considered statistically significant.

A more sensitive system has a lower threshold: this increases your likelihood of finding a ‘match’, but it is more likely that the system will return false positives.

A more precise system has a higher threshold: this decreases your likelihood of finding false matches, but it increases the chance that you miss a true positive match.
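
As a simple illustration of this trade-off, the sketch below uses made-up similarity scores to show how moving the threshold shifts errors between false acceptances and false rejections:

```
# Illustrative only: made-up similarity scores for genuine and impostor
# comparison attempts, showing how the threshold trades one error type
# for the other.

genuine_scores  = [0.91, 0.84, 0.78, 0.73, 0.69]  # same person
impostor_scores = [0.75, 0.66, 0.58, 0.52, 0.41]  # different people

for threshold in (0.6, 0.7, 0.8):
    false_rejects = sum(s < threshold for s in genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores)
    print(f"threshold={threshold:.1f}: "
          f"{false_accepts} false accept(s), {false_rejects} false reject(s)")

# Lower thresholds find more 'matches' but admit more impostors;
# higher thresholds reject impostors but turn away more genuine users.
```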

You should use a threshold that is appropriate for your circumstances. This depends on:

  • what impact the decision could have on someone;
  • what your use case and context is;
  • what safeguards you have in place; and
  • whether the system is making solely automated decisions.

As a result, you should not assume that a biometric recognition system will offer the best performance in your scenario when left on its default threshold settings. You should understand whether you can configure your system locally (eg to optimise performance and reduce errors to an acceptable level).

If you are a provider, you should assist in setting up your systems to help controllers with their obligations.

In the case of biometric identification, the number of references you compare against affects the rate of false positive errors you observe. To help with accuracy, you should therefore keep the size of any reference database as small as possible. This also helps you to comply with the data minimisation principle.
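
To see why gallery size matters, suppose (purely for illustration) that each one-to-one comparison carries a small, independent false match rate. The chance of at least one false match then grows quickly with the number of references:

```
# Illustrative only: assumes a fixed, independent false match rate (FMR)
# per one-to-one comparison - real systems are more complex.

fmr = 0.001  # hypothetical 0.1% false match rate per comparison

for gallery_size in (100, 1_000, 10_000):
    # Probability that at least one of N comparisons falsely matches.
    p_any_false_match = 1 - (1 - fmr) ** gallery_size
    print(f"{gallery_size:>6} references -> "
          f"{p_any_false_match:.1%} chance of a false match")

# 100 references -> ~9.5%, 1,000 -> ~63.2%, 10,000 -> ~100%:
# keeping the reference database small directly reduces false positives.
```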

While some error is unavoidable, you should use well-developed systems that minimise the number of errors that could occur.

For example, the National Institute of Standards and Technology (NIST), an agency of the United States Department of Commerce, tests facial recognition technologies for their error rates. This is not an accreditation process, and all tests take place under laboratory conditions. However, you may find this is a useful resource to:

  • demonstrate that different facial recognition technologies have different error rates; and
  • help you work out whether the service you are considering has a significantly higher observed error rate than other commercially available products.

You should assure yourself of the statistical accuracy of any solution. This includes understanding where any published information on a solution’s performance came from (ie lab testing versus ‘real-life’ scenarios similar to yours).

Before you deploy a biometric recognition system, you must understand the potential implications of these sorts of errors. You should consider the possible impact both on the people who will rely on the system and on your organisation as a whole.

You should have mechanisms and processes in place to:

  • diagnose any quality issues or errors;
  • record how these errors are resolved;
  • check that your systems are working as intended; and
  • highlight any inaccuracies or bias.

Maintaining a biometric recognition system is key to making sure that you process special category biometric data fairly.
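
One lightweight way to support the monitoring processes above is to keep a structured record of each decision and how any disputed outcome was resolved. The sketch below is illustrative only; the field names are hypothetical, not a prescribed format:

```
# Minimal sketch of a decision/error log to support monitoring -
# all field names are hypothetical, not a prescribed format.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MatchEvent:
    subject_ref: str   # pseudonymised reference, not raw biometric data
    score: float       # similarity score returned by the system
    threshold: float   # threshold in force at decision time
    accepted: bool     # the system's decision
    outcome_disputed: bool = False  # did the person challenge the result?
    resolution: str = ""            # how any error was resolved
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

log: list[MatchEvent] = []
log.append(MatchEvent("user-0042", score=0.67, threshold=0.8, accepted=False,
                      outcome_disputed=True,
                      resolution="manual ID check passed"))

# Periodically reviewing disputed rejections (or accepted impostors)
# helps show the system is working as intended and surfaces inaccuracies.
```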

You should consider what appropriate safeguards are required to ensure people do not experience real detriment or harm because of an error. This might require introducing a way for people to challenge the outcome, if they feel that it is inaccurate.

How do we deal with the risk of discrimination?

You should test all biometric recognition systems for bias. If you detect bias, you should mitigate it.
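
A minimal sketch of what such a bias test could look like, using made-up trial counts for hypothetical demographic groups:

```
# Illustrative only: compares false non-match rates across hypothetical
# demographic groups using made-up trial counts.

trials = {
    # group: (genuine attempts, false rejections)
    "group_a": (1000, 12),
    "group_b": (1000, 48),
}

for group, (attempts, rejected) in trials.items():
    fnmr = rejected / attempts
    print(f"{group}: false non-match rate = {fnmr:.1%}")

# A markedly higher error rate for one group (here 4.8% vs 1.2%) is a
# sign of bias you should investigate and mitigate before deployment.
```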

If you are a provider, you should inform potential users about:

  • bias that exists;
  • the implications of that bias; and
  • how they might mitigate it.

For example, it might be possible to reduce bias by further training the system on a dataset that is more representative of the context you will deploy it in.

As part of their testing process, NIST analyse the effect of demographics on the effectiveness of biometric recognition systems. If you are a controller, this may give you an idea of observed bias rates, and how and why they can vary.

Bias in a biometric recognition system can give rise to discrimination. See Risk of discrimination for more information on the relationship between errors, bias and discrimination.

Discrimination is where people or groups are treated unjustly on the grounds of protected characteristics.

You must ensure that bias in biometric recognition systems does not lead to discrimination.

Biometric systems can also result in discrimination in ways not related to the statistical accuracy of the system.

For example, if a disabled person were unable to use a biometric recognition system controlling access to a room, and there was no other option for them to gain access, this would make your use of biometric recognition technology unlawful. This is both because you would violate the fairness principle and because disability is a protected characteristic.

If you intend to use a biometric recognition system, as part of complying with the fairness principle, you must assess whether it is likely to have a discriminatory impact on people.

When assessing a biometric recognition technology, you should also consider its potential for exclusion based on non-protected characteristics. For example, if your use case requires people to have a smartphone, you may disproportionately exclude certain groups. If this happens, you are likely to find it challenging to demonstrate that your system complies with the fairness principle.

Example

Fingerprint recognition is less accurate for adults over 70 and children under 12. This is because older adults’ fingerprints are less distinct and young children’s fingerprints change rapidly because they are still developing.

As a result, a technology that may superficially appear fair can have unfair impacts, as it will systematically perform worse for older adults and young children.