Where we will focus
AI and biometric technologies are already in use across a wide range of contexts – from law enforcement to education to healthcare. While our regulatory interest spans the full breadth of these developments, this strategy targets three priority areas where:
- the stakes for people are high;
- public concern is clear; and
- regulatory clarity can have the most immediate impact.
These are high-impact cases with significant potential for public benefit, but they also concentrate many of the risks people care most about:
- the development of foundation models — large-scale models trained on vast datasets and adaptable to a wide range of downstream tasks;
- the use of automated decision-making (ADM) in recruitment and public services; and
- the use of facial recognition technology (FRT) by police forces.
By raising standards here, we aim to create clear regulatory expectations and scalable good practice that will influence the wider AI and biometrics landscape.
Within these cases, we consistently see public concern forming around three cross-cutting issues:
- transparency and explainability;
- bias and discrimination; and
- rights and redress.
Transparency and explainability
People expect to understand when and how AI systems affect them. But across generative AI, ADM and FRT, the picture is often unclear.
In recruitment, people want to know when organisations use automated tools and how they make decisions.
In generative AI, users call for greater clarity on how organisations develop and train tools.
And in policing, concerns about fairness and privacy are closely linked to how well police forces can explain and justify FRT decisions.
What the evidence shows:
Generative AI: 26% of users cite data protection as a concern; 15% are concerned about transparency in how tools are developed; and 14% highlight a lack of clarity around sources and results.
− Understanding consumer use of generative AI, DRCF, 2025
Automated decision-making: People expect to be informed when ADM is used in recruitment, and want clarity on the information and logic behind decisions:
“You should be told before you apply that [ADM] is being used so you can make an informed decision whether you want to continue your application.”
− Research participant, Understanding public perceptions towards automated decision-making in recruitment, ICO, 2025
Facial recognition technology: People who have concerns about civil liberties, privacy and a lack of transparency are less comfortable with police use of FRT.
"It just depends on who's using it [live facial recognition] and for what reason… If it's just random or wherever they feel like it, then that doesn't feel fair."
− Research participant, Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025
Bias and discrimination
People are concerned that AI systems can replicate or amplify bias, particularly when trained on flawed, incomplete or unrepresentative information.
In recruitment, people fear that automated tools may reflect and reinforce social inequalities, particularly in how candidates are ranked or filtered.
In generative AI, users have observed biased outputs and expect developers to take visible action to address them.
In facial recognition, public trust is shaped by whether systems perform consistently across different demographic groups. Independent testing of facial recognition algorithms demonstrates differential performance according to gender and ethnicity.13
What the evidence shows:
Generative AI: Around 10% of users report observing bias in outputs. Public confidence is strongly linked to active steps taken by developers to address bias, which ranks among the top five drivers of trust.
− Understanding consumer use of generative AI, DRCF, 2025
Automated decision-making: While people can see benefits from ADM, such as increased efficiency, there are also concerns that ADM systems in recruitment may reinforce demographic bias.
“Depending on what information it's been trained on, there’s a very, very high potential for bias, particularly if you look at the sort of biases that have already been introduced into AI… they still tend to quite like young, white men.”
− Research participant, Understanding public perceptions towards automated decision-making in recruitment, ICO, 2025
Facial recognition technology: 49% of people believe FRT systems show bias against certain groups, including on the basis of gender or ethnicity.
“The problem I would see would be if it has been trained on predominantly white rather than interracial or different coloured faces.”
− Research participant, Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025
Risks of bias and discrimination are particularly acute in speculative AI systems that attempt to infer traits, emotions or intentions from physical or behavioural characteristics, such as emotion recognition tools used in recruitment. Research shows that the science underpinning these systems is highly contested and that their use can lead to real harm.14
Rights and redress
Public confidence in AI and biometric technologies is shaped most powerfully by what happens when things go wrong.
People are concerned about the serious consequences of inaccurate outputs and unfair outcomes, for example:
- being wrongly identified by facial recognition;
- being overlooked for a job due to flawed automation; or
- losing access to benefits through error-prone ADM.
In generative AI, models can reproduce sensitive personal information, such as medical details, or harmful material such as child sexual abuse material.15 These are not abstract risks; they affect lives and livelihoods.
People want to know that:
- systems are accurate;
- safeguards are in place; and
- there are clear ways to challenge and correct outcomes when harm occurs.
What the evidence shows:
Generative AI: 70% of people believe generative AI developers must do more to prevent tools creating harmful content. 58% of people feel uncomfortable about their personal information being used to train these models.
− Understanding consumer use of generative AI, DRCF, 2025
Automated decision-making: Concerns grow as ADM is used more extensively in recruitment, with fully automated decisions seen as impersonal and exclusionary.
“It can be exclusionary, could be unfair to someone with a neurological issue, for example… it can just rule out a lot of people.”
− Research participant, Understanding public perceptions towards automated decision-making in recruitment, ICO, 2025
Facial recognition technology: Trust in police use of FRT depends heavily on perceived accuracy. 83% of people who believe the technology is accurate are comfortable with its use; only 30% are comfortable if they believe it is not.
− Understanding the UK public’s views and experiences of biometric technologies, ICO, 2025
As AI systems become more agentic, the ability to trace decisions back to a human controller becomes harder — and people may struggle to understand how decisions were made or how to challenge them. This creates real risks around accountability and redress.
13 Face recognition technology evaluation, NIST, 2025.
14 Biometrics: foresight, ICO, 2022.
15 Viewing generative AI and children’s safety in the round, NSPCC, 2025.