Why we need to act
Trust in AI and biometric technologies depends on responsible innovation that safeguards people’s rights. Two significant challenges stand in the way of ensuring this:
Firstly, private and public sector organisations can lack the regulatory certainty and confidence to invest in and use AI and biometric technologies compliantly.
Concerns about data protection and privacy regulation can be a barrier to AI adoption. A 2024 Bank of England and FCA survey of 118 firms found that respondents see these concerns as a leading constraint on the adoption of AI in the financial services sector.6
In the public sector, 56% of government bodies surveyed by the National Audit Office cited privacy, data protection and cyber security as key barriers, with some struggling to navigate existing guidance.7
The Biometrics Institute’s 2024 annual industry survey found that 58% of respondents viewed privacy and data protection concerns as the main obstacle to market growth.8
Secondly, a lack of transparency and confidence about how personal information is used in these technologies can undermine public trust.
Government research in 2024 found that public perceptions of AI are dominated by concerns, particularly among the digitally disengaged, with 91% of this population seeing decisions made without human input as a major risk.9 DRCF research from the same year found only moderate public trust in generative AI outputs.10
ICO research with the Ada Lovelace Institute and Hopkins Van Mil from 2022 highlighted how little awareness there can be among the public about how biometric technology is used or regulated.11 It also emphasised the importance of organisations being transparent about where they use biometrics and what information they process.
These challenges need addressing if the benefits of AI and biometric technologies are to be fully realised.
We’ve said before that if people don’t trust a technology, they’re less likely to use it or to agree to their own information being used to power it, hampering innovation in the process.12 We will support the responsible use of AI and biometric technologies to help ensure this isn’t the case in the UK. Wherever these technologies process personal information, clear, proportionate and robust standards of data protection will apply to prevent harm and promote trust.
6 Artificial intelligence in UK financial services, Bank of England and Financial Conduct Authority, 2024.
7 Use of artificial intelligence in government, National Audit Office, 2024.
8 Industry Survey, Biometrics Institute, 2024.
9 Public attitudes to data and AI, DSIT, 2024.
10 Understanding consumer use of generative AI, DRCF, 2025.
11 Listening to the public, ICO, Ada Lovelace Institute and Hopkins Van Mil, 2023.
12 John Edwards speaks at TechUK Digital Ethics Summit, ICO, 2023.