The ICO exists to empower you through information.

The Information Commissioner’s Office (ICO) will today call for businesses to address the privacy risks generative AI can bring before rushing to adopt the technology – with tougher checks on whether organisations are compliant with data protection laws.

New research indicates that generative AI could become a £1 trillion market within a decade, with potential to bring huge benefits to business and society.

Speaking at Politico’s Global Tech Day today, Stephen Almond, Executive Director of Regulatory Risk, will call for businesses to seize those opportunities – but also to recognise the risks that come with them.

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks.

“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won't upset customers or regulators.”

- Stephen Almond, Executive Director of Regulatory Risk

Generative AI creates content after collecting or querying huge volumes of information from publicly accessible sources online, including people’s personal information. Laws already exist to protect people’s rights, including privacy, and apply to generative AI as an emerging technology.

In April, the ICO set out eight questions that organisations developing or using generative AI which processes personal data need to ask themselves. The regulator also committed to acting where organisations are not following the law.

Stephen Almond will today say:

“We will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data. There can be no excuse for ignoring risks to people’s rights and freedoms before rollout.

“Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”

Stephen Almond will be speaking at a panel discussion on Generative AI at Politico’s Global Tech Day on Thursday 15 June, as part of London Tech Week 2023.

The ICO is committed to supporting UK businesses to develop and innovate with new technologies that respect people’s privacy. Our recently updated Guidance on AI and Data Protection provides a roadmap to data protection compliance for developers and users of generative AI. Our accompanying risk toolkit helps organisations looking to identify and mitigate data protection risks.

Innovators identifying novel data protection questions can get advice from us through our Regulatory Sandbox and new Innovation Advice service. Building on this offer, we are piloting a Multi-Agency Advice Service for digital innovators needing joined-up advice from multiple regulators, together with our partners in the Digital Regulation Cooperation Forum.

Notes to editors
  1. The ICO has specific responsibilities set out in the Data Protection Act 2018 (DPA2018), the United Kingdom General Data Protection Regulation (UK GDPR), the Freedom of Information Act 2000 (FOIA), Environmental Information Regulations 2004 (EIR), Privacy and Electronic Communications Regulations 2003 (PECR) and a further five acts and regulations. 
  2. The ICO can take action to address and change the behaviour of organisations and individuals that collect, use and keep personal information. This includes criminal prosecution, non-criminal enforcement and audit. 
  3. To report a concern to the ICO, telephone our helpline on 0303 123 1113 or go to ico.org.uk/concerns.