
Introduction

In the past year, generative AI has captured the public imagination. Freely available models allow users to generate unique images as well as a wide variety of text-based and other outputs.

They have gained a considerable number of users in a short period of time and attracted a flurry of new investment. Generative AI models are increasingly offering users the ability to personalise outputs.

Generative AI refers to systems which use deep learning and other technologies to generate novel content, usually in response to a prompt provided by a user.36 These systems often use enormous datasets (eg images or text), some of which may have been scraped from the public web, to learn how sentences and images are constructed. This allows the system to ‘understand’ user enquiries and generate appropriate responses. Responses can subsequently be further improved through user feedback.37
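To illustrate the prompt-and-response pattern described above, the short sketch below uses the open-source Hugging Face transformers library with the small, publicly available gpt2 model. The model, prompt and settings are illustrative assumptions only; real generative AI products use far larger models and additional safeguards.

  # Minimal sketch: generating novel text in response to a user prompt.
  # Assumes the Hugging Face transformers library and the small public
  # gpt2 model, chosen purely for illustration.
  from transformers import pipeline, set_seed

  set_seed(42)  # make the illustrative output repeatable
  generator = pipeline("text-generation", model="gpt2")

  prompt = "Write a short thank-you note to a colleague:"
  outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
  print(outputs[0]["generated_text"])

In practice, systems of the kind described in this section also layer instruction tuning and feedback-based fine-tuning on top of this basic generation step, which is how user feedback can further improve responses.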

Although open questions remain about possible use cases and the potential for monetisation of these solutions38, the generative AI sector as a whole is seeing considerable investment. Some see the technology as crucial to future economic growth, with consultants estimating that generative AI could generate between $2.6tn and $4.4tn in annual economic value.39

About personalised AI

Various iterations of personalised AI systems already exist and work in a variety of different ways. These systems can offer outputs designed to be useful for specific users, based on information related to the user, such as their search history, preferences and other personal information. They may be underpinned by a large language model which is then fine-tuned to better suit a particular user or business.40

Other systems allow users to create bespoke models and outputs grounded principally in the information provided by individual users.41 Organisations or individuals could use this to automate the creation of instant messages or emails.42
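As a concrete, hedged illustration of the fine-tuning approach mentioned above, the sketch below adapts a small open-source language model on a handful of user-provided messages using the Hugging Face transformers and datasets libraries. The model name, example texts and training settings are assumptions for illustration only; real personalised AI products would use much larger models, far more data and appropriate data protection safeguards.

  # Minimal sketch: fine-tuning a small causal language model on
  # user-provided text so outputs better reflect that user's style.
  # All names, texts and settings below are illustrative assumptions.
  from datasets import Dataset
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer,
                            TrainingArguments)

  model_name = "gpt2"  # small public model, for illustration only
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
  model = AutoModelForCausalLM.from_pretrained(model_name)

  # Hypothetical user-provided messages (eg past emails) used to adapt the model.
  user_texts = [
      "Hi team, just a quick note to confirm Friday's meeting.",
      "Thanks for the update - I'll review the draft and reply tomorrow.",
  ]
  dataset = Dataset.from_dict({"text": user_texts})

  def tokenize(batch):
      return tokenizer(batch["text"], truncation=True, max_length=128)

  tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
  collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

  args = TrainingArguments(output_dir="personal-model", num_train_epochs=1,
                           per_device_train_batch_size=2)
  Trainer(model=model, args=args, train_dataset=tokenized,
          data_collator=collator).train()

An adapted model of this kind could then draft replies in the user’s style, which is where the data protection questions discussed later in this section arise.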

Generative AI systems which allow for a greater degree of personalisation have recently received increased attention. Leading tech figures have argued that their use is likely to grow substantially in the coming years.43

State of development

A variety of services marketed as personal AIs are already available and in use. Existing products offer users the ability to train a personalised generative AI to respond to emails automatically or semi-automatically. Such models can use a user’s own writing style and vocabulary to automate email or instant message responses. These sit alongside other personalised AI offerings which assist users with tasks such as shopping and booking travel.44

Over the next few years, the usage of such systems could increase substantially. In particular, it is likely that the abilities of these systems will further improve, allowing users to automate a wider range of personal tasks across many sectors.45

Likely future use cases include AI systems performing a greater educational role. Online educational platforms already offer generative AI-powered tutors, and similar systems could see wider adoption in classrooms at all educational levels.46 These systems could offer users a tailored learning experience, based on the personal strengths and weaknesses of an individual student rather than a generalised approach. Leading technology figures have cited this possibility as a means of improving educational quality.47

As generative AI systems become increasingly sophisticated, other areas of use may include personalised systems helping users create digital memory banks to combat memory loss.48 AI systems could also act as a personal life coach, offering guidance on how to approach difficult life events while taking into account a user’s unique circumstances.49 Some companies even claim they will soon be able to replicate a user’s personality and allow users to speak with clones of themselves.50 In the nearer term, we could see personal AI solutions act as sophisticated financial assistants, analysing a user’s financial situation and providing personalised savings and investment advice.51 Some experts go as far as to argue that digital assistants could soon “potentially do almost anything on the internet”.52

Broadly, personalised generative AI systems could offer a wide range of new and creative use cases. However, wider use will have implications for data protection and privacy.

Fictional future scenario

As part of his secondary schooling, Kwame can interact with a virtual tutor which uses generative AI to provide a personalised tutoring experience. This allows him to be taught at the level most appropriate to his skills and abilities, as the content the tutor provides is based on his previous experience and attainment.

Kwame therefore receives a more personalised experience than in a standard classroom. Kwame’s parents sometimes express concerns about transparency and how the system makes decisions about his education. The wider availability of this technology increases the viability of homeschooling for his parents and many others. To offer this personalised experience, the virtual tutor will need to collect personal information about Kwame, which could include information about his academic ability. Personalised tutoring systems which offer education in religious and social matters could process special category data about Kwame.

Data protection and privacy implications

The development of personalised AI has the potential to bring considerable benefits to users. It could assist in improving:

  • workplace productivity;
  • educational outcomes; and
  • some of the features of existing generative AI.

Whilst there are potential benefits from personalised AI, its expansion presents privacy and data protection issues. Developers need to ensure that personalised AI develops in a privacy-positive way.

  • Generative AI: If reliant on a foundation model, personalised AI systems, like generative AI more generally, will need to be trained on a large body of information in order to produce high-quality outputs.53 In order to be adapted for specific purposes, models may also need to be ‘fine-tuned’.54 They will therefore share most of the data protection issues posed by generative AI more generally. We have already noted these issues and produced targeted communications to advise organisations about their obligations. Our recent blog describes eight privacy-related issues which developers need to consider when developing generative AI systems.

But the inclusion of more personalised information and approaches will bring additional issues that are specific to personalised AI systems, including the following:

  • More personal information: Personalised AI solutions process a greater quantity of personal information than other generative AI tools. This information is usually provided by the user and may be necessary to ensure that the outputs of the generative AI are useful for that user. For example, a system which offers users the ability to automate or semi-automate email responses may need to process information about a user’s writing style in order to replicate it convincingly.

    Similarly, educational systems are likely to need information about a user’s educational attainment and progress. Depending on the educational content and the way in which the system is structured, this information could, for example, also be used to infer information about a person’s religious or philosophical views. If this is the case, then personalised AI systems may be processing special category information, which requires extra protections because of the data’s sensitivity. This could also apply to systems which process or infer personal information about students’ special educational needs. Developers therefore need to ensure that they have appropriate conditions in place for processing such information, including a separate condition for processing under Article 9 of the UK GDPR.
  • Risk of model inversion attacks: Generative AI models which offer higher degrees of personalisation may be at greater risk of model inversion attacks because of the amount and nature of the personal information they process. These attacks try to extract the information used to train a model by exploiting its outputs.
    These attacks could therefore risk leaking information the user has provided to personalise the service. This could include intimate details of users’ personal lives, such as their finances or aspects of their identity. Organisations therefore need to implement appropriate technical and organisational measures, including meeting the UK GDPR’s requirements concerning encryption, in order to process this information securely and mitigate this risk.
  • PETs: Privacy enhancing technologies (PETs) could help to keep information secure and to implement privacy by design in these models and tools from the outset. For example, differential privacy could be used to obscure the fact that a particular person’s data has been used to train a model (a minimal sketch follows this list). Another PET which could be relevant for personalised generative AI is federated learning. Federated learning may allow developers to minimise the amount of personal information they need to train the model, provide appropriate security measures and reduce the impact of any potential data breaches. Here, developers face a trade-off, as the use of certain PETs, such as differential privacy and synthetic data, could reduce the effectiveness of the models themselves.55 As the personalised generative AI space continues to grow, experimenting with and developing further safeguards and privacy enhancing techniques will be important.
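As a brief illustration of one of the techniques mentioned in the final bullet above, the sketch below applies the Laplace mechanism, a basic building block of differential privacy, to a simple count over hypothetical user records. The records, the query and the privacy budget (epsilon) are assumptions for illustration only; real deployments typically apply differential privacy during model training (for example via differentially private stochastic gradient descent) and require careful calibration of the privacy budget.

  # Minimal sketch of the Laplace mechanism, a basic differential privacy
  # building block: noise calibrated to the query's sensitivity and a chosen
  # privacy budget (epsilon) is added before a statistic is released.
  # The data and parameters below are illustrative assumptions only.
  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical user records: 1 = user opted in to a feature, 0 = did not.
  records = np.array([1, 0, 1, 1, 0, 1, 0, 1])

  def dp_count(data, epsilon):
      """Release a count with Laplace noise scaled to sensitivity / epsilon."""
      sensitivity = 1.0  # adding or removing one record changes the count by at most 1
      noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
      return float(data.sum()) + noise

  print("True count:", int(records.sum()))
  print("DP count (epsilon = 0.5):", round(dp_count(records, epsilon=0.5), 2))

Stronger privacy (a smaller epsilon) means more noise and therefore less accurate outputs, which is the effectiveness trade-off referred to above.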

Recommendations and next steps

We are considering what further steps to take on personalised AI:

  • We recommend that those developing and deploying personalised AI systems consult our AI guidance and our other publications which specifically concern generative AI.
  • We will continue to monitor the rapid developments in AI and respond accordingly. This includes reviewing and updating our current AI guidance. As the generative AI space continues to develop, we will also track new use cases as part of our wider portfolio of AI work.


36 IBM Research Blog on Generative AI
37 Simon Attard article about grounding Generative AI
38 Gary Marcus article concerning the economic potential of generative AI
39 Financial Times article about the sceptical case on generative AI
40 Simon Pollington article about fine-tuning LLMs for Enterprise
41 Personal.AI article entitled Differences Between Personal Language Models and Large Language Models
42 Personal.AI article entitled Your True Personal AI
43 CNBC article about Bill Gates' predictions for the educational possibilities of generative AI
44 Homepage for Maya - Your AI travel assistant
45 New York Times article entitled How ‘A.I. Agents’ That Roam the Internet Could One Day Replace Workers
46 Coursera article about personalized and interactive online learning with generative AI, machine learning, and virtual reality
47 CNBC article about Bill Gates' predictions for the educational possibilities of generative AI
48 Personal AI article entitled What is Personal AI?
49 The Guardian article about Google DeepMind testing ‘personal life coach’ AI tool
50 VentureBeat article about AI clones
51 MSN article about Morgan Stanley plan to launch AI chatbot to woo wealthy
52 New York Times article entitled How ‘A.I. Agents’ That Roam the Internet Could One Day Replace Workers
53 CMA Short Report about AI Foundation Models
54 CMA Short Report about AI Foundation Models
55 Leidos white paper about Privacy Enhancing Technologies