Artificial intelligence (AI): The theory and development of computer systems able to perform tasks normally requiring human intelligence.
Fine-tuning: The process of training a pre-trained model on a specific dataset to adapt it for a specialised task, adjusting its parameters slightly instead of starting from scratch. Common in AI applications like NLP and image recognition, it saves time and resources while leveraging the model's foundational knowledge.
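As an illustrative sketch only (not part of the definition), fine-tuning an image recognition model in PyTorch might look like the following. The choice of pre-trained network (torchvision's ResNet-18), the five-class task, the learning rate and the dataset loader are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on a large, generic dataset (ImageNet).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the foundational layers so their parameters are not changed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the specialised task
# (an assumed five-class problem, for illustration).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new layer's parameters are adjusted during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(dataloader, epochs=3):
    """Train the adapted model on the task-specific dataset."""
    model.train()
    for _ in range(epochs):
        for images, labels in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```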
Generative AI: A type of AI that can generate outputs which resemble human-created content. Most current generative AI systems are based on the transformer architecture.
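For example, a transformer-based generative text model can be invoked in a few lines. This sketch assumes the Hugging Face transformers library and the publicly available GPT-2 model, chosen purely for illustration.

```python
from transformers import pipeline

# Load a small, publicly available transformer-based text generator (GPT-2).
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with text that resembles human-written
# content, based on patterns learned from its training data.
result = generator("Data protection law applies when", max_new_tokens=30)
print(result[0]["generated_text"])
```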
Hallucination (also referred to as confabulation): When a generative AI model produces incorrect, misleading or fabricated content due to reliance on patterns in its training data rather than factual understanding.
Invisible processing: Processing where people are not aware that their personal data is being processed, which undermines, among other things, their information rights.
Machine unlearning: Adjusting a model’s parameters to remove the influence of specific data. Current techniques do not yet scale to large models such as those used in generative AI. A simplified sketch of one approach appears below.
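The sketch below shows one naive approach, gradient ascent on the data to be forgotten. The model, the forget-set loader and the learning rate are placeholders, and the method is illustrative only; it offers no formal guarantee of removal and helps explain why the technique does not yet scale.

```python
import torch
import torch.nn as nn

def naive_unlearn(model, forget_loader, lr=1e-4, steps=1):
    """Push the model's parameters away from fitting the 'forget' examples
    by increasing (rather than decreasing) the loss on that data.
    A toy illustration only, with no formal removal guarantee."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(steps):
        for inputs, labels in forget_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), labels)
            # Negate the loss so the optimizer increases it on the forget set.
            (-loss).backward()
            optimizer.step()
```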
Synthetic data: ‘Artificial’ data generated by data synthesis algorithms. It replicates the patterns and statistical properties of real data (which may include personal data), and is generated from real data using a model trained to reproduce its characteristics and structure.
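As a minimal illustration, the sketch below fits a simple statistical model (a multivariate normal) to a real numeric dataset and samples new records from it. Real data synthesis algorithms are far more sophisticated, and the columns (age, income) are invented for the example.

```python
import numpy as np

def synthesise(real_data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Generate synthetic records that replicate the statistical properties
    (mean and covariance) of the real data, without copying individual rows."""
    rng = np.random.default_rng(seed)
    mean = real_data.mean(axis=0)
    cov = np.cov(real_data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Illustrative "real" dataset: 1,000 records with age and income columns.
real = np.column_stack([
    np.random.normal(40, 10, 1000),       # age
    np.random.normal(30000, 8000, 1000),  # income
])
synthetic = synthesise(real, n_samples=500)
```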
Transformer architecture: A type of deep learning model designed to process sequences of data, eg text, by focusing on the relationships between different parts of the input. Transformers are fast, flexible, easily scalable and highly effective at capturing context, making them the foundation of most generative AI systems.
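The core mechanism that lets a transformer focus on relationships between parts of the input is scaled dot-product attention. The NumPy sketch below is a minimal illustration; the sequence length and vector size are chosen only for the example.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position in the sequence attends to every other position,
    weighting the values V by how relevant the corresponding keys K are
    to its query Q. This is the basic building block of the transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over positions
    return weights @ V                                  # context-aware output

# A toy "sequence" of 4 tokens, each represented by an 8-dimensional vector.
x = np.random.rand(4, 8)
output = scaled_dot_product_attention(x, x, x)          # self-attention
```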