Policies and procedures

At a glance

  • Whether you create new policies and procedures or update existing ones, they should cover all the ‘explainability’ considerations and actions that you require from your employees from concept to deployment of AI decision-support systems.
  • Your policies should set out what the rules are, why they are in place, and who they apply to.
  • Your procedures should then provide directions on how to implement the rules set out in the policies.

Checklist

☐ Our policies and procedures cover all the explainability considerations and actions we require from our employees from concept to deployment of AI systems.

☐ Our policies make clear what the rules are around explaining AI-assisted decisions to individuals, why those rules are in place, and who they apply to.

☐ Our procedures give directions on how to implement the rules set out in the policies.

In more detail

Why do we need policies and procedures for explaining AI?
What should our policies and procedures cover?

Why do we need policies and procedures for explaining AI?

Your policies and procedures are important for several reasons. They:

  • help ensure consistency and standardisation;
  • clearly set out rules and responsibilities; and
  • support the creation or adoption of your organisational culture.

These are all highly desirable for your approach to explaining AI-assisted decisions to individuals.

You may want to create new policies and procedures, or it might make more sense to adapt and extend those that already exist, such as data protection and information management policies, or broader information governance and accountability frameworks.

How you choose to do this depends on the particular set-up of your organisation. What matters is that there is a clear and explicit focus on explaining AI-assisted decisions to individuals: why this is necessary and how it is done. This will help to embed explainability as a core requirement of your use of AI. You may have several existing policies into which you can incorporate AI. If so, you should document your processes as accurately as possible, and cross-reference these policies where necessary.

What should our policies and procedures cover?

Both your policies and procedures should cover all the explainability considerations and actions that you require from your employees from concept to deployment of AI decision-support systems. In short, they should codify what’s in the different parts of this guidance for your organisation.

Your policies should set out the what, why and who of explaining AI-assisted decisions to individuals, ie they should make clear what the rules are, why they are in place, and who they apply to. Your procedures should set out how you explain AI-assisted decisions to individuals, ie directions on how to implement the rules set out in the policies.

The table below summarises the key areas to cover in your policies and indicates where it is beneficial to have an accompanying procedure.

This is only a guide. Depending on what you already have in place, you may find it is more important to provide more (or less) detail in certain areas than others. There may also be additional aspects, not covered in this table, that you wish to cover in your policies and procedures.

As such, if there are policies and procedures listed here that you feel do not apply to your organisation, sector, or the type of AI system you have implemented, you may not need to include them. The level of detail is likely to be proportionate to the level of risk. The more impactful and the less expected the processing is, the more detail you are likely to need in your policies, procedures and documentation.

If you procure a system from another organisation, and do not develop it in-house, there may be some policies and procedures that are required by the vendor and not you. However, as you are still the data controller for the decisions made by the system, it is your responsibility to ensure the vendor has taken the necessary steps outlined in the table.

You should consult relevant staff when drafting your policies and procedures to ensure that they make sense and will work in practice.

Policy objective

Policy: Explain what the policy seeks to achieve – the provision of appropriate explanations of AI-assisted decisions to individuals. Even if you incorporate your requirements around explaining AI-assisted decisions into an existing policy or framework, you should ensure that this particular objective is explicitly stated.

Procedure: N/A

Policy rationale

Policy: Outline why the policy is necessary. You should cover the broad legal requirements, and any legal requirements specific to your organisation or sector. You should also cover the benefits for your organisation and, where relevant, link the rationale to your organisation’s broader values and goals.

Procedure: N/A

Policy scope

Policy: Set out what the policy covers. Start by clarifying what types of decision-making and AI systems are in scope. Say which departments or parts of your organisation the policy applies to. Where necessary, explain how this policy links to other relevant policies your organisation has, signposting other organisational requirements around the use of AI systems that are not within this policy’s scope.

Procedure: N/A

Policy ownership

Policy: Make clear who (or which role) within your organisation has ownership of this policy, and overarching responsibility for the ‘explainability’ of AI decision-support systems. Explain that they will monitor and enforce the policy.

Procedure: Detail the steps the policy owner should take to monitor and enforce the policy’s use. Set out what checks to make, how often to make them, and how to record and sign off this work.

Roles

Policy: Set out the specific roles in your organisation that have a stake or influence in the explainability of your AI decision-support systems. Describe the responsibilities of each role (in connection with explaining AI-assisted decisions to individuals) and detail the required interaction between the different roles (and departments) and the appropriate reporting lines, all the way up to the policy owner.

Procedure: N/A

Impact assessment

Policy: Explain the requirement for explainability to be embedded within your organisation’s impact assessment methodology. This is likely to be a legally required assessment such as a Data Protection Impact Assessment, but it could also form part of broader risk or ethical assessments. Explicitly state the need to conduct the assessment (including considering explainability) before work begins on an AI decision-support system.

Procedure: Describe how to make the necessary explainability assessments, including consideration of the most relevant explanation types and the most suitable AI model for the context within which you will be making AI-assisted decisions about individuals.

Awareness raising

Policy: Explain the importance of raising awareness about your use of AI-assisted decisions with your customers or service users. Set out the information you should communicate, including why you use AI for decision-making, where you do this, and simple detail on how it works.

Procedure: Detail the specific approach your organisation takes to raising awareness. Clarify where you host information for your customers, how you will communicate it or make it available (eg which channels and how often), and which roles or departments are responsible for this.

Data collection

Policy: Underline the need to consider explanations from the earliest stages of your AI model development, including the data collection, procurement and preparation phases. Explain why this is important with reference to the benefits of interpretable and well-labelled training data.

Procedure: Set out the steps to take at the data collection stage to enhance the explainability of your AI model, including assessing data quality, structure, feature interpretability, and your approach to labelling.

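As an illustration of the kind of data collection checks such a procedure might direct, the sketch below runs basic quality, label-balance and feature-naming checks before training. It is a minimal sketch in Python: the file name, column names and the "decision" label are illustrative assumptions, not prescribed values.

    import pandas as pd

    # Hypothetical training data file and column names, for illustration only
    df = pd.read_csv("training_data.csv")

    # Data quality: share of missing values per feature
    print("Missing values per column:")
    print(df.isna().mean().sort_values(ascending=False))

    # Labelling: a heavily skewed label column can undermine later explanations
    print("Label distribution:")
    print(df["decision"].value_counts(normalize=True))

    # Feature interpretability: flag opaque names staff cannot explain to individuals
    opaque = [c for c in df.columns if c.lower().startswith(("var_", "x_"))]
    if opaque:
        print("Columns needing human-readable names or documentation:", opaque)
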
Model selection

Policy: Explain how considerations of explainability factored into the selection of your AI model at the development stage, and how the algorithmic techniques you chose are appropriate for both the system’s use case and its potential impacts.

Procedure: Set out the steps you took to weigh model types against the priority of the interpretability of your AI system. Signpost how you ensured that the selected model is appropriate to fulfil that priority.

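One way such a weighing exercise can be recorded is a side-by-side comparison of an inherently interpretable model and a more complex alternative. The sketch below is a minimal example in Python using scikit-learn: the synthetic data, the cross-validation metric and the 0.02 tolerance are all illustrative assumptions, not a prescribed method.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Illustrative stand-in for your training data
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    interpretable = LogisticRegression(max_iter=1000)
    opaque = GradientBoostingClassifier(random_state=0)

    score_simple = cross_val_score(interpretable, X, y, cv=5).mean()
    score_complex = cross_val_score(opaque, X, y, cv=5).mean()
    print(f"interpretable: {score_simple:.3f}, complex: {score_complex:.3f}")

    # If the opaque model adds little accuracy, the model whose rationale can be
    # read directly from its coefficients may better serve explainability
    if score_complex - score_simple < 0.02:  # illustrative tolerance
        print("Interpretable model is adequate; record this as the justification.")
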
Explanation extraction

Policy: Set out the different types of explanation and outline the requirements to obtain information relevant to each one.

Procedure: Signpost the various technical procedures your organisation uses to extract the rationale explanation from your AI models (eg how to use a local explanation tool such as LIME) or how you use visualisation or counterfactual methods. Describe how to obtain information on the other explanations, who to obtain this from, and how to record it.

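For example, a procedure for a tabular classifier might point staff to a worked LIME snippet like the one below. This is a minimal sketch: the random forest, synthetic data and feature names are illustrative assumptions, and your own model and predict function would replace them.

    import numpy as np
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative model and data standing in for your deployed system
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
    feature_names = ["income", "tenure_years", "existing_debt"]  # hypothetical

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=["decline", "approve"],
        mode="classification",
    )

    # Local explanation: which features pushed this one decision either way?
    explanation = explainer.explain_instance(
        X_train[0], model.predict_proba, num_features=3
    )
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")
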
Explanation delivery

Policy: Explain the need to build and deliver explanations in the way that is most meaningful for the individuals your AI-assisted decisions are about.

Procedure: Detail how to prioritise the explanation types, how to translate technical terminology into plain language, the format in which to present the explanation (eg in layers), and how to assess appropriate timing of delivery (eg before or after a decision).

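To make the layered format concrete, a delivery procedure might define a simple structure like the sketch below: a plain-language first layer shown up front, with further detail available on request. All content and field names here are illustrative assumptions.

    # First layer: a plain-language headline explanation
    explanation = {
        "summary": (
            "Your application was declined mainly because of your level of "
            "existing debt relative to your income."
        ),
        # Second layer: more detail, revealed only if the person asks for it
        "detail": {
            "rationale": [
                {"factor": "existing debt", "effect": "lowered the score"},
                {"factor": "income", "effect": "raised the score"},
            ],
            "responsibility": "Contact the lending team to query this decision.",
        },
    }

    print(explanation["summary"])  # delivered before or with the decision
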
Documentation

Policy: Clearly state the necessity to document the justifications and choices made through the whole process of developing or acquiring and deploying an AI decision-support system. Outline the requirement to document the provision of explanations to individuals, and clarify what information to record (eg URN, time stamp, explanation URL).

Procedure: Set out the standardised method by which all stakeholders can record their justifications and choices. Explain how your organisation keeps an audit trail of explanations provided to individuals, including how this can be accessed and checked.

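Below is a minimal sketch of what an audit-trail entry for a delivered explanation could look like, using the fields the policy row gives as examples (URN, time stamp, explanation URL); everything else, including the JSON-lines log file, is an illustrative assumption.

    import json
    from dataclasses import asdict, dataclass
    from datetime import datetime, timezone

    @dataclass
    class ExplanationRecord:
        urn: str                 # unique reference number for the decision
        timestamp: str           # when the explanation was provided (ISO 8601)
        explanation_url: str     # where the delivered explanation is stored
        explanation_types: list  # eg ["rationale", "responsibility"]
        provided_by: str         # role responsible for delivery

    record = ExplanationRecord(
        urn="DEC-2024-000123",   # hypothetical
        timestamp=datetime.now(timezone.utc).isoformat(),
        explanation_url="https://records.example.org/explanations/DEC-2024-000123",
        explanation_types=["rationale", "responsibility"],
        provided_by="case_officer",
    )

    # Append to a JSON-lines log so the audit trail can be accessed and checked
    with open("explanation_audit.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
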
Training

Policy: Set requirements for general staff training on explaining AI decisions to individuals, covering why it’s necessary and how it’s done. Identify roles that require more in-depth training on specific aspects of explaining AI-assisted decisions, such as preparing training data or extracting rationale explanations.

Procedure: N/A