How do we make sure our use of personal information is fair?

The fairness principle in data protection law means you must only process personal information in ways that:

  • people would reasonably expect; and
  • could not have unjustified adverse effects on them.

Fairness in a data protection context is about how you go about the processing and the outcome of the processing (ie the impact it has on people).

When carrying out content moderation, you must not process people’s personal information in ways they might find unexpected or misleading. Fairness is closely linked to transparency. Part of ensuring your processing is fair is explaining to users how you use their information. (See the section on ‘How should we tell people what we’re doing?’ for more information.)

You must ensure that your content moderation systems perform accurately and produce unbiased, consistent outputs. You are unlikely to be treating users fairly if you make inaccurate judgements or biased moderation decisions based on their personal information. 

You should regularly review how you use personal information in your content moderation processes to minimise the risk of unfair outcomes for users. For example, you could provide guidance and training to your moderators on how to make consistent and fair moderation decisions based on personal information. You could also audit moderator decisions periodically to check that they use personal information consistently and fairly.
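As an illustration only, the sketch below shows one way a periodic audit of moderator decisions might be structured: it compares each moderator’s removal rate and their agreement with a second reviewer on a sample of decisions, so outliers can be followed up with guidance or training. The record fields (moderator_id, decision, reviewer_decision) are assumptions made for the example, not terms from this guidance.

```python
# Illustrative sketch: audit exported moderation decisions for consistency.
# Field names are assumptions for the example, not part of the guidance.
from collections import defaultdict

def audit_moderator_consistency(decisions):
    """Return each moderator's removal rate and agreement with a second reviewer."""
    stats = defaultdict(lambda: {"total": 0, "removed": 0, "agreed": 0})
    for d in decisions:
        s = stats[d["moderator_id"]]
        s["total"] += 1
        s["removed"] += d["decision"] == "remove"
        s["agreed"] += d["decision"] == d["reviewer_decision"]
    return {
        mod: {
            "removal_rate": s["removed"] / s["total"],
            "reviewer_agreement": s["agreed"] / s["total"],
        }
        for mod, s in stats.items()
    }

# Example: two moderators, each decision re-checked by a second reviewer.
sample = [
    {"moderator_id": "mod_a", "decision": "remove", "reviewer_decision": "remove"},
    {"moderator_id": "mod_a", "decision": "keep",   "reviewer_decision": "keep"},
    {"moderator_id": "mod_b", "decision": "remove", "reviewer_decision": "keep"},
    {"moderator_id": "mod_b", "decision": "remove", "reviewer_decision": "remove"},
]
print(audit_moderator_consistency(sample))
```

A report like this can feed the periodic review described above, for example by flagging moderators whose removal rates or reviewer agreement fall well outside the range of their peers.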

It is particularly relevant to consider fairness if you are using AI technologies to help you analyse content. This is because AI technologies can be susceptible to bias, which can result in discriminatory outcomes.

You must ensure that any technologies you use to process personal information are sufficiently statistically accurate and do not produce discriminatory outcomes. You should conduct regular checks of your systems for accuracy and bias.
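As an illustration only, the sketch below shows one way such a check might be run over labelled evaluation data, comparing accuracy and false positive rates (content wrongly flagged) across groups of users. The field names and the use of the gap in false positive rates as a simple disparity signal are assumptions made for the example, not requirements of this guidance.

```python
# Illustrative sketch: periodic accuracy and bias check for an automated
# moderation classifier, using labelled evaluation data with a group attribute.
# Field names and metrics are assumptions for the example.
from collections import defaultdict

def accuracy_and_bias_report(records):
    """Return per-group accuracy and false positive rate for flagged content."""
    groups = defaultdict(lambda: {"n": 0, "correct": 0, "fp": 0, "negatives": 0})
    for r in records:
        g = groups[r["group"]]
        g["n"] += 1
        g["correct"] += r["predicted"] == r["actual"]
        if not r["actual"]:              # content did not actually violate policy
            g["negatives"] += 1
            g["fp"] += r["predicted"]    # but the system flagged it anyway
    return {
        name: {
            "accuracy": g["correct"] / g["n"],
            "false_positive_rate": g["fp"] / g["negatives"] if g["negatives"] else 0.0,
        }
        for name, g in groups.items()
    }

# Example: predicted/actual are True when content is judged to violate policy.
sample = [
    {"group": "group_1", "predicted": True,  "actual": True},
    {"group": "group_1", "predicted": False, "actual": False},
    {"group": "group_2", "predicted": True,  "actual": False},
    {"group": "group_2", "predicted": False, "actual": False},
]
report = accuracy_and_bias_report(sample)
fprs = [m["false_positive_rate"] for m in report.values()]
print(report)
print("Largest false positive rate gap between groups:", max(fprs) - min(fprs))
```

Running a check like this on a regular schedule, and investigating large gaps between groups, is one practical way to evidence the regular accuracy and bias reviews described above.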