
How do we ensure our IoT products process personal information fairly?


Fairness in a data protection context is about how you do the processing as well as its outcome (ie its effect on people).

For processing to be fair, you must use information in ways that: 

  • people would reasonably expect; and
  • do not have unjustified adverse effects on them. 

Fairness is closely linked to transparency – part of ensuring your processing is fair is explaining to users how you use their information. But fairness is not only about transparency. For example, you are unlikely to be treating people fairly if your IoT product collects personal information that’s not necessary for its functions.

You should regularly review how you use personal information in your IoT products. This can minimise the risk of unfair outcomes for users. 

For example, you could set up collaborative ways of working between your design, technical, legal and policy teams to ensure there is alignment between the product’s features and the personal information you need to collect for it to function properly. 

You could provide guidance and training to your staff on how to ensure data minimisation in product design and development. 

Fairness is also closely linked to necessity, proportionality and purpose limitation.

How does the fairness principle apply when an IoT product uses AI?

It is particularly relevant to consider fairness if your IoT product uses AI – for example, to help you analyse information about people’s interactions with the product and any associated service like an app. 

This is because AI technologies can be susceptible to bias, which can result in discriminatory outcomes. 

You must ensure your IoT products perform accurately and produce unbiased, consistent outcomes. 

Example 

A fitness tracker manufacturer develops an AI system that calculates users’ health and fitness levels. It will use the AI system to suggest personalised exercise and nutrition plans to the users. 

The manufacturer trains the AI system on a large dataset that includes height, weight, activity levels, body mass index and fat composition. 

In testing, the manufacturer finds that the AI system gives women lower fitness scores because women naturally have a higher healthy body fat composition than men do.

In this case, the AI system puts a certain group (women) at a disadvantage, which seems discriminatory. 

There are many reasons why the system might give women lower scores. In this case, the most likely cause is imbalanced training data that contains more men than women.
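As an illustration only (not part of this guidance), the sketch below shows one way a manufacturer could check training data for this kind of imbalance before training a model. It assumes the data sits in a pandas DataFrame with a hypothetical "sex" column; the column name, tolerance and synthetic data are placeholders.

```python
# Minimal sketch of a group-balance check on training data.
# The "sex" column, the tolerance and the example data are assumptions
# for illustration; adapt them to your own dataset.
import pandas as pd

def check_group_balance(df: pd.DataFrame, group_col: str = "sex",
                        tolerance: float = 0.1) -> pd.Series:
    """Return each group's share of the training data and warn about skew."""
    shares = df[group_col].value_counts(normalize=True)
    expected = 1 / len(shares)  # share each group would have if evenly split
    skewed = shares[(shares - expected).abs() > tolerance]
    if not skewed.empty:
        print(f"Warning: imbalanced groups in '{group_col}': {skewed.to_dict()}")
    return shares

# Example usage with synthetic data (800 men, 200 women)
training_data = pd.DataFrame({"sex": ["M"] * 800 + ["F"] * 200})
check_group_balance(training_data)
```

A check like this only surfaces the imbalance; fixing it might involve collecting more representative data, resampling or reweighting, depending on the circumstances.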

You must ensure that any AI technologies you use in your IoT products are sufficiently statistically accurate and do not discriminate. You should regularly check your systems for accuracy and bias.
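A minimal sketch of what such a recurring check could look like is below, assuming predicted and actual fitness scores are available per user. The column names, group labels and the parity threshold are illustrative assumptions, not requirements from this guidance.

```python
# Minimal sketch of a recurring accuracy-and-bias check: compare error
# and mean predicted score across groups and flag large gaps for review.
# Column names, group labels and max_gap are illustrative assumptions.
import pandas as pd

def group_metrics(df: pd.DataFrame, group_col: str,
                  y_true: str, y_pred: str) -> pd.DataFrame:
    """Mean absolute error, mean prediction and count per group."""
    df = df.assign(abs_error=(df[y_true] - df[y_pred]).abs())
    return df.groupby(group_col).agg(
        mae=("abs_error", "mean"),
        mean_score=(y_pred, "mean"),
        n=(y_pred, "size"),
    )

def flag_disparity(metrics: pd.DataFrame, column: str = "mean_score",
                   max_gap: float = 5.0) -> bool:
    """Flag for human review if the gap between groups exceeds a threshold."""
    gap = metrics[column].max() - metrics[column].min()
    if gap > max_gap:
        print(f"Review needed: {column} differs by {gap:.1f} across groups")
        return True
    return False

# Example usage with synthetic scores
results = pd.DataFrame({
    "sex": ["M", "M", "F", "F"],
    "actual_fitness": [70, 65, 72, 68],
    "predicted_fitness": [71, 64, 60, 58],
})
metrics = group_metrics(results, "sex", "actual_fitness", "predicted_fitness")
flag_disparity(metrics)
```

Running a check like this on a schedule, and documenting the results, supports the regular reviews described above; a flagged gap is a prompt for investigation rather than proof of discrimination on its own.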

For more resources on how to identify risks to fairness in AI and how to mitigate them, see our guidance on fairness in AI.