It is no longer enough to build models that make accurate predictions. We also need to make sure that those predictions are fair.

Doing so reduces the harm of biased predictions and goes a long way towards building trust in your AI systems. To correct bias, we need to start by analysing fairness in data and models.

Measuring fairness is straightforward.

Understanding why a model is unfair is more complicated.
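As a sketch of what measuring fairness can look like, here is one common metric: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The predictions and group labels below are synthetic and purely illustrative, not from any real model.

```python
# Demographic parity difference: the absolute gap in positive-prediction
# rates between two groups. A value of 0 means both groups receive
# positive predictions at the same rate.

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rate between groups A and B."""
    rate_a = sum(p for p, g in zip(y_pred, group) if g == "A") / group.count("A")
    rate_b = sum(p for p, g in zip(y_pred, group) if g == "B") / group.count("B")
    return abs(rate_a - rate_b)

# Hypothetical predictions (1 = positive outcome) and group membership.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A gets a positive prediction 75% of the time, group B only 25%.
print(demographic_parity_difference(y_pred, group))  # → 0.5
```

Computing the number is the easy part; tracing that 0.5 gap back to its causes in the data or the model is where the real work begins.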

If you want to dig deeper, this article is a good place to start:

towardsdatascience.com/analysi

Qoto Mastodon
