It is no longer enough to build models that make accurate predictions. We also need to make sure that those predictions are fair.
Doing so reduces the harm caused by biased predictions and, in turn, goes a long way towards building trust in your AI systems. To correct bias, we first need to analyse fairness in our data and models.
Measuring fairness is straightforward.
Understanding why a model is unfair is more complicated.
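To make the measurement side concrete, here is a minimal sketch of one common fairness metric, disparate impact, computed with pandas. The data and the column names ("sex", "y_pred") are illustrative assumptions, not taken from the article:

```python
import pandas as pd

# Hypothetical protected attribute and binary model predictions
df = pd.DataFrame({
    "sex": ["M", "M", "F", "F", "M", "F", "M", "F"],
    "y_pred": [1, 1, 0, 1, 0, 0, 1, 0],
})

# Positive prediction rate for each group
rates = df.groupby("sex")["y_pred"].mean()

# Disparate impact: ratio of the unprivileged group's positive rate
# to the privileged group's. A common rule of thumb (the "80% rule")
# flags values below 0.8 as potentially unfair.
disparate_impact = rates["F"] / rates["M"]
print(f"Positive rates:\n{rates}\nDisparate impact: {disparate_impact:.2f}")
```

A metric like this tells you *that* a disparity exists; explaining *why* the model produces it is the harder part.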
If you want to dig deeper, this article is a good place to start:
https://towardsdatascience.com/analysing-fairness-in-machine-learning-with-python-96a9ab0d0705