An intersectional framework for counterfactual fairness in risk prediction. (arXiv:2210.01194v1 [stat.ME]) arxiv.org/abs/2210.01194

Along with the increasing availability of data in many sectors has come the rise of data-driven models to inform decision-making and policy. In the health care sector, these models have the potential to benefit both patients and health care providers but can also entrench or exacerbate health inequities. Existing "algorithmic fairness" methods for measuring and correcting model bias fall short of what is needed for clinical applications in two key ways. First, methods typically focus on a single grouping along which discrimination may occur rather than considering multiple, intersecting groups such as gender and race. Second, in clinical applications, risk prediction is typically used to guide treatment, and use of a treatment presents distinct statistical issues that invalidate most existing fairness measurement techniques. We present novel unfairness metrics that address both of these challenges. We also develop a complete framework of estimation and inference tools for our metrics, including the unfairness value ("u-value"), used to determine the relative extremity of an unfairness measurement, and standard errors and confidence intervals employing an alternative to the standard bootstrap.
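
The abstract does not spell out the metrics themselves, but the core intersectional idea, evaluating a risk model across jointly defined subgroups (e.g., gender by race) rather than one attribute at a time, can be illustrated with a toy disparity check. The Python sketch below is an assumption-laden illustration only: the column names, synthetic data, and simple mean-risk gap are placeholders, not the paper's counterfactual metrics, u-value, or inference procedure.

```python
# Illustrative sketch only: a naive intersectional disparity check on model
# predictions. The paper's counterfactual fairness metrics, u-value, and
# bootstrap-alternative inference are NOT reproduced here; the subgroup
# columns, score column, and gap statistic are all assumptions.
import numpy as np
import pandas as pd

def intersectional_risk_gaps(df: pd.DataFrame,
                             group_cols=("gender", "race"),
                             score_col="predicted_risk") -> pd.DataFrame:
    """Mean predicted risk per intersectional subgroup vs. the overall mean."""
    overall = df[score_col].mean()
    summary = (
        df.groupby(list(group_cols))[score_col]
          .agg(n="size", mean_risk="mean")
          .reset_index()
    )
    summary["gap_vs_overall"] = summary["mean_risk"] - overall
    return summary.sort_values("gap_vs_overall", ascending=False)

# Example usage with synthetic data (no real patient data involved):
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=1000),
    "race": rng.choice(["A", "B", "C"], size=1000),
    "predicted_risk": rng.uniform(0, 1, size=1000),
})
print(intersectional_risk_gaps(toy))
```

Note that this subgroup comparison ignores the treatment-induced statistical issues the abstract highlights; the paper's contribution is precisely to measure unfairness when predictions guide treatment, which a raw gap like the one above does not handle.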
