Why is explainable AI important when we can't even explain our own decisions very precisely, given that most of the processing happens non-consciously in the brain?

@karthikakamath I'd argue that scale makes it important. If we were talking about an isolated case - marveling at how a robot or a child learns to walk, for example - it wouldn't matter much how it happens.

When we start talking about entity X deciding it was a good idea to run somebody over, suddenly the 'why' becomes a lot more pressing.

@karthikakamath
I'm going to assume this is an actual question and not rhetorical; if I misread the context, please ignore me 😅

In the case of people, we can both empathize (e.g. simulate being in another person's position and evaluate if the person behaved reasonably) and request an explanation of a set of actions and their motivations, to determine if a legal, ethical, or moral code of conduct was breached. While certain decisions are made non-consciously, others are made consciously, and both can usually be explained post-hoc by the person in question, even if the explanation isn't perfect. In this case, liability can be established, and restitution made to victims if needed.

Systems entirely reliant upon black-box models make determining liability a massive issue. Increasingly powerful actors on the global stage (e.g. Tesla, Waymo, etc.) may be able to shirk responsibility when failures occur and/or foist blame onto others. Look into the Rafaela Vasquez case for more info; it's not as cut and dried as many sources make it out to be, IMO. There were failures on both sides, but the driver is now in the hot seat, despite the algorithm failing to alert her to the oncoming danger in a reasonable time.

Finally, I would say that anytime you have code controlling things IRL with serious ethical (driving cars), financial (trading algos), or life-altering impacts (cancer diagnosis classifiers), you want to be damn sure you understand exactly what the model is doing and why it reaches a given conclusion, both to maximize accuracy and to let other systems cross-validate the results.
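
(For a concrete flavour of what "why it reaches a given conclusion" can look like in practice, here's a minimal, hypothetical sketch - not from the original post - using scikit-learn's built-in breast-cancer dataset and permutation importance as a stand-in for the cancer-diagnosis example; the dataset, model, and approach are just illustrative assumptions, and permutation importance is only one crude, model-agnostic window into a black box.)

```python
# Hypothetical sketch: asking a black-box classifier which inputs actually
# drive its decisions, via permutation importance on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features for this particular model.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```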

P.S. Explainability may also make it easier to prune or improve existing models with rules-based approaches, rather than through lengthy retraining, but that's a whole other thing, which I will humorously refer to with the following image.
