Why is explainable AI important when we ourselves can't explain very precisely how we arrive at a decision, given that much of the processing happens non-consciously in the brain?


@karthikakamath I'd argue that scale is what makes it important. If we were talking about something isolated - marveling at how a robot or a child learns to walk, for example - it wouldn't matter much how it happens.

When we start talking about entity X deciding it was a good idea to run somebody over, suddenly the 'why' becomes a lot more pressing.
