For once, an issue where I *can* speak with a certain amount of authority.

I've been hearing "expert systems outperform human doctors, so pretty soon human doctors will be obsolete" for a few years now. It's closely akin to "planes fly themselves these days, so what do we need pilots for?" In both cases, people are paying attention to the best-case scenario with no understanding of how *enormous* the number and variety of worse cases really are.

This is complicated by the fact that in most of medicine (although not necessarily in the author's specialty) and in nearly all of air travel, the best case is also the normal case. Most of the time, whatever is wrong with you can be diagnosed and treated. Almost all the time, when you get on a plane, you'll walk off at the other end of the trip as healthy as when you boarded. It's reasonable to expect those outcomes.

Not-best and not-normal cases add up really fast.

Without any false modesty whatsoever: as a clinician, I learned a truly impressive degree of clinical judgement. From the first moment I saw a patient, I had a pretty good idea of what was wrong, what needed doing, and how it was likely to turn out. (Sadly, if my initial call was "this one's not going to make it," I was almost always right. The exceptions kept me going.) I learned from the best—one of my mentors had learned *his* trade in rural Guatemala, where resources were terribly sparse and human judgement was the only line between life and death. He held back death for decades, and it came for him far too early. Gene Gibbs, RIP.

I can't code that. Neither can anyone else, and if they tell you they can, they're lying.

In recent years, I've done a fair amount of work in clinical decision support (CDS). The idea is simple, and valid: no human, or team of humans, can remember everything they need to know. There's simply too much knowledge for the brain to hold and recall on demand. Subtle relationships exist between disparate types of data that *nobody* knows, until we tease out the numbers. We're doing this, right now. It is saving lives and relieving suffering, right now.
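The "support" pattern above can be sketched in a few lines: surface a relationship a busy human might not recall, and leave the decision with the human. This is a toy illustration, not a real clinical system; the interaction table and function names below are my own invention, though the two example interactions are well-known ones.

```python
# Toy sketch of a clinical-decision-support check (illustrative only).
# A real CDS system draws on curated knowledge bases; this table is a stand-in.
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): "increased bleeding risk",
    frozenset({"sildenafil", "nitroglycerin"}): "severe hypotension",
}

def interaction_alerts(med_list):
    """Return advisory strings for known risky pairs in a medication list.

    The output is advice for a human to weigh, not an automated decision.
    """
    meds = [m.lower() for m in med_list]
    alerts = []
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            note = INTERACTIONS.get(frozenset({a, b}))
            if note:
                alerts.append(f"{a} + {b}: {note} -- clinician review advised")
    return alerts
```

Note that the function only ever *advises*; nothing in it prescribes or withholds a drug. That design choice is the whole point of the word "support."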

The key word there is "support." Humans still absolutely, positively, 100% need to be in the loop.

Maybe that will change, someday. I'm not saying it's impossible, for two reasons. First, any time anyone says "computers will never be able to ___," they're usually proven wrong. Second, I don't want to limit my and my colleagues' imaginations. We need to stay focused, but it is a *good thing* for our reach to slightly exceed our grasp. That's how progress happens!

Just not this day, and not for many days to come. Right now, we need to keep muddling along. There's not much more human than that.

fastcompany.com/90863983/chatg

@medigoth What I find fascinating is the challenge of introducing such expert systems so that they don't make the humans less good at what they do by a) having them fall out of practice (in the same way that we don't spell carefully anymore) and b) removing the situational awareness that comes with having been part of the earlier normal decisions before things went abnormal (autopilot kicks out only when things are very wrong and the driver/pilot has no idea what is happening).


@denniskpeters Yes, very much this. To some degree it's inevitable, but we *really* need to have fallback plans.

When my father was a NASA engineer, they talked about (if I'm remembering this right) "FO-FO-FAS," i.e. fail-operational, fail-operational, fail-safe. If one major subsystem goes out, the craft still works. If two fail, the same. If three, everyone still gets home alive. Medicine needs the same standard, and I'm not at all sure we're building that into current systems.
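The FO-FO-FAS ladder described above can be sketched as a simple degradation table. This is a minimal sketch of the idea as stated in the post, not NASA's actual specification; the mode names and the function are my own assumptions.

```python
# Hypothetical sketch of the "fail-operational, fail-operational, fail-safe"
# redundancy ladder: tolerate two subsystem failures with full function,
# and a third while still getting everyone home alive.
def mode_after_failures(failures: int) -> str:
    """Return the design-intent system mode after N major subsystem failures."""
    if failures == 0:
        return "nominal"            # everything working
    if failures <= 2:
        return "fail-operational"   # mission continues on redundant subsystems
    if failures == 3:
        return "fail-safe"          # abort safely: everyone still gets home
    return "loss-of-system"         # beyond the design basis

for n in range(5):
    print(n, mode_after_failures(n))
```

The point of writing it as a table is that the safety case is explicit: you can see exactly how many failures the design absorbs before the mission, and then the crew, is at risk.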

Qoto Mastodon