Another insightful banger from @pluralistic:

pluralistic.net/2024/04/01/hum

This one focuses on LLMs and the idea of the "reverse centaur": where a centaur is a human assisted by a machine, a reverse centaur flips that, so the robot does the fun stuff while a human does the tedious, error-prone work.

I'll note from the periphery that, despite the current hype, AI is more than LLMs. Other AI systems (e.g., chess and Go engines, VLSI design tools) *do* have an internal model of the domain they reason about (a toy sketch of what that means follows the list below). Unfortunately, capability sits on a slippery continuum:

- solves the problem perfectly and deterministically
- significantly outperforms any human
- about as good as an expert human, but makes different, weird mistakes
- meh, output looks vaguely plausible
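
As a toy illustration (mine, not from the linked post) of what "an internal model of the domain" buys you, here is a minimal minimax search over tic-tac-toe: every move the program considers is checked against an explicit encoding of the game's rules, so its conclusions can be verified rather than merely sounding plausible.

```python
# Toy sketch: game-tree search with an explicit, checkable model of its
# domain (the rules of tic-tac-toe), as opposed to a model of text statistics.

# All eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str) -> str | None:
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board: str, player: str) -> int:
    """Best achievable outcome with perfect play: +1 for X win, -1 for O win, 0 for a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:   # board full, no winner: draw
        return 0
    scores = []
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i + 1:]
            scores.append(minimax(child, "O" if player == "X" else "X"))
    return max(scores) if player == "X" else min(scores)

if __name__ == "__main__":
    # From the empty board, perfect play is a draw -- the explicit model lets us prove it.
    print(minimax("." * 9, "X"))  # -> 0
```

Run from the empty board it returns 0, i.e., it proves that perfect play is a draw, which is a kind of guarantee a pure next-token predictor can't offer.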
