Another insightful banger from @pluralistic:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This one focuses on LLMs and the idea of the "reverse centaur", where a robot does the fun stuff while a human does the tedious, error-prone work.
I'll note from the periphery that, despite the current hype, AI is more than LLMs. There are other AI systems (e.g., chess and Go engines, VLSI design tools) that *do* have an internal model of the domain they're reasoning about. Unfortunately, there's a slippery continuum:
- solves the problem perfectly and deterministically
- significantly outperforms any human
- about as good as an expert human, but makes different, weird mistakes
- meh, output looks vaguely plausible