The more I think about "AI" and ML-related stuff like ChatGPT or Copilot, especially in the context of anything that requires strict correctness (say, generating code to run in production), the more I feel that James C. Scott's "Seeing Like a State" is relevant.
It's not a fully-formed thought yet, but it's the start of one.
It has to do with how measuring the "success" of a complex process using a limited, simplified set of metrics is bound to cause problems. And with the inevitable unintended consequences.
Consider how Go players describe the AI they played against as "from an alternate dimension" and talk about "alien" moves:
https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/
Or how investigating what *exactly* a given model optimizes for often leads to… surprises, like the "Jared the Lacrosse Player" CV story:
https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased
We train the AI to do a specific thing, and we measure success in a very specific way. The AI dutifully optimizes for exactly those conditions, and nothing else.
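To make that concrete: here's a deliberately silly, entirely made-up sketch (all names, features, and numbers are invented) of a model that nails its one training metric by latching onto a spurious correlation, in the spirit of the Jared story above.

```python
# A contrived, hypothetical sketch of a metric-chasing hiring model.
# Nothing here is real data; it's an illustration of the failure mode.

# Past hiring decisions the model is trained to reproduce. By coincidence,
# most past hires were named Jared and played lacrosse.
training_data = [
    ({"name": "Jared", "plays_lacrosse": True},  True),
    ({"name": "Jared", "plays_lacrosse": True},  True),
    ({"name": "Jared", "plays_lacrosse": True},  True),
    ({"name": "Sam",   "plays_lacrosse": False}, False),
    # ...imagine thousands more rows with the same skew
]

def model(resume):
    # The rule an optimizer happily converges on: it maximizes accuracy
    # on the training set, and nothing in the objective says it's absurd.
    return resume["name"] == "Jared" and resume["plays_lacrosse"]

accuracy = sum(
    model(resume) == hired for resume, hired in training_data
) / len(training_data)
print(f"training accuracy: {accuracy:.0%}")  # 100% -- the metric is happy
```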
Hilarity ensues.