The more I think about "AI" and ML-related stuff like ChatGPT or Copilot, especially in the context of anything that requires strict correctness (say, generating code to run in production), the more I feel that "Seeing Like a State" is relevant.
It's not a fully-formed thought yet, but it's a start of one.
It has to do with how measuring the "success" of a complex process with a limited, simplified set of metrics is bound to cause problems. And with the inevitable unintended consequences.