The more I think about "AI" and ML-related stuff like ChatGPT or Copilot, especially in the context of anything that requires strict correctness (say, generating code to run in production), the more I feel that "Seeing Like a State" is relevant.

It's not a fully-formed thought yet, but it's the start of one.

It has to do with how measuring the "success" of a complex process with a limited, simplified set of metrics is bound to cause problems. And with the inevitable unintended consequences.

Consider how Go players describe the AI they played against as "from an alternate dimension" and talk about "alien" moves:
theatlantic.com/technology/arc

Or how fully investigating what *exactly* a given model optimizes for often leads to… surprises, like the "Jared the lacrosse player" CV story:
qz.com/1427621/companies-are-o

We train the AI to do a specific thing and we measure success in a very specific way. The AI dutifully optimizes for that set of conditions, and nothing else.

Hilarity ensues.
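To make that concrete, here's a toy sketch of the dynamic (entirely synthetic data and made-up feature names, not the actual system from the qz.com story): a classifier trained to match historical hiring labels will happily put weight on an irrelevant feature, so long as that feature happens to predict the measure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two synthetic features about candidates:
#   skill           - genuinely related to performance, but noisy
#   played_lacrosse - irrelevant, yet (by construction here) correlated
#                     with past "good hire" labels
skill = rng.normal(size=n)
played_lacrosse = (rng.random(n) < 0.5).astype(float)

# The measure we train against: historical hiring decisions that,
# in this synthetic setup, leaned heavily on the irrelevant feature.
label = (0.3 * skill + 2.0 * played_lacrosse
         + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, played_lacrosse])
model = LogisticRegression().fit(X, label)

print("training accuracy:", model.score(X, label))
print("weight on skill:          ", model.coef_[0][0])
print("weight on played_lacrosse:", model.coef_[0][1])
# The measure (matching historical labels) became the target, so the model
# leans on whatever feature best predicts that measure -- relevant or not.
```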

It's kind of like such models become tools for generating examples of Goodhart's Law:
en.wikipedia.org/wiki/Goodhart

> When a measure becomes a target, it ceases to be a good measure.

We measure success in a particular way, so for the model we're training, that measure becomes the target.

(I'd prefer to avoid ascribing intentionality here, but I struggle with the language)

ML is a tool for turning measures into targets? 🤔

@rysiek Yes. It's a tool that does that much faster than groups of humans.
