The more I think about "AI" and ML-related stuff like ChatGPT or Copilot, especially in the context of anything that requires strict correctness (say, generating code to run in production), the more I feel that "Seeing Like a State" is relevant.

It's not a fully-formed thought yet, but it's the start of one.

It has to do with how measuring the "success" of a complex process using a limited, simplified set of metrics is bound to cause problems. And with the inevitable unintended consequences.

Consider how Go players describe the AI they played against as "from an alternate dimension" and talk about "alien" moves:
theatlantic.com/technology/arc

Or how fully investigating what *exactly* a given model optimizes for often leads to… surprises, like the Jared the Lacrosse Player CV thing:
qz.com/1427621/companies-are-o

We train the AI to do a specific thing and we measure success in a very specific way. The AI dutifully optimizes for exactly that set of conditions, and nothing else.

Hilarity ensues.
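To make that concrete, here's a minimal toy sketch of the failure mode (all features, names, and numbers are invented for illustration, not taken from the actual audited tool): a classifier trained only to reproduce past hiring decisions will cheerfully report "is named Jared" as its strongest signal, if that's what happens to correlate with the labels.

```python
# Toy sketch: a model given one narrow success metric (predict past hires)
# latches onto whatever correlates with it, relevant or not.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: two spurious ones and one genuinely job-relevant one.
is_jared = rng.binomial(1, 0.05, n)         # candidate is named Jared
played_lacrosse = rng.binomial(1, 0.10, n)  # played high-school lacrosse
skill = rng.normal(0, 1, n)                 # actual job-relevant skill

# Historical "hired" labels: past humans happened to favour Jareds and
# lacrosse players, so the spurious signals leak into the training target.
logits = 2.0 * is_jared + 1.5 * played_lacrosse + 0.3 * skill - 1.0
hired = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = np.column_stack([is_jared, played_lacrosse, skill])
model = LogisticRegression().fit(X, hired)

# The model "succeeds" by the only metric it was given and duly reports
# the spurious features as the strongest indicators of job performance.
for name, coef in zip(["is_jared", "played_lacrosse", "skill"], model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")
```

The model isn't wrong by its own lights; it did precisely what it was asked. The problem is that what it was asked is a drastically simplified stand-in for what we actually wanted.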


@rysiek "After an audit of the algorithm, the resume screening company found that the algorithm found two factors to be most indicative of job performance: their name was Jared, and whether they played high school lacrosse. Girouard’s client did not use the tool." XD
