So, reading through today's "ai" horror story - link below; it's the one where a SaaS dude has been vibe coding and gave a product called "Replit" access to prod, which the thing then apparently deleted - there's something -very- striking that stood out to me.
The guy asks the 'ai' what "he" - note the wholly inappropriate anthropomorphization - had done and why.
And then trusts the response.
Now, not to put too fine a point on it, but this is in a context where the human involved has noted -multiple- times that the damn fool thing had made shit up wholesale.
Why in the fuck did he - the person; I refuse to assign a gender to a machine - trust the LLM's output instead of having transaction logging enabled to audit the actions the machine had taken on the systems in question?
This is some very fucking basic SDLC practice shit that he apparently failed to implement.
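And "transaction logging" here is not exotic wizardry. A minimal sketch of what I mean, assuming the prod database is Postgres - and that's purely my assumption about his stack:

  -- Log every data-modifying statement (INSERT/UPDATE/DELETE/TRUNCATE) plus all DDL.
  ALTER SYSTEM SET log_statement = 'mod';
  -- Record who connected and disconnected, and when.
  ALTER SYSTEM SET log_connections = 'on';
  ALTER SYSTEM SET log_disconnections = 'on';
  -- Put timestamp, pid, user, and database on every log line.
  ALTER SYSTEM SET log_line_prefix = '%m [%p] %u@%d ';
  SELECT pg_reload_conf();  -- apply without a restart

With that in place, "what did the machine do and why" is a grep through the server log, not a question you put to the thing that just torched your data.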
LLM usage rots people's brains, and this is yet more evidence of that.
Like, above and beyond the foolishness of granting prod access in the first place, where in the fuck are his transaction logs?
Why is his operations infrastructure so fragile that a rogue user can take down prod during a code freeze?
This is completely unacceptable operations practice from start to finish even before the "ai" gets involved.
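The fix for the rogue-user part is equally ancient and boring. A sketch, again assuming Postgres, where the role name 'coding_agent' and the database name 'prod' are both entirely hypothetical:

  -- Hypothetical role for the automated agent; the point is least privilege.
  CREATE ROLE coding_agent WITH LOGIN;
  -- Start from nothing:
  REVOKE ALL ON DATABASE prod FROM coding_agent;
  REVOKE ALL ON SCHEMA public FROM coding_agent;
  -- Read-only access, which is all an agent needs during a code freeze.
  GRANT CONNECT ON DATABASE prod TO coding_agent;
  GRANT USAGE ON SCHEMA public TO coding_agent;
  GRANT SELECT ON ALL TABLES IN SCHEMA public TO coding_agent;

A role scoped like that cannot DROP, DELETE, or TRUNCATE anything, no matter how confidently the machine hallucinates.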
I'm also curious whether confronting the model in the same session makes its truthfulness worse or better. I would estimate worse, because that matches a pattern of someone digging themselves in deeper while being abused by an authority for their failures - a pattern that's common in _some_ stories and repetitive when present.