@dealingwith "Summing up the top [user agent] groups, it looks like my server is doing 70% of all its work for these fucking LLM training bots that don't do anything except crawl the fucking internet over and over again."
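(A figure like that usually comes from grouping access-log lines by user agent. A minimal sketch of that kind of tally, assuming a combined-format log; the file path and bot substrings below are placeholders, not from the original post:)

```python
# Rough sketch: share of requests per user-agent group in a combined-format access log.
# "access.log" and the bot markers are assumptions, not details from the thread.
import re
from collections import Counter

BOT_MARKERS = ["GPTBot", "ClaudeBot", "CCBot", "Bytespider", "Amazonbot"]  # examples only

# In the combined log format, the user agent is the last quoted field on each line.
UA_RE = re.compile(r'"([^"]*)"\s*$')

counts = Counter()
total = 0
with open("access.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        match = UA_RE.search(line)
        if not match:
            continue
        ua = match.group(1)
        total += 1
        group = next((m for m in BOT_MARKERS if m in ua), "other")
        counts[group] += 1

for group, n in counts.most_common():
    print(f"{group:12s} {n:8d}  {100 * n / total:5.1f}%")
```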

@dealingwith I love working with LLM coding assistants. For a mediocre-at-best dev like me they're a super power-up. This thoughtless abuse through misuse makes me sick though.

@maphew
Have you not just admitted you are not qualified to judge the quality of their output?

As the old saying goes: "Since debugging is harder than writing code, if you use all your smarts to write the code, you've just written code you're not smart enough to debug."

@EndlessMason @maphew @dealingwith

I've given it a fair shot, but even relatively simple functions almost inevitably contain completely invalidating flaws that require a rewrite from scratch -- it's not a matter of just "fixing the bug".

What's more, in these cases the chatbot essentially lies about what its code does.

I honestly think they were trained substantially on StackOverflow and the like. And the problem with that is obvious: it's a site dedicated to problematic code posted by the confused and uncomprehending.

And the chatbots are "learning" accordingly: how to write code that seems correct but isn't, and how to describe what you meant to do and not what you did.

@pieist > "they were trained substantially on StackOverflow and the like. And the problem with that is obvious: it's a site dedicated to problematic code posted by the confused and uncomprehending."

+1. It's amazing what's come out of the brute-force "stuff everything in and see what comes out" approach, but I don't think we've seen what's possible with intelligent selection of training material.
