I held out for a long time, but I finally started using LLMs in my regular coding work. I couldn't deny the utility of tools like Claude any longer, when they can do in 2 minutes what would take me an hour.

There are problems: hallucinations, verbosity, bad/unperformant/inaccessible code. I have to correct it a lot. That's OK.

The main problem I find is that it's sucked a lot of the fun out of coding for me. It's like I've been pushed into a management role when I just wanted to stay a coder.

The other issue of course is that I'm not learning as much when I use LLMs. Even if it's an iterative, back-and-forth process, the tool is doing ~80% of the thinking for me.

It feels like I either need to spend more time learning outside of coding, or just accept at some level that I'm "cashing in my chips" and relying on ~20 years of actual coding experience.

Either way, I feel less excited by these tools than defeated. They're incredible magic wands, but I kind of liked doing my own sorcery?

Of course, this is a choice: I *could* choose not to use LLMs. When I ride my bike, I don't bemoan the fact that a car could do it faster – the goal is exercise.

But nobody's paying me to ride my bike. If I were a delivery driver, it'd be pretty unprofessional to show up an hour late with a cold pizza just because I like biking.

With LLMs, it almost feels like malpractice *not* to use them at this point. I can't justify taking ~3x longer to ship a feature just because I don't enjoy using them.

I'm familiar with all the arguments against GenAI, and I'm a big fan of authors like @baldur, whose excellent book, The Intelligence Illusion, I've read twice. It's highly recommended. illusion.baldurbjarnason.com/

I have a degree in computational linguistics, and that's partly why I was so skeptical of these tools for so long. I still am! But more and more I felt like the world was moving on without me. To this day I feel deeply conflicted.

@nolan I'm sure you're being bombarded with replies

But, brushing past the issues with the software industry that enable problematic use of LLMs: it's understandable that coders feel conflicted about them, even if you assume the tech works as promised, because you've just changed jobs

You've gone from thoughtful problem-solving to babysitting

Monitoring automation will never be an engaging activity

And, in the long run, a babysitter gets paid much less than an expert

@baldur @nolan the engaging activity of thoughtful problem-solving is usually on a higher level than "implement this basic thing you already know" code-monkeying. The LLMs can automate the latter while not being smart enough for the former. Win-win.
