I feel like this is true for lots of kinds of conversations and not just tech interviews.

People are correctly pointing out that, if you dig into the logic of basically anything, it falls apart, but that's also generally true of actual humans, even experts.

Sure, twitter.com/YossiKreinin/statu is ridiculous, but have you tried asking an expert coach on almost any topic why you should do X? I think the level of reasoning is fairly similar to what Yossi observes ChatGPT doing.

E.g., try listening to one of the top paddling coaches in the world explain *why* you should do things (the "what" is good; the "why" is nonsense): youtube.com/watch?v=VqXIF4ToUc

Why do you let the boat "run" between strokes? The explanation given is that the boat is moving at top speed when you remove the paddle from the water, so putting the paddle back in slows you down.

But of course this is backwards. The boat decelerates whenever you're not pulling. You're at top speed at that moment because you just finished pulling and have only just begun slowing down!
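
To make the direction of causality concrete, here's a toy numerical sketch (my own illustration with made-up constants, not anything from the video): a boat with quadratic drag gets constant thrust while the paddle is in the water and coasts during recovery. Speed peaks the instant the pull ends and decays until the next catch.

```python
# Toy stroke-cycle simulation; all constants are made-up, illustrative values.
DT = 0.001                  # timestep, seconds
MASS = 90.0                 # paddler + boat, kg
THRUST = 120.0              # paddle force during the pull, N
DRAG = 4.0                  # quadratic drag coefficient, kg/m
PULL, RECOVERY = 0.4, 0.6   # stroke phase durations, seconds

v, t = 0.0, 0.0
end_of_pull = end_of_recovery = 0.0
for _ in range(int(20 / DT)):  # run 20 s, long enough to reach steady state
    pulling = (t % (PULL + RECOVERY)) < PULL
    force = (THRUST if pulling else 0.0) - DRAG * v * v
    v += force / MASS * DT
    t += DT
    if pulling:
        end_of_pull = v        # speed rises during the pull, peaking as it ends
    else:
        end_of_recovery = v    # speed falls the whole time the paddle is out

print(f"end of pull: {end_of_pull:.2f} m/s, "
      f"end of recovery: {end_of_recovery:.2f} m/s")
# The end-of-pull speed comes out noticeably higher than the end-of-recovery
# speed: top speed coincides with the paddle exiting the water because the
# pull just ended, not because taking the paddle out preserves speed.
```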

Lawler even notes that you only want to put the paddle back into the water ASAP when accelerating. But if putting the paddle in ASAP slows you down, why would you only do it when accelerating? You only want to slow down while accelerating? None of his explanations of why you should do anything make sense.

And this is fairly typical of what you get when you ask why. E.g., when I asked why wider tires have better grip, I had to go to page 7 of the Google results before finding an answer that wasn't ChatGPT-like obvious nonsense.

Per twitter.com/danluu/status/1304, every explanation on the first six pages of Google results was missing at least one key factor, and most were ChatGPT-like nonsense, in that they contained contradictions someone with no expertise in the area could spot (e.g., invoking the schoolbook friction law F = μN, which has no contact-area term, to explain why a larger contact patch gives more grip).

Although you can't publicly play with it, I've heard that Google's chat bot is much better at outputting text that doesn't contain these kinds of "obvious" logical contradictions than ChatGPT and it's been better for years.

Because I'm a fallible human who's prone to all sorts of blunders, it took me two hours to remember that, in 2015, I wrote a blog post about how AI doesn't need to be all that good to displace humans due to human failures in basic tasks:

danluu.com/customer-service/

Of course I would've integrated this into the original thread had I remembered it earlier, but my memory is only above average, which is to say it's terrible in absolute terms, just not relative to other humans.

Now that ChatGPT has been out for a while, I've seen quite a few "thinkpieces" about ChatGPT that talk about the danger of its ability to produce large amounts of vaguely plausible but obviously wrong text, since most people don't check plausibility.

I don't really disagree, but I find it ironic that these pieces tend to contain the same kinds of errors they're calling out, e.g., the recent one that said "/." is a CLI reference (I asked ChatGPT and it told me a similar wrong thing) and

had the standard pop-misconception about the dark ages and the enlightenment (unlike the author, ChatGPT didn't get this wrong).

Without context, these pieces could appear to be ironic, ChatGPT-fuelled articles, but they're all by people I recognize as having produced similar work for years, which makes it seem like the articles are genuine.

Scale and accessibility matter; there's a big difference between ChatGPT and these authors. But the irony still tickles my funny bone.

I've also seen a lot of the reverse: people saying that, e.g., professors who call ChatGPT a big deal are wrong because students actually produce better work, since ChatGPT makes such obvious errors.

IMO, as someone who's graded assignments at two different decent universities (worldwide rank ~50 and ~100), that seems quite wrong.

For short essays (the kind people often get for homework or exams), ChatGPT is well above median student performance at decent schools.

This seems like a very effective condemnation of short essays as something that is intended to be graded, no? (We can see something that can respond to prompts for essays, but is bad at answering simple questions from the same domains.)
