If you've followed the discussion at all, you probably have seen a lot of examples of failure modes of #ChatGPT and other large language models, e.g., their propensity to make stuff up.

I'm curious -- have you been able to do useful things with this technology, personally or professionally, beyond your own amusement? If so, what have been the most valuable uses?

@eloquence

It's a fantastic sparring partner, exactly _because_ it is so fluently mediocre. You'll get the vanilla response, and then you can challenge yourself to realize why this is not good, and how to go beyond that. I find it great to hone my own thinking - not to substitute for it.

@boris_steipe @eloquence The circular irony is that you need to know enough to challenge it, and if you’ve got enough knowledge to challenge it, you didn’t need #chatGPT to begin with. In contrast, the times it’s most likely to be used are when someone doesn’t know enough to challenge it.


@jeremysayz @eloquence

I disagree.

That's only true if your premise is that it knows enough. But that premise also means you've given up and conceded that a computer can think _for_ you - and that the value of whatever you produce has dropped to zero.

When your premise is that you can do better, the AI's baseline is your starting point. Not "knowing enough to challenge it" is not an option. So you start poking the arguments and taking them apart. And you can actually ask it for help when necessary - that's when you get it to think _with_ you.

Using computers to think with us, or for us ... that's what it boils down to.

@boris_steipe @eloquence I agree with that as an aspirational goal. Rephrased: we should aspire to use it to augment our own knowledge as a [mediocre] sparring partner.

The challenge, I believe, is that it is often used as a substitute for knowledge. I hear people describe uses where they cede their own learning about a subject to the tool. And since it often "speaks" with unwarranted confidence, people won't know which parts to poke holes in - precisely the parts they're unfamiliar with.

@boris_steipe @eloquence What I'd love to see is a Bayesian approach, where the model could indicate which of its claims it is less confident about!

Most ML/AI is missing this critical readout.
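
For what it's worth, one rough way to approximate that readout today - short of a properly Bayesian model - is to sample the model several times and treat agreement as a confidence proxy. A minimal sketch, assuming a `sample_completion` callable that wraps whatever model API you use (the names here are illustrative, not any real library's API):

```python
from collections import Counter
from typing import Callable, List, Tuple

def self_consistency_confidence(
    prompt: str,
    sample_completion: Callable[[str], str],  # hypothetical wrapper around a model call
    n_samples: int = 10,
) -> Tuple[str, float]:
    """Crude confidence proxy: sample the model several times at nonzero
    temperature and report how often the most common answer recurs.
    High agreement suggests higher confidence; disagreement flags the
    spots a reader should poke at."""
    answers: List[str] = [sample_completion(prompt) for _ in range(n_samples)]
    counts = Counter(a.strip().lower() for a in answers)
    top_answer, top_count = counts.most_common(1)[0]
    return top_answer, top_count / n_samples

if __name__ == "__main__":
    import random

    def toy_model(prompt: str) -> str:
        # Stand-in for an actual LLM call; answers inconsistently on purpose.
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    answer, agreement = self_consistency_confidence("Capital of France?", toy_model)
    print(f"answer={answer!r}, agreement={agreement:.0%}")
```

It's not the calibrated uncertainty you're asking for, but even this kind of agreement score would already tell a reader where to be skeptical.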
