Chatting with my Pilates instructor today and she was offhandedly suggesting she would later ask ChatGPT how many fibers were connected to a specific part of the body.

I jokingly said it would give her a number about as reliable as its count of the 'r's in 'strawberry': 5.

She did not get it. Even when I tried to explain that ChatGPT makes up facts, she asked 'Is strawberry some code word for programmers?'

She said she'd look up news stories about that issue. I hope she does, and becomes less confident in ChatGPT.

I just started with her so I did not go into the environmental and social costs of AI.

@MCDuncanLab it's pretty much impossible to have that conversation with people "outside the loop"* because all they see is a constant stream of information strongly affirming that "AI" is amazing and will soon be doing everyone's jobs... everything from the press releases of the AI companies, to the media happily regurgitating them, to major governments reinforcing the narrative by talking about how we basically need to prioritise our economies for "AI" (fuck the environment!). In the face of that, people just think naysayers are tinfoil hat loonies.

* where by "loop" I mean: Fediverse 😂

@yvan

Agreed, I think it is fortunate that there is something ridiculous that it gets wrong.

It's pretty easy for someone to search 'ChatGPT r's in strawberry' and get a relevant news story or cute TikTok.


@MCDuncanLab @yvan I agree with what you're saying; however, I think we should stop using the strawberry example, for two reasons.

First, LLMs are not designed to count, but to predict the next most probable set of words, so you're evaluating them on a task they were not designed to complete. Second, there are more complex tasks where they do get it right most of the time. The muscle fibre question has likely appeared on many websites used to train ChatGPT, so it will probably give the correct answer, and the whole strawberry thing can be dismissed as a glitch (by the way, the latest version of ChatGPT gives the right answer).
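The point about counting versus next-word prediction can be sketched in a few lines of Python. The tokenization shown is hypothetical (actual splits vary by model and tokenizer); the point is only that a model operating on subword chunks never directly sees individual letters:

```python
# Counting characters is trivial for ordinary code:
word = "strawberry"
print(word.count("r"))  # 3

# But an LLM doesn't process individual letters; it processes
# subword tokens. A hypothetical split might look like this:
tokens = ["straw", "berry"]

# From the model's perspective, the letter 'r' is not an input
# symbol at all -- it must infer letter counts statistically from
# how such chunks appeared in training data, which is why a task
# this trivial for code can be unreliable for an LLM.
print(tokens)
```

This is a simplification, of course, but it shows why "can it count letters?" is a poor benchmark for a system that was never given letters as input.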

The main issue (leaving ethics aside for a minute) is that these systems work the majority of the time, so it's easy to assume that they work all the time. That is, until they fail on a serious task. Unfortunately, the AI companies' narrative forgets to mention that, and prefers to sell their products as having PhD-level intelligence (as if having a PhD were a guarantee of anything!)

Qoto Mastodon
