
This is really interesting. I've been messing with ChatGPT, asking it fairly detailed scientific questions. It almost always makes errors. However, it will accept corrections after verifying they're true, and if you then give it the same prompt, it will regenerate the response without repeating the same mistake (though it generally makes other mistakes instead).

But it will forget the corrections as soon as the conversation ends.

@JoshuaSharp I played around with that idea a few weeks ago. I asked it several sociology- and history-related questions. Every time I argued with it, it would self-correct for the duration of that instance, but the corrections would all be gone after a hard refresh.

@JoshuaSharp Have you noticed that it makes the *same* errors over and over again when you re-start the conversation? Or does it make different ones each time?

@AmyPetty When I give it the same prompt in a different conversation, I get a different response with different errors. That said, I'm not replicating the entire dialogue. The other prompts in the dialogue weren't on the same topic as the prompt I tested, but whether they affected the output for that prompt, I don't know.

@JoshuaSharp Huh. I wondered how diverse the responses were when it comes to the hard sciences, and whether it just restated the same errors over and over or cycled different ones in and out.

I can give it identical prompts and it usually gives back the same basic analysis with minor textual variation, but at least twice I've had it spit back verbatim text from a different conversation.
