This is really interesting. I've been messing with ChatGPT, asking it somewhat detailed scientific questions. It pretty much always makes errors. However, it will accept corrections after verifying they're true, and if you give it the same prompt again, it will regenerate the response without repeating the same mistake (though it generally makes other mistakes instead).
But it forgets the corrections as soon as the conversation ends.
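(That matches how the underlying chat API works, for what it's worth: it's stateless, and each request only sees whatever conversation history you send along with it. A minimal sketch using the OpenAI Python SDK; the model name, prompt, and correction text here are just placeholders:)

```python
# Minimal sketch of the stateless chat API (OpenAI Python SDK v1+).
# Assumes OPENAI_API_KEY is set in the environment; "gpt-4o-mini" is a placeholder.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "Explain how CRISPR-Cas9 cuts DNA."}]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# A correction only "sticks" because it travels in the history we resend;
# start over with a fresh history list and the model has no memory of it.
history.append({"role": "user", "content": "Correction: Cas9 makes a blunt-ended double-strand break."})
reply2 = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(reply2.choices[0].message.content)
```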
@AmyPetty When I give it the same prompt in a different conversation, I get a different response with different errors. To be clear, I'm not replicating the entire dialogue: the other prompts in the conversation were on different topics from the one I tested, but whether they influenced the output for that prompt, I don't know.
@JoshuaSharp Huh. I'd wondered how diverse the responses were when it comes to the hard sciences, and whether it just restated the same errors over and over or cycled different ones in and out.
I can give it identical prompts and it usually gives back the same basic analysis with minor textual variation, but at least twice I've had it spit back verbatim text from a different conversation.
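(The "same basic analysis with minor textual variation" is roughly what you'd expect from sampling: by default the model picks tokens with some randomness, and at temperature 0 it's close to, though not perfectly, deterministic. A quick sketch, again with the OpenAI Python SDK and a placeholder model name:)

```python
# Sketch: same prompt at two sampling temperatures (OpenAI Python SDK v1+).
# Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is a placeholder model.
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Summarize how mRNA vaccines work."}]

for temp in (0.0, 1.0):
    out = client.chat.completions.create(
        model="gpt-4o-mini", messages=prompt, temperature=temp
    )
    print(f"--- temperature={temp} ---")
    print(out.choices[0].message.content)
```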