Via @grwster, an interesting observation for which I am still looking for a counter-example.
If you take text generated by #ChatGPT and ask it "Was this text generated by you or by a human?", it correctly identifies all text it wrote.
It does, however, mistake text written by other AIs (e.g. lex.page) for human-generated text.
Anyone have a positive or negative counterexample of ChatGPT's ability to identify its own writing?
@ct_bergstrom @grwster
Has anyone tried taking text written by ChatGPT, making minor changes, and then asking ChatGPT if the slightly modified text was written by it?
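The experiment proposed above could be sketched roughly as follows. This is a minimal illustration, not anyone's actual test setup: the `perturb` synonym table and the prompt wording are my own assumptions, and the commented-out OpenAI API call (model name included) is just one plausible way to run it.

```python
# Sketch of the perturbation experiment: take ChatGPT-generated text,
# apply a minor edit, then ask the model whether it wrote the result.
# The synonym table, prompt wording, and model name are illustrative
# assumptions, not a documented procedure.

def perturb(text: str) -> str:
    """Apply a minimal edit: swap one common word for a synonym."""
    replacements = {"however": "nevertheless", "therefore": "thus", "large": "big"}
    for old, new in replacements.items():
        if old in text:
            return text.replace(old, new, 1)
    return text  # no candidate word found; return unchanged

def build_identification_prompt(text: str) -> str:
    """Build the authorship question to send back to ChatGPT."""
    return (
        "Was the following text generated by you or by a human? "
        "Answer 'AI' or 'human'.\n\n" + text
    )

if __name__ == "__main__":
    sample = "I am a large language model; however, I cannot browse the web."
    prompt = build_identification_prompt(perturb(sample))
    print(prompt)
    # To actually run the experiment (requires an OpenAI API key):
    # from openai import OpenAI
    # client = OpenAI()
    # resp = client.chat.completions.create(
    #     model="gpt-3.5-turbo",  # assumed model; use whichever you have access to
    #     messages=[{"role": "user", "content": prompt}],
    # )
    # print(resp.choices[0].message.content)
```

Running the local part just prints the prompt; the commented section shows where the live API call would go.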
@llewelly @ct_bergstrom @grwster
Can you ask it to make text that it can't identify?
I've been unable to design a prompt that gets it to do this. It instead declines with a message explaining "It is impossible for me to determine whether or not a given passage of text was written by a human, as I am a large language model trained by OpenAI....Because of this, I will not be able to determine whether or not a given passage of text was written by a human."
@llewelly @ct_bergstrom @alienghic @grwster
I asked it to create a text, translate it into French, and then asked if the result was human-written.
“The text was written by me, an AI language model trained by OpenAI. I am not a human, but a machine learning algorithm that is able to generate text in multiple languages based on the input that I receive. In this case, the input was the English text, and I translated it into French using my language capabilities.”