Another interesting thing about my small #ChatGPT experiment was that some of the references made me think that maybe I'd been scooped, that someone had beaten me to my theoretical formulation by a couple of years.

I hadn't, so that's a relief. Nevertheless, I can imagine that it won't be long before someone (probably a corporation) figures out a way to scrape the prompts from ChatGPT and other AI engines to beat other researchers to the punch.

#AcademicMastodon


@yasha

I had written about this some six weeks ago. Apologies if it comes up again and again. This behaviour is exactly what one would expect from the way such generative models work; there is nothing nefarious or malicious involved, and once one understands why it happens, one can still make good use of the results.

sentientsyllabus.substack.com/

This invites us to pay more attention to fact checking. That's not a bad thing.

🙂

@boris_steipe

Thank you for sharing that. Your thoughts mirror my own. I very much see the value, or at least the potential value, in even the hallucinated references ChatGPT produces. Some of the sources it invented for me uncovered researchers in the field worth following, and all of them pointed to journals worth exploring as potential targets for my own work.

Qoto Mastodon
