We're in Nature with an opinion piece on how researchers should respond to #ChatGPT and conversational AI technology more generally!

It's been an interesting experience reaching consensus in an interdisciplinary team of scholars (two psychologists, one computer scientist, one philosopher, and me, an NLP researcher).

We list 5 priorities:
1. Hold on to human verification
2. Develop rules for accountability
3. Invest in truly open LLMs
4. Embrace the benefits of AI
5. Widen the debate

nature.com/articles/d41586-023


@wzuidema

Great work. Thank you!

You are spot on where you write: "This defies today’s binary definitions of authorship, plagiarism and sources, in which someone is either an author, or not, and a source has either been used, or not. Policies will have to adapt, but full transparency will always be key."

I think this continuum applies to the question of accountability as well (I developed that a bit here: sentientsyllabus.substack.com/ ). There, I propose leaving the decision about co-authorship to the authors themselves. That is certainly not deceptive, which distinguishes it from gift, ghost, and guest authorship. Transparency is key.

An unresolved implication is the desire to document process. That would be great, but adding another _dimension_ (process) to linear text is conceptually difficult, and I am not aware of any technical approaches to it.

Your proposal for non-profit LLMs is interesting, but it will ultimately run up against the same concerns as private-sector LLMs, simply because of the need for significant funding for training and operation. An alternative might be public LLMs, modelled on our public library systems. I have not seen that discussed yet. It is certainly very doable at EU scale.

Thank you for this contribution.

@boris_steipe Thanks -- those are interesting additions and alternative suggestions!

I think the difference in position about authorship disappears when we rethink what authorship will mean in the future. And I think I agree on public LLMs.

Not sure if I fully understand your point about documenting process.

@wzuidema

We expect authors to be transparent about "to what extent" AI technologies were used. In my analysis I proposed some qualitative language, but whether that provides enough transparency can be questioned: it would not make the actual flow of ideas explicit or verifiable. That is what I mean by "documenting process" - a small point, but central to the debate.

In the absence of verifiability, all we have is trust.

Realizing that has its own implications.

🙂

Qoto Mastodon

QOTO: Question Others to Teach Ourselves
An inclusive, Academic Freedom, instance
All cultures welcome.
Hate speech and harassment strictly forbidden.