#ChatGPT - How much is too much?
As far as #HigherEd is concerned, the answer is simple: "Assistance from an #AI system is too much when it interferes with the educational objectives, or the assessment of a submitted work."
But really, there is nothing simple here: this question goes to the core of what we mean by "educational objectives", what the "work" is that we are assessing, and how we assess in the first place.
I unravel this in the latest post at the #SentientSyllabus Project – it's a bit of a deep dive, before we come up for air again. In the end, it's all about #joy .
https://sentientsyllabus.substack.com/p/how-much-is-too-much
I just came up with a use of #ChatGPT that promises to be a huge time-saver: reformatting bibliographies. For reasons, I never follow the standard styles when I write. Of course, editors usually insist on whatever style they prefer. So here's ChatGPT:
Human: Please reformat the following reference in MLA style:
Alster, Bendt (1973). “An Aspect of ‘Enmerkar and the Lord of Aratta’”. Revue d'Assyriologie et d'archéologie orientale 67(2): 101–110.
ChatGPT:
Alster, Bendt. "An Aspect of 'Enmerkar and the Lord of Aratta'." Revue d'Assyriologie et d'archéologie orientale 67, no. 2 (1973): 101-110.
Human:
Now give me a bibtex for that.
ChatGPT:
Here is a BibTeX entry for the reference you provided:
@article{alster1973aspect,
  title={An Aspect of 'Enmerkar and the Lord of Aratta'},
  author={Alster, Bendt},
  journal={Revue d'Assyriologie et d'archéologie orientale},
  volume={67},
  number={2},
  pages={101--110},
  year={1973}
}
... that looks correct to me – of course it will need proofreading, but it looks like it will save a lot of pain.
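The proofreading step can even be partly automated. Here is a minimal sketch (my own illustration, not part of the conversation above) that uses only the standard library to check a generated @article entry for the fields BibTeX requires:

```python
import re

# Required fields for a standard BibTeX @article entry.
REQUIRED_ARTICLE_FIELDS = {"author", "title", "journal", "year"}

def bibtex_fields(entry: str) -> dict:
    """Extract field names and values from a single BibTeX entry."""
    return {m.group(1).lower(): m.group(2)
            for m in re.finditer(r"(\w+)\s*=\s*\{([^{}]*)\}", entry)}

def missing_fields(entry: str) -> set:
    """Return the required @article fields that the entry lacks."""
    return REQUIRED_ARTICLE_FIELDS - bibtex_fields(entry).keys()

entry = """@article{alster1973aspect,
  title={An Aspect of 'Enmerkar and the Lord of Aratta'},
  author={Alster, Bendt},
  journal={Revue d'Assyriologie et d'archeologie orientale},
  volume={67},
  number={2},
  pages={101--110},
  year={1973}
}"""

print(missing_fields(entry))  # → set(): all required fields are present
```

This won't catch wrong page numbers or misspelled names, of course, but it flags structurally incomplete entries before they reach an editor.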
🙂
In English, though, it works well ...
Our resources are #HigherEd focussed, but education is education. You may find it useful: http://sentientsyllabus.org
#SentientSyllabus #ChatGPT
Interesting. They are missing a really powerful and easy-to-implement idea: require sources, and require students to provide both the reference and a link to it. See our post here https://sentientsyllabus.substack.com/p/chatgpts-achilles-heel - and you're welcome to pass this along. There's more on the project homepage - linked from the article.
Yes! Check out the resources we wrote over at http://sentientsyllabus.org – you might find the Course Activities document covers this, at least in part.
It's just far too easy to circumvent. Try it yourself: slip in an occasional non-ASCII character, e.g. from the Cyrillic Unicode block, and the text no longer registers as computer-generated.
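To make the point about non-ASCII characters concrete, here is a tiny illustration: a Cyrillic "а" (U+0430) is visually identical to a Latin "a" (U+0061), but to any detector comparing exact character or token statistics, the two strings are simply different text.

```python
# A Cyrillic 'а' (U+0430) looks identical to a Latin 'a' (U+0061),
# but the underlying code points differ.
latin = "data"
cyrillic = "d\u0430t\u0430"  # both 'a's replaced by Cyrillic U+0430

print(latin == cyrillic)                  # → False: the strings differ
print([hex(ord(c)) for c in cyrillic])    # shows the 0x430 code points
```

A few such substitutions scattered through a text are enough to shift its token statistics away from whatever a classifier was trained on.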
What is much more problematic, by the way, are the false positives. In an academic setting, the false-positive rate would need to be indistinguishable from zero if we wanted to use such tools to investigate suspected irregularities.
Feel free to have a look at what we write about this at the Sentient Syllabus Project. http://sentientsyllabus.org
Hm. I thought what Garry Kasparov wrote in his 2018 editorial in Science was worth pondering. He gave high praise to AlphaZero, commending its ability to trade material for activity, and concluded: "Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth." (Science 362(6419): 1087).
At least Kasparov, who knows a thing or two about board games, seems to think it goes beyond the surface level. And, incidentally, regarding AlphaGo itself, Ke Jie is quoted as having said "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go." (quoted from WP:AlphaGo)
Kawabata Yasunari (and others) emphasize the non-self/non-action or 無為 of playing Go well. Non-self, that would be a strength of an AI.
Of course – if we understand the beauty of the game as being rooted in its relational aspects, as a game between humans, as a competition of minds, that determines a different kind of aesthetics. But the masters seem to appreciate a different kind of truth.
You might find some of the resources useful that we have put together at #SentientSyllabus http://sentientsyllabus.org - there is a resource document specifically on these kinds of critique activities.
"But if it can be classified, then it can be generated" .. Ah, yes - but that's not to say it is useful. Novelty is necessary, but not sufficient. The major breakthrough will come when the algorithms learn to evaluate the quality of their proposals in a generalized context. Keywords in this domain are "ranking" and "evaluation".
@MrBerard@pilote.me @ZulmaCucunuba
"Personal communication" is very much a thing in scholarship – just because some information was not published neither means we can claim that the idea was ours, nor are we prohibited to communicate the idea. That's why the "personal communication" is for. Each common citation format has their own conventions - here is the link to APA: https://apastyle.apa.org/style-grammar-guidelines/citations/personal-communications
Calling AI output a "Synthesized Communication" fits this case perfectly. We are reporting another's words, but the format is volatile.
Mind you: that does not remove the obligation to find the source of an idea, if that is the focus of our writing. "ChatGPT thinks that Fleischmann and Pons invented cold fusion." can be cited as a "synthesized communication". "Fleischmann and Pons invented cold fusion." needs a source.
HTH
🙂
Interesting thought – I'll read it.
Wondering about the relationship between a self-fulfilling process and a recursive process ...
What exactly is a self-fulfilling prophecy? What exactly is wrong with self-fulfilling prophecies? What does this mean for automated prediction?
I have a new paper, co-authored with Mayli Mertens, on these topics. Forthcoming in the journal Ethical Theory and Moral Practice.
Preprint now available at PhilPapers:
https://philpapers.org/rec/KINSPI-4
You're welcome. You are right that modified text would evade the watermark - but the filtering doesn't have to rely on just the statistical distribution of the generation process ... for larger texts you could filter according to the perplexity of the text itself. Or put differently: admit into the training data only text that actually has something new to say.
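A toy sketch of what perplexity-based filtering could look like (my own illustration, and deliberately simplified: a real pipeline would score texts with a proper language model, not character bigrams). The idea is that formulaic, repetitive text scores implausibly low perplexity under a reference model and can be filtered out:

```python
import math
from collections import Counter

def bigram_perplexity(text: str, reference: str) -> float:
    """Perplexity of `text` under an add-one-smoothed character-bigram
    model trained on `reference`. Lower = more predictable."""
    pairs = Counter(zip(reference, reference[1:]))
    unigrams = Counter(reference[:-1])
    vocab = len(set(reference)) or 1
    log_prob, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        p = (pairs[(a, b)] + 1) / (unigrams[a] + vocab)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / max(n, 1))

reference = "the quick brown fox jumps over the lazy dog " * 20
repetitive = "the the the the the the the the the"
varied = "a zebra quietly jumps over frozen boxes"

# Repetitive text is far more predictable than varied text:
print(bigram_perplexity(repetitive, reference) <
      bigram_perplexity(varied, reference))  # → True
```

A corpus filter would then set a threshold and reject texts whose perplexity falls below it, keeping only material that is genuinely informative relative to what the model already knows.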
What a radical idea: we might even apply such a filter to human discourse. Wouldn't that be nice 🙂
And over at the Sentient Syllabus Project, we don't have all the answers to that either, but by now we have some of them. This may be helpful: http://sentientsyllabus.org
We have been working on public-good HigherEd resources at the Sentient Syllabus project. Could use some good (human) ideas on how AI-tools affect learning outcomes (or whether they actually are affected at all) – that's a topic that needs some TLC. Care to have a look what we have? http://sentientsyllabus.org
I keep typing thank-you notes into #ChatGPT when I'm done working. It's not a conscious decision, just force of habit.
But now that I think about it, that's a moment more about me than about the tool - it's something I believe in. Giving gratitude feeds the giver's soul as much as the receiver's.
So say "yes" to being polite to #AI. It's good for you, and hey, someday the AI might even be capable of appreciating it.
Good question.
You might know that OpenAI is working on a digital watermarking scheme for ChatGPT, based on pseudorandom choices over its output distributions – it's touted as an anti-propaganda or anti-plagiarism tool, which doesn't quite hold up, given the question of who has access to the key. However, what you describe makes perfect sense: filtering crawled corpora is actually a really good use case.
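For intuition, here is a hedged sketch of how keyed watermark detection could work. This illustrates the general, publicly discussed idea (a secret key pseudorandomly biases token choices toward a "green list", and the key holder counts green tokens), not OpenAI's actual scheme, which has not been published; all names below are my own.

```python
import hashlib

SECRET_KEY = b"hypothetical-secret"  # only the key holder can detect

def is_green(token: str, key: bytes = SECRET_KEY) -> bool:
    """Pseudorandomly assign roughly half of all tokens to a 'green list',
    keyed by a secret so outsiders cannot reconstruct the partition."""
    digest = hashlib.sha256(key + token.encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of whitespace-separated tokens on the green list."""
    tokens = text.split()
    return sum(is_green(t) for t in tokens) / max(len(tokens), 1)

# Unwatermarked text should hover near 0.5; a generator that prefers
# green tokens would push this fraction detectably higher.
print(green_fraction("an ordinary sentence with no watermark at all"))
```

The key point for your use case: whoever holds the key can run exactly this kind of test over a crawled corpus and discard documents whose green fraction is suspiciously high.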
I think, there is a much bigger, as yet untapped (at least not yet publicized) data source which won't have that problem for a while: Google Books.
We've been building resources and writing analyses at the Sentient Syllabus Project. http://sentientsyllabus.org The goal is to identify principles that help us focus on our real goals – for all manners of course activities.
Here's a thought I haven't seen elsewhere: you can relentlessly criticize the AI, and no one gets hurt. Lots of opportunities in that. For example, have a discussion: students prompt the AI, then defeat its arguments. (Actually - that's a real-world skill right there.)
Exactly! We write about "real" approaches at the Sentient Syllabus project - resources at http://sentientsyllabus.org and analysis at https://sentientsyllabus.substack.com – you may find this interesting