
My take on Google's live presentation in Paris yesterday.

It's not about a head-to-head between Bard and Bing at all. The battleground will be data, integration, and Augmented Reality everywhere. Also, note an interesting implicit take on copyright: "interpretation".

Looking at all this, I can't shake the feeling that we did not want AI-powered search after all, but ...

sentientsyllabus.substack.com/

Trying to get the issue of academic misconduct right leads to a lot of questioning.

I just posted an analysis of AI-generated writing over at:
sentientsyllabus.substack.com/

My key point is that we need a principled and deep understanding of academic integrity, and that can be provided by a commitment to truth. Accordingly, the source of misconduct is "deceit", not the nature of the text itself. We need to keep in mind, however, that we are unlikely to be able to detect all such deceit. Therefore we need to build a collaborative framework of education. Creating an academic work ought to be as enjoyable as writing your own diary.

Many comments have been posted on OpenAI's work towards a classifier that distinguishes generated text from human-written text. According to OpenAI's own announcement, (a) input texts were not taken from an adversarial source, i.e. no deliberate attempts were made to obfuscate AI authorship; and (b) there was a false positive rate of 9%, i.e. for 9% of the texts the tool evaluated a human-written text as AI-generated.

These two points make the tool currently unfit for an academic misconduct allegation, where the false positive rate has to be practically indistinguishable from zero, and where efforts to obscure the authorship will be encountered.

Indeed, there is a short video in a response to OpenAI's tweet, in which a researcher copies ChatGPT output, runs it through GPT-3 – and the AI author is no longer recognized.

Also, the true positive rate of only 26% (i.e. the fraction of AI-generated texts that were correctly recognized as not having a human author) is rather sobering. Though this is hardly surprising: AI-generated text is optimized to appear natural.
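These two rates invite a quick base-rate check. The sketch below applies Bayes' rule; the TPR and FPR come from OpenAI's announcement, but the prevalence (the share of submissions that are actually AI-generated) is my own hypothetical assumption, chosen only for illustration:

```python
# Back-of-envelope check on OpenAI's classifier numbers.
# TPR = 0.26 and FPR = 0.09 are from OpenAI's announcement;
# prevalence = 0.20 is a hypothetical assumption for illustration.

def positive_predictive_value(tpr: float, fpr: float, prevalence: float) -> float:
    """Probability that a text flagged as AI-generated really is."""
    true_flags = tpr * prevalence
    false_flags = fpr * (1 - prevalence)
    return true_flags / (true_flags + false_flags)

ppv = positive_predictive_value(tpr=0.26, fpr=0.09, prevalence=0.20)
print(f"A flag is correct only {ppv:.0%} of the time.")  # about 42%
```

Even under this generous assumption, a flag raised by the tool would be wrong more often than it is right in any realistic misconduct setting with a lower prevalence.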

I touch on the topic in previous analyses at sentientsyllabus.substack.com – and misconduct will be the focus of the next newsletter there.

Very insightful coverage by @susan_dagostino in "Designing Assignments in the ChatGPT Era". If only more of our discourse could reach that level...

insidehighered.com/news/2023/0

Though, full disclosure: I had the honour to add a few ideas I wrote for the Sentient Syllabus project sentientsyllabus.org

First time I actually laughed with ChatGPT. There's this announcement that today's upgrade of the system came with improved factuality. help.openai.com/en/articles/68

Of course I had to test it. Nice. It can continue Fibonacci sequences, and it can continue 1, 0.5, 0.25, ... correctly. But it estimates rather than computes successive square roots, and can't really explain what it was doing there. Whatever.
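For reference, the sequences from this little test can be computed exactly in a few lines (a sketch; the starting values are my own choice, matching the 1, 0.5, 0.25 example, and successive square roots of 2 stand in for the root sequence I asked about):

```python
import math

# The sequences from the chat test, computed rather than estimated.

# Fibonacci: each term is the sum of the previous two.
fib = [1, 1]
for _ in range(6):
    fib.append(fib[-1] + fib[-2])

# Halving: 1, 0.5, 0.25, ... (each term is half the previous one).
halving = [1 / 2 ** n for n in range(5)]

# Successive square roots of 2: each term is the square root of
# the previous one; these are the values ChatGPT only approximated.
roots = [2.0]
for _ in range(4):
    roots.append(math.sqrt(roots[-1]))

print(fib)
print(halving)
print([round(r, 4) for r in roots])
```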

Then I ask it for sources for the comparison between "specifications grading" and the "ungrading" movement. I get five books ... none of which actually exist with exactly those titles, authors, and years. My, my. Though, as always, this is not entirely useless, since: ...

Human: ... if only those existed. But yes, similar titles float around in that field, and some of those authors have made their careers with the topic. Thank you.

ChatGPT: You're welcome! I apologize for the confusion caused by the incorrect sources. It's great to hear that similar titles and authors do exist and are actively working on these topics.
----

Oh my! Indeed. That's a relief to both of us. 😂😂😂 Similar titles exist.
----

I'm sure factuality has improved if they say so. I'm also sure that there's scope for more improvement. Like an actual search for sources. Two more months, right?

sentientsyllabus.substack.com

Norms for Publishing Work Created with AI dailynous.com/2023/01/30/norms

Daily Nous picks up on the topic we covered two days ago at sentientsyllabus.substack.com/ ... with a discussion that is a bit deeper than many others. Still, much more to be said.

Getting AI authorship right is harder than one would think.

I just posted an analysis on AI authorship over at:
sentientsyllabus.substack.com/

Key takeaways include: arguments based on our usual criteria of /contribution/ and /accountability/ are brittle; the problem lies with authorship being a vague term (cf. the sorites paradox); we are using posterior reasoning to justify our intuitions; and reliable intuitions about the actual nature of the emergent(!) source-AI-author system need more work. A practical policy proposal rounds it off: empower the authors, use meaningful acknowledgements, quantify contributions.

Reuters:

At a time when Google and Bing are gearing up for the tech showdown of the decade ...

they are laying off five and six percent of their (human) workforces, respectively.

reuters.com/business/google-pa

- How much is too much?

As far as the Sentient Syllabus Project is concerned, the answer is simple: "Assistance from an AI system is too much when it interferes with the educational objectives, or the assessment of a submitted work."

But really, there is nothing simple here: this question goes to the core of what we mean by "educational objectives", what the "work" is that we are assessing, and how we are assessing in the first place.

I unravel this in the latest post for the Sentient Syllabus Project - it's a bit of a deep dive, before we come up for air again. In the end, it's all about ...

sentientsyllabus.substack.com/

I just came up with a use of ChatGPT that promises to be a huge time-saver: reformatting bibliographies. For reasons, I never follow the standard styles when I write. Of course editors usually insist on whatever they prefer. So here's ChatGPT:

Human: Please reformat the following reference in MLA style:

Alster, Bendt (1973). “An Aspect of ‘Enmerkar and the Lord of Aratta’”. Revue d'Assyriologie et d'archéologie orientale 67(2): 101–110.

ChatGPT:
Alster, Bendt. "An Aspect of 'Enmerkar and the Lord of Aratta'." Revue d'Assyriologie et d'archéologie orientale 67, no. 2 (1973): 101-110.

Human:
Now give me a bibtex for that.

ChatGPT:
Here is a BibTeX entry for the reference you provided:

@article{alster1973aspect,
  title={An Aspect of 'Enmerkar and the Lord of Aratta'},
  author={Alster, Bendt},
  journal={Revue d'Assyriologie et d'archéologie orientale},
  volume={67},
  number={2},
  pages={101--110},
  year={1973}
}

... that looks correct to me – of course it will need proofreading, but it looks like it will save a lot of pain.
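Since proofreading is still required, a throwaway script can at least catch gross omissions before an editor does. A minimal sketch of my own (the required-field list follows the usual @article convention; it is not something ChatGPT produced):

```python
import re

# ChatGPT's entry, pasted verbatim.
entry = """@article{alster1973aspect,
  title={An Aspect of 'Enmerkar and the Lord of Aratta'},
  author={Alster, Bendt},
  journal={Revue d'Assyriologie et d'archéologie orientale},
  volume={67},
  number={2},
  pages={101--110},
  year={1973}
}"""

# Pull out simple field={value} pairs (no nested braces in this entry).
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))

# Fields that @article entries conventionally require.
required = ["author", "title", "journal", "year"]
missing = [f for f in required if f not in fields]
print("looks complete" if not missing else f"missing: {missing}")
```

This only checks presence, not correctness; whether the year and pages match the actual article still needs human eyes.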

🙂

Reacting to news posted today by @abcxyz @donwatkins @techhelpkb @johnew

---
While we were busy thinking things through – it appears that the NYC Department of Education is embarking on a fascinating experiment with its staff and students zdnet.com/article/chatgpt-was-

We will learn a lot from this.

For educators who are more convinced by the alternative, we have resources at the Sentient Syllabus Project sentientsyllabus.org

Just finished a number of updates to our resource files at the Sentient Syllabus Project – summaries posted at sentientsyllabus.substack.com/. In brief: new sections on "Critique and Improvement" and "Personalized Tutoring" in the Course Activities file; most writing went into "Understanding AI Issues": training data and process, the Sisyphean work of plagiarism detection (hint: the tools are unlikely to work); first perspectives on energy use, machine use for better training - and the resulting "Kurzweilian" acceleration.

ChatGPT's Achilles Heel

I just posted a discussion of ChatGPT's major weakness for the Sentient Syllabus Project: sentientsyllabus.substack.com/

It falls prey to its "Schrödinger Facts".

In a nutshell: ask the AI for sources – and then check whether they actually exist. They usually don't. Requests for specific, provable sources turn out to be ChatGPT's Achilles heel.

That's a big deal: (a) we can (and should) always require students to attribute ideas anyway, and chasing down the source of a ChatGPT claim may be more work than working from an actual source; (b) ChatGPT's information is not useless though – it can be very valuable for pointing in the right direction; (c) as we become masters of attribution, the collaboration benefits in other ways, and the end result is improved.

Find our resources on academia and the new AI at sentientsyllabus.substack.com and on sentientsyllabus.org

... and share this information with whoever might find it useful.

Three principles for the academy in the era of digital thought:

1: An AI cannot pass a course.

2: AI contributions must be attributed and true.

3: AI use should be open and documented.

I explain why in our latest post for the Sentient Syllabus Project. Have a look at sentientsyllabus.substack.com

The Sentient Syllabus Project
Charting a course for academia in the era of digital thought.
--

I've been really busy over the last few days rolling this out: the large-scale availability of literate AI has imploded the academy as we know it. We need to re-orient. How? I founded The Sentient Syllabus Project to share concrete, practicable advice. Check it out at sentientsyllabus.org and at sentientsyllabus.substack.com/

Most importantly: this includes text samples showing how to start re-thinking and adapting our winter-term syllabi by next week. This is needed _now_. Drop in, have a look, and if you know someone who might benefit, please share.
