The NYTimes (famous for publishing transphobia) often has really bad coverage of tech, but I appreciate this opinion piece by Reid Blackman:
https://www.nytimes.com/2023/02/23/opinion/microsoft-bing-ai-ethics.html
Bettina Wegner „Die Kinder des Fleischers“ https://youtube.com/watch?v=Pyf7cNJT8mA&feature=share
The Ministry of Education in NRW
has published a practical guide on dealing with #ChatGPT and similar tools. Its central message: "We would therefore like to ask you to engage openly and constructively with these new possibilities and to address them in class."
What I find additionally commendable: the guide is a genuinely helpful FAQ, and it comes with a PowerPoint presentation that can be used in staff meetings, a video lecture by the expert Doris Weßels,
and a Moodle course with teaching examples. I'd like to see this kind of support more often in the future! #FediLZ https://www.schulministerium.nrw/textgenerierende-ki
@Samuelmoore
it's the infrastructure, the infrastructure, the infrastructure
@MissingThePt I see what you did there. So: Yes!
“AI” hasn’t been “taught” anything. It has learned the patterns it produces from our output - the output you seem to think is nothing but ungrammatical garbage. It “knows” absolutely nothing about grammar beyond what is captured in the patterns in the language *we’ve* produced. And to express immediate skepticism that a person could produce writing that meets your standards of spelling and grammar is so gross, and entirely misses the point of my whole thread.
And I know that the students who will be hit hardest are the ones we already police the most: the ones we think "shouldn't" be able to produce clear, clean prose. The non-native speakers. The speakers of marginalized dialects. So I've been pushing against every suggestion that we should adopt these tools.
As a writing prof, I’ve been making this point as forcefully as I can in as many contexts as I can, because I have seen too many people uncritically accepting the hype - in both directions, about the capabilities of LLMs and about the abilities of automated detectors - and my biggest concern is that schools will decide to use automated detectors and put their students through “reverse Turing Tests”.
I've been experimenting with GPTZero, which claims to be able to identify whether text was produced by an LLM or a human.
Everything I know about how LLMs work and how humans produce language (I know a non-trivial amount about both, given my background in computational linguistics and psycholinguistics) tells me that you will never, ever be able to build something that can reliably distinguish between the two. And sure enough, GPTZero fails miserably. So many false positives.
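To make the false-positive problem concrete: detectors of this kind are widely reported to work by scoring how *predictable* a text is under some language model and flagging low scores as machine-generated. This is not GPTZero's actual implementation — just a minimal toy sketch, using a character-level unigram model and a hypothetical threshold, to show why plain, formulaic human prose gets flagged by construction:

```python
import math
from collections import Counter

def perplexity(text, corpus):
    """Per-character perplexity of `text` under a unigram character
    model estimated from `corpus`, with add-one smoothing.
    Lower perplexity = more predictable text."""
    counts = Counter(corpus)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for unseen characters
    log_prob = 0.0
    for ch in text:
        p = (counts.get(ch, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(text), 1))

def looks_machine_generated(text, corpus, threshold=20.0):
    # The detector's whole decision rule: "too predictable" -> flagged.
    # Any human whose writing happens to be plain and formulaic
    # lands below the threshold too -- a false positive by design.
    return perplexity(text, corpus) < threshold
```

The threshold is doing all the work here, and there is no threshold that separates "predictable because a model wrote it" from "predictable because a careful human wrote simple, conventional prose" — which is exactly the population of writers most likely to be policed.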
To future applicants to Maître de conférences/Professeur positions in French universities:
A new law passed on February 6, 2023, whose Article 29 makes a significant difference:
from now on, all administrative documents must be translated into French, as must the analytic presentation of the candidate's works, and all works, papers, and documents in a foreign language must be supplemented by a French abstract. Otherwise, the application will be declared inadmissible.
https://www.legifrance.gouv.fr/jorf/article_jo/JORFARTI000047183328
#2741 Wish Interpretation
"I wish for everything in the world. All the people, money, trees, etc." "Are you SURE you--" "And I want you to put it in my house."
https://xkcd.com/2741/
"the history and research of intelligent tutors show that using the right design to harness the power of #chatbots like #ChatGPT can make deeper, individualized #learning available to almost anyone."
#tutor #AI #education
https://theconversation.com/chatgpt-could-be-an-effective-and-affordable-tutor-198062
Writing Researcher and Computational Linguist | Lives in Vaud, Zurich, and Uckermark | «Isch no schön – hamers aber e chli grösser vorgstellt.» ("Nice enough, but we'd pictured it a bit bigger.")