Sitting in on this week's Future Trends Forum with @bryanalexander. The conversation on ChatGPT and education was so lively last week that Bryan scheduled a part two for today.

I believe you can jump right in here: shindig.com/login/event/chatgp.

Hearing first from Barry Burkett, head of Sikanai, a start-up developing tools to detect and prevent ghostwriting (by humans or machines). Details: sikanai.com/.

And second from Caroline Coward, a librarian at NASA JPL! Very cool job.

Caroline is pointing to the ethics of AI. How are these systems developed? What biases are built in? How can we know and respond to that?

Third on "stage" is @Readywriting, who has a fantastic collection of resources on AI and teaching on Zotero. zotero.org/groups/4888338/chat

Lee is imagining the impact of ChatGPT in language learning. "There's no way your English is that good!" could be quite an accusation. Or we could invite students to use these tools to improve their language learning.

Lee (@Readywriting) mentions students with ADHD, who sometimes struggle to get a project started. Maybe ChatGPT or similar tools can help these students get started?

Lots happening in the Future Trends Forum. I'm hitting my cognitive load with the Shindig chat *and* liveblogging here on Mastodon!

One activity I'd like to try with students: Have students ask ChatGPT about something (sports, a hobby) the student knows really well. This would, hopefully, help students appreciate what ChatGPT can and can't do for them.

Another activity from Matt Kaplan (UMich CRLT): Have students collaboratively annotate the output of ChatGPT in Google Docs or Hypothesis or some other annotation tool. #annotation

Lee (@Readywriting) notes that when the task is algorithmic in nature, the algorithms will eventually do it better.

If your ad copy, say, is basically Mad Libs, then this more sophisticated version of Mad Libs will write better ad copy than you.

In response, George @veletsianos: If higher ed is preparing students for formulaic thinking, writing, work, etc., then of course ChatGPT will do better. That's more a reflection of the shortcomings of higher ed than the ills of ChatGPT.

Next on stage: Brent Anders, institutional research at the American University of Armenia.

1, ChatGPT version 4 is coming, and it will be way better.

2, Do we need to change our definitions of plagiarism? Do we need students to note that they used ChatGPT or Grammarly as a co-author?

Caroline from NASA JPL points to new standards in open science, where the goal is to be transparent about your experimental design before you run the experiment. We might see similar transparency moves around the use of AI tools.

Brent: Why isn't this technology already in Microsoft Word? And do we need to cite Word when we use it in a publication?

Lee: It's like a bigger, badder Clippy!

Lee @Readywriting: If you can coach an AI tool into writing a really good something, then does a student's coaching of that AI tool possibly show that student's understanding and expertise?

Lee put it better than that, but that's the main idea. [Insert mind-blown emoji here.]

I really like thinking about ChatGPT as the audience for a student. That's a potentially great way to put the student in a different position re: authority on a subject.

This is a really useful framing for a whole set of student activities and assignments.

Matt Kaplan again, thinking about equity and access to tools that definitely aren't free: Would institutions ever consider institutional licenses for something like this to make it fully available when it becomes a for-purchase option?

How many of you teaching center people out there are planning events on AI and teaching this spring?

@derekbruff Some #FacDev #EdDev folks also chatted about the impact of #ChatGPT & #AI on #teaching on Wednesday. Here's a copy of the notes: docs.google.com/document/d/1lN
I added some resources I've come across, like the Think-Pair-ChatGPT-Pair-Share idea someone shared on Twitter.
Someone also asked ChatGPT to create an outline for a #faculty workshop on the topic. I noticed it didn't include anything about downsides or concerns 😉
#HigherEd @edutooters @academicchatter

@edutooters @academicchatter @derekbruff @dougholton my major concern is that AI writing will reduce incentives for writing to learn activities. Writing without artificial aids has great cognitive and metacognitive benefits.


@grabe @edutooters @academicchatter @derekbruff @dougholton
Agreed, which is why (IMO) we should rethink writing and integrate it into teaching rather than having its role be assessment alone. Of course, that is something I was taught at rather progressive undergraduate and graduate education schools.
