#Humata is a tool that will answer questions about a PDF you upload. That may be superbly useful ... cf. https://qoto.org/@boris_steipe/109841683922034368
Yes, I just read that - it's a very useful exposition of what really happened in these "exams". Unfortunately, that doesn't mean ChatGPT is _not_ able to pass most of our exams. Because, disturbingly often, it does. And if that were not the case, we wouldn't need to be worried about it.
The reality is: we need to rethink assessment, and to do that we need to be clear about our teaching objectives. I write about that here: https://sentientsyllabus.substack.com/p/how-much-is-too-much
But @melaniemitchell 's article is important for our attempts to separate the signal from the noise.
#SentientSyllabus #ChatGPT #HigherEd #AI #Education #Syllabus
.@melaniemitchell wrote an excellent post about testing #ChatGPT on exams: https://aiguide.substack.com/p/did-chatgpt-really-pass-graduate. Another question is whether the consequences of using #AI systems like ChatGPT may be similar to previous observations of the 'irony of #automation' -- see the attached page: James Reason, #Human #Error. For example, anecdotal evidence suggests that people without a good sense of orientation perform worse after relying on the navigation system in their car for a few years.
I have a mantra that I throw around for that: it's having your computer think _for_ you, not _with_ you.
I disagree.
That's only true when your premise is that it knows enough. But that also implies you give up, and concede that a computer can think _for_ you. And that the value of whatever you do has now dropped to zero.
When your premise is that you can do better, the AI's baseline is your starting point. Not "knowing enough to challenge it" is not an option. So you start poking the arguments and taking them apart. And you can actually ask it for help when necessary - that's when you get it to think _with_ you.
Using computers to think with us, or for us ... that's what it boils down to.
It hasn't sunk in yet, but we learned last week that the whole discourse of #LLM critique since November has been aiming behind the ball. #Bing and Google Search are now going to compete on factual accuracy. Meanwhile, the brainstorming function is already integrated into Word if you use the Edge browser.
Image: #ChatGPT integration into Office365.
@TedUnderwood
Exactly ... that's where the money is for MS. As of June 2022, revenue shares – Office 365: 32%; Azure: 40%; Windows: 28%; Ads: zero.
... and this is why Google is looking at data integration - not search. Because that is where their money is.
https://sentientsyllabus.substack.com/p/reading-between-the-lines
Ah - I missed the iconic citation. So does that mean you don't believe that? Or you do? 🤔
But it's not plagiarism.
🙂
It's a fantastic sparring partner, exactly _because_ it is so fluently mediocre. You'll get the vanilla response, and then you can challenge yourself to realize why this is not good, and how to go beyond that. I find it great to hone my own thinking - not to substitute for it.
Well, not the on-the-ground anecdotes ... but I'm trying to wrap my head around it in a principled way, and getting ready to completely rewrite my fall-term courses: here's the misconduct angle.
https://sentientsyllabus.substack.com/p/generated-misconduct
Actually - no.
I invite you to have a look at a more differentiated analysis. The topic is a bit important.
https://sentientsyllabus.substack.com/p/silicone-coauthors
Cheers - 🙂
Inspired by the latest piece by Justin Weinberg at the Daily Nous (@DailyNous) on good uses of #ChatGPT, I tried out #Humata, an LLM-powered PDF reading tool.
I am actually a bit excited: I uploaded a recent publication of ours, and I asked it typical questions like "what are the main points?", "how does this argument follow from that?" etc. The answers I got were mostly relevant, but somewhat obvious and generic, and often missed essential points and subtle implications.
Actually, that's exactly how you feel about what your reviewers have to say.
And this is so cool: we have a tool for pre-review! Is an argument misunderstood? Make it clearer. Was an implication not realized? Spell it out. Did a subtle thought get lost? Put it into a separate paragraph. Until you feel that even the algorithm gets it.
I think this is where the real applications are: whether as a sparring partner in a Socratic dialogue, or as a virtual reader – the AI is invaluable for helping you shape, hone, and improve your own thoughts. Not as a substitute for thinking.
--
https://dailynous.com/2023/02/08/how-academics-can-use-chatgpt/
Useful piece by Justin Weinberg at the Daily Nous (@DailyNous) on good uses of #ChatGPT for academics. I was not aware of readers that can summarize PDFs _and_ point to the source of particular summary points in the original. That's interesting.
https://dailynous.com/2023/02/08/how-academics-can-use-chatgpt/
Uri Gal wrote up a whole list of inaccuracies and outright lies. I am in touch with him, and with the editors in Australia ... they responded that they're looking into it – nothing so far, he said. I will post here what comes of it.
ChatGPT, Chatbots and Artificial Intelligence in Education - Ditch That Textbook https://ditchthattextbook.com/ai #AI #ChatGPT #EdTech #Edutooters | @edutooters @rickweinberg @tclarkeee
I prefer to see it as a challenge.
It's clear that the AI's capabilities will grow - but can't we always maintain an edge by standing on its shoulders? Seeking out those opportunities, and demonstrating their utility, is actually fascinating.
Take care!
My #ReadingBetweenTheLines of Google's live presentation in Paris yesterday.
It's not about a head-to-head between Bard and Bing at all. The battleground will be data, integration, and Augmented Reality everywhere. Also, note an interesting implicit take on copyright: "interpretation".
Looking at all this, I can't shake the feeling that we did not want AI-powered search after all, but ...
https://sentientsyllabus.substack.com/p/reading-between-the-lines
#SentientSyllabus #ChatGPT #HigherEd #AI #AR #AugmentedReality #Bard #Bing
The answer is probably: no. What other colleagues have advised is to have the student explain their arguments and responses, and then to confront them if their understanding is not consistent with authorship. If you can't get a "confession", however, the detectors' false-positive rates are just too high for proof. Then you can only fail them on the quality of the content.
I cover detection in my "misconduct" essay – but not very deeply. The tools I have seen are disappointing, and the technology is in flux.
I am always sorry to hear this is happening. It's so draining 😞
https://sentientsyllabus.substack.com/p/generated-misconduct
Lovely! But why forget #ChatGPT? It actually turns out to be a decent Up-Goer Five Text Author: Here is its explanation of transformer models.
"Computers can help people understand and make things from words. They look at many sets of a type of thing, like talking, and find ways the different parts go together. Then, when given new words, the computer can use these ways to make a new type of talking, like one that makes sense and answers a question. This helps with jobs like talking to someone in a different way or making new stories."
🙂
(It took about three prompts of coaxing - but it knew the concept, and was able to substitute forbidden words when they were blanked out. Fun.)
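For the technically inclined: the "ways the different parts go together" that ChatGPT describes above is, at its core, the attention mechanism. Here is a minimal sketch in plain NumPy – toy dimensions and illustrative names, not any particular library's API, and with the Q/K/V projections of a real transformer left out for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating, for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Scaled dot-product self-attention: every token scores every other
    # token, then returns a weighted blend of their vectors. (Real
    # transformers first project x into separate Q, K, V matrices;
    # that step is omitted here to keep the sketch minimal.)
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)       # (tokens, tokens) similarities
    weights = softmax(scores)           # each row sums to 1
    return weights @ x                  # mix of token vectors

# Toy "sentence": 4 tokens, each represented by an 8-dim vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(self_attention(tokens).shape)     # (4, 8)
```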