There has been a hiatus in my Sentient Syllabus writing while I was lecturing and thinking things through.
But I have just posted an analysis on the #Sutskever / #Altman schism at #OpenAI and hope you find it enlightening.
Enjoy!
#ChatGPT #GPT4 #HigherEd #AI #AGI #ASI #generativeAI #Bostrom #AIEthics #Education #University #Academia
Part 2 of the #SentientSyllabus course redesign project has been posted: Academic visions.
Before we actually look at contents and course details, we embark on a substantial visioning exercise. Thinking of academia in the future: how will the nature of work change, and what does that mean for the economy? How will our course materials respond? How will learning itself change? And how will our roles as instructors change?
We extrapolate from current trends to derive the contours of a reimagined academia. One of the interesting conclusions is that although we will be using more technology, that technology will become more transparent. That's a simple consequence of natural language instructions.
Finally we consider different pedagogical frameworks – it turns out that #KnowledgeBuilding has some interesting ideas to contribute, for example the concept of "Epistemic Artefacts" ...
It's getting interesting. Have a look
https://sentientsyllabus.substack.com/p/starting-over-2
#ChatGPT #GPT4 #HigherEd #AI #generativeAI #Education #University #Academia #CourseDesign #Syllabus
I just got word that Turnitin has allowed our institution to opt out of the AI detection features they were forcing on customers until January, when they planned to start charging for the feature.
I've heard that any institution can opt out, but you have to request to be put on the "suppression list".
They are not advertising this - you have to ask for it. And they are providing no research or evidence on the accuracy of their AI detection.
How to use AI to do practical stuff: A new guide https://oneusefulthing.substack.com/p/how-to-use-ai-to-do-practical-stuff #AI #ChatGPT #Bing (excellent guide)
It's not quite right to say that #GenerativeAI will transform the #university. It is we who change it – as faculty, as learners, as administrators. In the Sentient Syllabus Project, we are setting out to establish best-practice for such change, and we start with a single course.
https://sentientsyllabus.substack.com/p/starting-over-1
We have been writing analyses of the impact of AI on #Academia for several months now, looking at questions of #authorship, #AcademicIntegrity, and personalized instruction. Now we start putting this into practice. This new post maps out a course redesign which will proceed in steps until early July. There are many new developments and visions to integrate with longstanding experience, but the July date will also give others time to build on the results for their own fall term courses.
Actually, we plan to work on two courses - one in #STEM and one in the #humanities.
Welcome to join us along the way.
🙂
#Sentientsyllabus #ChatGPT #GPT4 #HigherEd #AI #Education #CourseDesign #Syllabus
Today's insight on #ArtificialIntelligence from
#ChatGPT #GPT4 #AI #MachineLearning #DeepLearning #NeuralNetworks #BigData
'Produce JavaScript code that creates a random graphical image that looks like a painting of Kandinsky' #GPT4 #ChatGPT #AIart #ai #mastoart #generativeart
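For the curious: here is a minimal sketch of the kind of program that prompt asks for - not GPT-4's actual output. It emits a random "Kandinsky-style" composition as an SVG string (circles, crossing lines, tilted rectangles on a cream background), so it runs in plain Node without a browser canvas. All names, the palette, and the shape mix are my own illustrative choices; the PRNG is the well-known mulberry32 so the "random" painting is reproducible.

```javascript
// Seeded PRNG (mulberry32) so the same seed always yields the same painting.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function randomKandinskySVG(seed = 42, width = 400, height = 300) {
  const rnd = mulberry32(seed);
  const palette = ["#e63946", "#f1c40f", "#2a9d8f", "#264653", "#e76f51"];
  const pick = (arr) => arr[Math.floor(rnd() * arr.length)];
  const shapes = [];
  for (let i = 0; i < 20; i++) {
    const kind = rnd();
    if (kind < 0.4) {
      // Bold circles - a Kandinsky staple.
      shapes.push(`<circle cx="${(rnd() * width).toFixed(1)}" cy="${(rnd() * height).toFixed(1)}" r="${(5 + rnd() * 40).toFixed(1)}" fill="${pick(palette)}" fill-opacity="0.8"/>`);
    } else if (kind < 0.7) {
      // Straight crossing lines.
      shapes.push(`<line x1="${(rnd() * width).toFixed(1)}" y1="${(rnd() * height).toFixed(1)}" x2="${(rnd() * width).toFixed(1)}" y2="${(rnd() * height).toFixed(1)}" stroke="${pick(palette)}" stroke-width="${(1 + rnd() * 4).toFixed(1)}"/>`);
    } else {
      // Tilted translucent rectangles.
      shapes.push(`<rect x="${(rnd() * width).toFixed(1)}" y="${(rnd() * height).toFixed(1)}" width="${(10 + rnd() * 60).toFixed(1)}" height="${(10 + rnd() * 60).toFixed(1)}" fill="${pick(palette)}" fill-opacity="0.6" transform="rotate(${(rnd() * 90).toFixed(1)})"/>`);
    }
  }
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
         `<rect width="100%" height="100%" fill="#f4f1ea"/>${shapes.join("")}</svg>`;
}

// Print the start of one composition; save the full string as a .svg file to view it.
console.log(randomKandinskySVG(42).slice(0, 60) + "...");
```

Writing the full return value to a file and opening it in a browser shows the composition; changing the seed gives a new one.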
https://arxiv.org/pdf/2303.12712.pdf
#ChatGPT exhibited a modest yet comparable affordance boundary at the scale of human body size, implying that a human-like body schema may emerge through exposure to linguistic materials alone https://www.biorxiv.org/content/10.1101/2023.03.20.533336v1
Alberto Romero, who writes some of the most knowledgeable and thoughtful commentary on AI, has just posted "GPT-4: The Bitterer Lesson".
https://thealgorithmicbridge.substack.com/p/gpt-4-the-bitterer-lesson
Originally an idea credited to Richard Sutton, the "Bitter Lesson" is that "humans have contributed little to the best AI systems we have built".
And with each iteration of GPT-X that contribution is becoming less and less. From bitter, to bitterer.
Worth a read.
In case you read the Guardian article about our #ChatGPT paper, I’d like to clarify that we didn’t actually ‘hoodwink’ anyone! The paper states clearly which parts were written by us and which by ChatGPT. That said, we have enjoyed people’s surprise when they get to the big reveal! https://www.theguardian.com/technology/2023/mar/19/ai-makes-plagiarism-harder-to-detect-argue-academics-in-paper-written-by-chatbot Do read the paper: https://www.tandfonline.com/doi/abs/10.1080/14703297.2023.2190148
We are at a pivotal time in history. Text, images, sound, and video cannot be trusted any more, because they can be faked perfectly. At the same time, trust in institutions is declining – including trust in scientific institutions. Time for education to step up. This should be our wake-up call. #ChatGPT
New post for #SentientSyllabus, first impressions of #GPT4 via the #ChatGPT interface. Focus on consequences for #academia.
https://sentientsyllabus.substack.com/p/becoming-fibrous-gpt-4
Includes a summary of #OpenAI's live stream, first experiments with improved introspection, a well-done text problem, and perspectives on the new features.
What's important for #HigherED?
(1) GPT-4 is good. Really. We still need to pursue an "An AI cannot pass the course" policy – but this has now become quite a bit harder. I am _not_ talking about making lectures AI-proof, I am talking about how we can teach students to surpass that level. This is a challenge.
(2) GPT-4 is good. That means it's also more fun to work with. Collaborative problem solving and tutoring are going to make for some very effective learning.
(3) GPT-4 is good. The (coming) ability to include persistent prompts ("system messages") is a game changer. This is not widely recognized, but it is a step to solve the #alignment problem, replacing authority with individual control.
There. I've said it three times... Have a look at the article, share and boost if original content about AI and academia is useful to you.
🙂
OMG, I’m having so much fun having a conversation in #Chinese with #ChatGPT acting as my Chinese teacher! 🤩
I had completely given up practicing Chinese these last 2 years. But yesterday, despite being rusty as F, I thought that maybe I should give it another go through #GPT4.
I am *not* disappointed.
#learnChinese #newParadigm
I just gave #chatgpt (the new version) 51 multiple choice questions from my midterm exam in #family #sociology and it got 49 of them right, or 96%. (The two it got wrong are debatable.) We covered historical demographic trends, sociological theories, and definitions of race and gender concepts. I'll give some examples after my students take the test. Last year the students averaged 80%.
I'm completely fascinated by GPT-4.
In the live demo just now, within less than half an hour, the developer had GPT-4
- summarize a lengthy document (new context-window size: 32,000 tokens!)
- write a Python Discord bot from scratch, run it in a Jupyter notebook, debug it by simply pasting error messages into the GPT-4 dialogue, update its knowledge base by pasting the module documentation right from the website ... and get a working bot!
- use the bot to read images and interpret them in detail
- build a working website prototype complete with JS and CSS from a hand-drawn sketch
- and ingest 16 pages of tax code to perform a tax return calculation.
-----
This is amazing.
#ChatGPT #GPT4 #generativeAI #Sentientsyllabus #HigherEducation #Academia
Here's a fun thing I just discovered tonight.
If you accidentally commit your OpenAI API key to a public GitHub repo, they will rotate the key for you automatically and notify you about the change.
This happened less than a minute after I committed the key.
How cool is that? :-)
Stanford University finetuned the smallest #LLaMA model (7B) and says that #Alpaca (hehe) performs as well as #OpenAI's text-davinci-003 model (essentially the basis for #ChatGPT). No model weights are available though.
My guess … we are days away from someone else following exactly the same (well-documented) process and then releasing weights for all to use. On my M1 Pro MacBook Pro, the untuned 7B quantised model needs less than 5 GB RAM and processes 10 tokens per sec. 😎
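That RAM figure is easy to sanity-check with back-of-envelope arithmetic. A sketch, assuming straight 4-bit weights (the real quantization format also stores per-block scale factors, and inference adds runtime buffers, which is why the actual footprint is somewhat higher but still under 5 GB):

```javascript
// Rough memory estimate for a 4-bit-quantized 7B-parameter model.
// Illustrative only - not exact accounting for any particular runtime.
const params = 7e9;          // 7 billion weights
const bitsPerWeight = 4;     // 4-bit quantization
const weightBytes = params * bitsPerWeight / 8;  // 3.5e9 bytes
const weightGiB = weightBytes / 2 ** 30;         // bytes -> GiB
console.log(weightGiB.toFixed(2) + " GiB");      // ≈ 3.26 GiB of headroom under 5 GB
```

Compare that with the unquantized model: at 16-bit weights the same calculation gives about 13 GiB, which would not fit comfortably on most laptops.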