Oh that was FAST fast.
GPT-2 output detectors work for GPT-3 and ChatGPT, which means people who do stuff like Turnitin plagiarism checks have a tool for this fast-moving new frontier.
Personally, I do a choose-your-own-adventure/create-your-own-assignment model for most of my non-intro classes at this point, and frankly I'd be inclined to make this into an assignment in its own right. It could look something like:
"Generate a GPTChat output on [topic(s)], then expand on and correct the output with specific references and citations from class readings and lectures. Turn in the prompt, the original, and your corrections as your full submission."
Reframe it like that and it helps them think about what the GPT platforms are and do, and then you can integrate that into the whole of the semester's work, rather than making it an arms race of plagiarism and "Gotcha" tools.
Guess I probably should have appended this one as a reply here. My bad.
https://ourislandgeorgia.net/@Wolven/109480176205212773
Every wildly swinging hot take on the "AI" art and GPT situation goes fucking everywhere, while the few nuanced takes I've seen struggle to make it around.
This shit's neither harmless nor inevitable, AND it doesn't have to be made in the harmful ways corporations will tend to make it. Algorithmic applications are primarily created from within, and in service of, hegemonic, carceral, capitalist profit motives, meaning they act as force multipliers/accelerants on the worst depredations of those systems. That goes for art and language as much as it goes for housing and policing.
Neither tech nor play is a neutral category, and these tools COULD be used to help a lot of people. But until we get the capitalist shit off of them and place meaningful regulation (that is, regs specifically designed to safeguard the most marginalized and vulnerable) around these frameworks, they're going to keep doing a lot of harm, too.
LLMs are trained on data that is both unethically sourced AND prejudicially biased in its content, and they operate by means of structures that require vast amounts of natural resources. But they COULD and SHOULD be made differently.
"AI" art tools can &do help people who either never could or can't any longer do art in more traditional modes create things they feel meaningfully close to. But they're also being trained on pieces by living artists scraped without their knowledge, let alone their consent or credit
I'll say it again (said it before, here: https://sinews.siam.org/Details-Page/the-ethics-of-artificial-intelligence-generated-art): The public domain exists, and it would have been a truly trivial matter for the people who created "AI" art tools to train them, and update them, on public domain works, a corpus that updates literally every year. But that's not what we have here.
GPT checkers are apparently already being deployed to try to catch GPT cheaters, but, again (https://twitter.com/Wolven/status/1599987850405371904): Why be in an adversarial stance with your students when you could use the thing to actually, y'know, teach them how to be critical of the thing? Additionally, the ChatGPT checkers seem to have problems with original text written by neurodivergent individuals (https://kolektiva.social/@FractalEcho/109480097279253524) and with other text in general (https://www.aiweirdness.com/writing-like-a-robot/), so, like many automated plagiarism checkers and online proctoring software, their deployment directly endangers disabled and otherwise marginalized students in our classes.
Uncritical use of either GPT or "AI" art tools, or of their currently proposed remedies, does real harm, because the tools are built out of and play into extant harmful structures of exploitation and marginalization. But these things can be engaged and built drastically differently.
But in order to get the regulations and strictures in place to ensure that they ARE built differently, we have to be fully honest and nuanced about what these "AI" systems are and what they do, and then we have to push very hard AGAINST the horrible shit they're built in and of.
So it seems like a lot of people don't know about the "Choose Your Own Adventure" assignment model. I use two different variations, and they're both pretty straightforward, actually:
Variation (a): You create a grading model with a set number of points, or a full 100-percent calculation, then you create a range of potential assignments that can be chosen from and combined to reach those points/that percentage.
Variation (b) is the "study guide" model, wherein you instruct the students to complete a creative project that would help them study AND help them if they needed to communicate the material to other people; then you leave the framework COMPLETELY open to them, and let them give you what they got.
You can combine these by folding a "Create Your Own" option into variation (a).
I've gotten D&D campaigns, tarot decks, books of poetry, all kinds of stuff. A lot of people fall back on pre-made game models you can find online, but last semester I even had students write webpages and code up apps and executable scripts (yes i read the code, before attempting to run it).
And if you write the prompts for variation (a) correctly, then even if the students don't choose those options, they'll learn something from them anyway (there's a rough sketch of the variation (a) points bookkeeping below).
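For anyone who wants the points side of variation (a) made concrete, here's a minimal sketch in Python of the bookkeeping. The assignment names and point values are made-up examples, not my actual rubric, and you obviously don't need code to run this model; it's just the budget logic made explicit.

```python
# Minimal sketch of the variation (a) points-budget model.
# Assignment names and point values are hypothetical examples.

COURSE_TOTAL = 100  # points needed for full credit

# A menu of possible assignments, each worth a set number of points.
ASSIGNMENT_MENU = {
    "GPT output critique & correction": 25,
    "Annotated bibliography": 20,
    "Podcast episode": 30,
    "Tarot deck keyed to course concepts": 30,
    "D&D campaign built on course readings": 40,
    "Create your own (instructor-approved)": 35,
}

def check_plan(chosen: list[str]) -> None:
    """Report whether a student's chosen combination reaches the course total."""
    missing = [name for name in chosen if name not in ASSIGNMENT_MENU]
    if missing:
        raise ValueError(f"Not on the menu: {missing}")
    earned = sum(ASSIGNMENT_MENU[name] for name in chosen)
    print(f"Chosen: {chosen}")
    print(f"Points: {earned} / {COURSE_TOTAL}")
    if earned < COURSE_TOTAL:
        print(f"Short by {COURSE_TOTAL - earned}; pick more options.")
    else:
        print("Plan meets the course total.")

if __name__ == "__main__":
    check_plan([
        "GPT output critique & correction",
        "Podcast episode",
        "D&D campaign built on course readings",
    ])
```

The point the sketch makes is just that any combination clearing the total counts as a full path through the course; the actual menu, values, and total are whatever your course needs.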
Due to gross ethical mismanagement by OpenAI, I'm removing this experimentation option from the syllabus, and I'll be replacing it with something else. Details TK.
https://ourislandgeorgia.net/@Wolven/109667073653671712
@Wolven it’s probably naive to believe anything a large corporation says publicly; the layoffs in particular are not likely to be tied to a pre-existing investment in OpenAI but rather, imho, more likely to be a coordinated layoff campaign designed to keep salaries low. So any worker activity, I think, is more likely than “investing in AI” as a credible reason …
@tonic Layoffs were happening around the sector as a means to safeguard profits, long before msft announced their OpenAI partnership, but the movement and automation potential *of* OpenAI's tools made those layoffs easier to sell and justify.
The "pivot to ai" was always on msft's docket (you can look at their research funding and publications over the past 5 years to see that), but openai's rising star and the broader tech sector layoffs gave them good cover to fire teams they likely wanted to fire anyway.
"Naïve." Mm. Have a good one.