Oh that was FAST fast.

GPT-2 output detectors work for GPT-3 and ChatGPT, which means people who do stuff like Turnitin plagiarism checks have a tool for this fast-moving new frontier.

Personally, I do a choose-your-own-adventure/create-your-own-assignment model for most of my non-intro classes at this point, and frankly I'd be inclined to make this into an assignment in its own right. It could look something like:

"Generate a GPTChat output on [topic(s)], then expand on and correct the output with specific references and citations from class readings and lectures. Turn in the prompt, the original, and your corrections as your full submission."

Reframed like that, it helps them think about what the GPT platforms are and do, and then you can integrate that into the whole of the semester's work, rather than making it an arms race of plagiarism and "gotcha" tools.

wandering.shop/@janellecshane/

Every wild swinging hot take on the "AI" art and GPT situation goes fucking everywhere, while the few nuanced takes I've seen struggle to make it around.

This shit's neither harmless nor inevitable, AND it doesn't have to be built in the harmful ways corporations will tend toward building it. Algorithmic applications are primarily created from within, and in service of, hegemonic, carceral, capitalist profit motives, meaning they act as force multipliers and accelerants for the worst depredations of those motives. That goes for art and language as much as it goes for housing and policing.

Neither tech nor play are neutral categories, and these tools COULD be used to help a lot of people. But until we get the capitalist shit off of them and place meaningful regulation (regs specifically designed to safeguard the most marginalized and vulnerable) around those frames, they're going to keep doing a lot of harm, too.

LLMs are trained on data that is both unethically sourced AND prejudicially biased in its content, and they operate by means of structures that require vast amounts of natural resources. But they COULD and SHOULD be made differently.

"AI" art tools can &do help people who either never could or can't any longer do art in more traditional modes create things they feel meaningfully close to. But they're also being trained on pieces by living artists scraped without their knowledge, let alone their consent or credit

I'll say it again (said it before here sinews.siam.org/Details-Page/t): The public domain exists, and it would have been a truly trivial matter for the people who created "AI" art tools to train them and update them on public domain works (they update literally every year). But that's not what we have here.

GPT checkers are apparently already being deployed to try to catch GPT cheaters, but, again (twitter.com/Wolven/status/1599): Why be in an adversarial stance with your students when you could use the thing to actually, y'know, teach them how to be critical of the thing? Additionally, the ChatGPT checkers seem to have problems with original text written by neurodivergent individuals (kolektiva.social/@FractalEcho/) and with other text in general (aiweirdness.com/writing-like-a), so, like many automated plagiarism checkers and online proctoring tools, their deployment directly endangers disabled and otherwise marginalized students in our classes.

Uncritical use of either GPT or "AI" art tools, or their currently proposed remedies, does real harm, because the tools are built out of and play into extant harmful structures of exploitation and marginalization. But these things can be engaged with and built drastically differently.

But in order to get the regulations and strictures in place to ensure that they ARE built differently, we have to be fully honest and nuanced about what these "AI" systems are and what they do, and then we have to push very hard AGAINST the horrible shit they're built in and of.

The ChatGPT assignment I proposed at the top of this thread looks like this, this semester:

So it seems like a lot of people don't know about the "Choose Your Own Adventure" assignment model. I use two different variations, and they're both pretty straightforward, actually:

a) You create a grading model with a set number of points or a full 100-percent scale, then you create a range of potential assignments which can be chosen from and combined to reach those points/that percentage.

Variation (b) is the "study guide" model, wherein you instruct the students to complete a creative project which would help them study AND help them if they needed to communicate the material to other people; then you leave the framework COMPLETELY open to them, and let them give you what they've got.

You can combine these by folding a "Create Your Own" option into variation (a).

I've gotten D&D campaigns, tarot decks, books of poetry, all kinds of stuff. A lot of people fall back on pre-made game models you can find online, but last semester I even had students write webpages and code up apps and executable scripts (yes i read the code, before attempting to run it).

And if you write the prompts for Variation (a) correctly, then even if the students don't choose those options, they'll learn something from them.

Honestly, even if you just give them a view into the sheer range of possibilities for Variation (b), the students can learn something from the prompts and the assignments, no matter what.

(I learned a variant of this model from Ashley Shew, roughly 5 and a half years ago now, and I've loved it ever since.)

Due to gross ethical mismanagement by OpenAI, I'm removing this experimentation option from the syllabus, and I'll be replacing it with something else. Details TK.
ourislandgeorgia.net/@Wolven/1

This gross ethical mismanagement by OpenAI right here: ourislandgeorgia.net/@Wolven/1

Add this in with Microsoft announcing their "increased investment in partnership" with OpenAI literally days after MSFT fired roughly 11,000 people, a move MSFT specifically said they were making as a means to cut costs and save revenue to funnel into "AI" research; so that is just exactly what this *is*.

MSFT fired a BUNCH of their in-house "AI" people (among thousands of others) because a "partnership" where they farm out applications and capabilities research to OpenAI was cheaper and less time- and resource-intensive.

Put all of those moves together, and you have the recipe for some really bad shit about to go down, and I don't want my students contributing to the refinement data and models of companies that do and condone shit like that.

Also? I don't want to hear any more about anyone's Skynet/Black Mirror/I, Robot fantasy terrors, because those ONCE AGAIN miss the much more crucial "MASSIVE MULTIBILLION-$ CORP which literally JUST demonstrated how little it cares about humans is buddying up with a grossly exploitative 'AI' firm" angle.


@Wolven It's probably naive to believe anything a large corporation says publicly; the layoffs in particular are not likely to be tied to a pre-existing investment in OpenAI, but rather, imho, more likely to be a coordinated layoff campaign designed to keep salaries low. So any worker activity, I think, is more likely than "investing in AI" as a credible reason …


@tonic Layoffs were happening around the sector as a means to safeguard profits long before MSFT announced their OpenAI partnership, but the movement and automation potential *of* OpenAI's tools made those layoffs easier to sell and justify.

The "pivot to ai" was always on msft's docket (you can look at their research funding and publications over the past 5 years to see that), but openai's rising star and the broader tech sector layoffs gave them good cover to fire teams they likely wanted to fire anyway.

"Naïve." Mm. Have a good one.
