From my friend David Gewirtz at @zdnet:

zdnet.com/article/can-ai-detec

What clinches the ChatGPT-ness of an essay or a passage of text, for me, is the absolute lack of any literary motivation. Whatever writes a paragraph appears to me to be fitting phrases into a framework. Its selection of phrases may have come from any number of other published items on the same subject, but its only criterion for choosing from that pool appears to be semantic. It has a phraseology it's trying to fulfill, and it will pull from the grab bag until it finds the segment that fills it.

I can imagine an algorithmic procedure for fitting the pieces of a jigsaw puzzle together that works much the same way. Brute-force selection at high speed may eventually produce a solved puzzle. The thing is, even though the puzzle is solved, the finished picture won't reveal any trace of the fact that the algorithm was never really trying to solve it. No, it may never even have referenced the cover photograph on the box.
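To make the analogy concrete, here is a minimal Python sketch of the kind of brute-force fitting I have in mind. The piece dictionaries and the edge encoding are invented for illustration; the point is that nothing in the procedure ever consults the picture on the box.

from itertools import permutations

def edges_match(left_piece, right_piece):
    # Two pieces "fit" if the right edge of one is the complement of the
    # left edge of the other. The integer edge codes are made up here.
    return left_piece["right_edge"] == -right_piece["left_edge"]

def brute_force_row(pieces):
    # Try every ordering until one happens to fit end to end.
    # No notion of the final image is consulted at any point.
    for ordering in permutations(pieces):
        if all(edges_match(a, b) for a, b in zip(ordering, ordering[1:])):
            return ordering  # "solved", yet nothing here ever tried to solve
    return None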

By comparison, ChatGPT's solved puzzle, if you will, leaves a trace. There's a complete absence of literary motivation - the feeling we get as readers that the writer is invested in our absorbing and believing what they're saying. What's transparent here is the pedantic, one-foot-in-front-of-the-other methodology that characterizes every sentence it manufactures.

That, in the end, is the key to detecting ChatGPT's signature in its product. There's nobody home. If a professor or teacher inspires students to write, either the students' confidence in their ability to fulfill expectations, or their unease in doing so, will be apparent from the work itself. Both motivations produce a lack of homogeneity - a certain welcome discontinuity in the paragraph structure.

I submit we can rely on that as our signal of originality and authenticity.
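If you wanted to turn that discontinuity into something measurable, one crude sketch - purely illustrative, not a proven detector - is to look at how much sentence length varies within a passage. Evenly metered prose scores low; the welcome discontinuity scores higher.

import re
import statistics

def sentence_lengths(text):
    # Naive split on terminal punctuation; counts words per sentence.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    # Coefficient of variation of sentence length: values near zero
    # suggest homogeneity; higher values suggest the discontinuity
    # described above. A toy heuristic, nothing more.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)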
