
Generative AI, such as ChatGPT, may be better viewed as a generator of hypotheses, where testing leads either to corroboration or to truthiness.

> The glitch seems to be a linear consequence of the fact that so-called Large-Language Models are about predicting what _sounds right_, based on its huge data sets. As a commenter put it in an already-months-old post about the fake citations problem: “It’s a language model, and not a knowledge model.”

> In other words, this is an application for _sounding like an expert_, not for _being an expert_ — which is just so, so emblematic of our whole moment, right? Instead of an engine of reliable knowledge, Silicon Valley has unleashed something that gives everyone the power to fake it like Elizabeth Holmes.

"We Asked ChatGPT About Art Theory. It Led Us Down a Rabbit Hole So Perplexing We Had to Ask Hal Foster for a Reality Check" | Ben Davis | March 2, 2023 at news-artnet-com.cdn.ampproject

Truthiness was coined by Stephen Colbert in 2005 and was legitimated as a dictionary entry by 2010.

> ... _truth_ just wasn’t “dumb enough.” “I wanted a silly word that would feel wrong in your mouth,” he said.

> What he was driving at wasn’t _truth_ anyway, but a mere approximation of it — something _truthish_ or _truthy_, unburdened by the factual. And so, in a flash of inspiration, _truthiness_ was born. [....]

> Five years later, _truthiness_ has proved to be no _bushlips_. It has even entered the latest edition of the New Oxford American Dictionary, published earlier this year, with Colbert explicitly credited in the etymology.

"Truthiness" | Ben Zimmer | The New York Times Magazine | October 13, 2010, cached at archive.is/lkEMX , original at nytimes.com/2010/10/17/magazin
