Men’s Journal (which is apparently a real thing published alongside Sports Illustrated) published an AI-generated article that contained 18 errors and unsubstantiated claims passed off as facts.
The article was on low testosterone, of all things, which is a topic already rife with misinformation.
https://futurism.com/neoscope/magazine-mens-journal-errors-ai-health-article
Google's public release of ChatGPT competitor "Bard" is going well. https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/
Was this NYTimes article written by Yann LeCun? Or one of Meta's chatbots? Not the biggest fan of ChatGPT, but the idea that Meta's Galactica was equivalent to ChatGPT and the former failed due to antipathy to Meta is just nonsense. It failed because it didn't perform! https://www.nytimes.com/2023/02/07/technology/meta-artificial-intelligence-chatgpt.html
A look into our enshittified LLM (Large Language Model) search future. My wife and I were curious about the etymology of 'snob', so we googled it and got this LLM-summarized answer from Google's LLM-"enhanced" search engine:
"Where does the word snob originate from?"
"The word snob is said to have arisen from the custom of writing “s. nob.”, that is, 'sine nobilitate'"
The problem is that the source (Merriam-Webster) used this as an example of what they called a "spurious etymology" - a fake answer. The LLM ignored that caveat and presented the spurious etymology as fact. Someone who casually referenced this would walk away misinformed.
Can you imagine this happening with, say, medical software? I can and it's not good.
Microsoft just released a demo of BioGPT-Large, which they describe as "a domain-specific generative model pre-trained on large-scale biomedical literature, has achieved human parity, outperformed other general and scientific LLMs, and could empower biologists in various scenarios of scientific discovery."
Here's the response to the first question that I asked: @ct_bergstrom @emilymbender
It's not just the engagement-maximizing algorithms....
--
I wrote a thread over at post.news with this title, describing a new preprint about the sources of online toxicity. For those who want to read the formatted version there, here's the link: https://post.news/article/2KwklDyJQ7mRkNlrvRdeUh70Ej2
For those who want to read it here, I'll recreate it below.
Interesting idea for detecting generative-AI content (e.g., ChatGPT output): generate multiple rephrases of a passage and compare how the model scores them.
It only works, though, if you have access to the original model.
https://twitter.com/_eric_mitchell_/status/1618820358043475969?s=20
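To make the idea above concrete, here's a minimal sketch of a perturbation-based detection score in the spirit of that thread: score the original passage under the model, score several rephrasings, and flag the passage if its score stands out above the rephrasings' average (model-generated text tends to sit near a local maximum of the model's own likelihood). The `log_prob` function below is a hypothetical toy stand-in, not a real model; in practice you would need token log-likelihoods from the actual generating model, which is exactly why access to it matters.

```python
from statistics import mean

def log_prob(text: str) -> float:
    # Hypothetical stand-in for a language model's log-likelihood.
    # This toy scorer just "prefers" one exact phrase; a real detector
    # would sum token log-probabilities from the original model.
    return -abs(len(text) - len("the quick brown fox")) - text.count("!")

def detection_score(original: str, rephrasings: list[str]) -> float:
    # Large positive score => the original is scored notably higher
    # than its rephrasings, suggesting it was model-generated.
    return log_prob(original) - mean(log_prob(r) for r in rephrasings)
```

Usage with the toy scorer: `detection_score("the quick brown fox", ["the speedy brown fox!", "a quick brown fox!!"])` comes out positive, because the "model" rates the original above its perturbed variants.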
If The Last of Us is making you curious about the real Cordyceps… https://www.nytimes.com/2019/10/24/science/ant-zombies-fungus.html
If you've ever spent any time in Ann Arbor (or even if you haven't) this is a fun, if marginally condescending, article if you're a certain age. https://www.nytimes.com/2023/01/12/style/ann-arbor-geezer-happy-hour.html
Lolsob. One of my worst experiences as a server was a huge group Sunday after church who left a Bible verse that looked like a folded $20. $2.13 an hour and they took up my whole damn section. https://www.theonion.com/new-square-feature-allows-customers-to-tip-with-bible-q-1849855066
Unprofessional data wrangler and Mastodon’s official fact checker. Older and crankier than you are.