@ct_bergstrom Great shots but hope you have insulated waders.
Wharton professor Ethan Mollick has an interesting piece on how easy it was for him to create an entirely-synthetic video of him lecturing, using AI text, voice, and video generation tools.
Men’s Journal (which is apparently a real thing published alongside Sports Illustrated) published an AI-generated article that contained 18 errors and unsubstantiated claims passed on as facts.
The article was on low testosterone, of all things, which is a topic already rife with misinformation
https://futurism.com/neoscope/magazine-mens-journal-errors-ai-health-article
@Riedl Another fun one where Google incorrectly parsed a two column table. Our LLM future is looking just great!
@Riedl I ran into a similar uncorrected Google summarization error recently where the summary was the opposite of the article's intent. https://qoto.org/@twitskeptic/109821555544485858
@timbray Don't miss your chance to scoop up some prime number satoshis while you still can!
Google's public release of ChatGPT competitor "Bard" is going well. https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/
@ct_bergstrom My first try was the "Attention is all you need" paper from Google (65k citations) that introduced transformers for NLP. Results:
@ct_bergstrom Let's start a contest to see who can come up with the worst names for new academic search engines!
My first lame attempt for an astronomy only site: "Astrocite".
@ct_bergstrom In wait and see mode. A little skeptical due to lack of transparency on funding and business structure and the controversy around Bouzy (which may or not be real).
Was this NYTimes article written by Yann LeCun? Or one of Meta's chatbots? Not the biggest fan of ChatGPT, but the idea that Meta's Galactica was equivalent to ChatGPT and the former failed due to antipathy to Meta is just nonsense. It failed because it didn't perform! https://www.nytimes.com/2023/02/07/technology/meta-artificial-intelligence-chatgpt.html
A look into our enshittified LLM (Large Language Model) search future. My wife and I were curious about the etymology of 'snob', so we googled it and got this LLM summarized answer from Google's LLM "enhanced" search engine:
"Where does the word snob originate from?"
"The word snob is said to have arisen from the custom of writing “s. nob.”, that is, 'sine nobilitate'"
The problem is that the source (Merriam-Webster) used this as an example of what they called a "spurious etymology" - a fake answer. The LLM ignored this and hallucinated the incorrect answer. Someone who casually referenced this would walk away misinformed.
Can you imagine this happening with, say, medical software? I can and it's not good.
Microsoft just released a demo of BigGPT-Large, which they define as "a domain-specific generative model pre-trained on large-scale biomedical literature, has achieved human parity, outperformed other general and scientific LLMs, and could empower biologists in various scenarios of scientific discovery."
Here's the response to the first question that I asked: @ct_bergstrom @emilymbender
@ct_bergstrom @emilymbender @timnitGebru Yann also seems to believe that there's a linear relationship between the non normalized # of neurons in a brain and overall intelligence. The arrogance of some of these AI researchers is remarkable.
@ct_bergstrom @carrickdb @pluralistic I've been worrying about enshittification of chatbots like ChatGPT. I think it's terrifying once you start thinking about the ad placement possibilities for LLMs due to the ability to finetune models to subtly increase bias towards products in a way that's difficult to detect.
Even more disturbing given the rumors that Microsoft is rewriting Bing to integrate ChatGPT.
Unprofessional data wrangler and Mastodon’s official fact checker. Older and crankier than you are.