
@nailbomb3 @lzvolk Saw him in Seattle in the mid 90s. Just walked out on the stage by himself and played. Spoke one, maybe two sentences to the audience. It was great.

@tb Bull market for anodyne corporate PR statements!

Joe boosted

@mcdanlj @JamesGleick That might be true if Elon was forcing everyone to abandon SMS based 2FA, but Twitter Blue users are exempt. Like everything Elon, it makes no sense.

@ct_bergstrom FWIW, since I'm in a contentious mood this afternoon, I disagree that hallucinations are necessarily pathological. A couple of counterexamples are sleep-onset/awakening hallucinations, or children who are convinced that they have an imaginary friend.

That doesn't mean that I'm happy about applying "hallucinations" to LLMs since it incorrectly implies that there's a consciousness in there that's misperceiving the external state of the world. And "bullshitting" also implies intent, if not to deceive, then to enhance status or impress.

Another term is required. Blathering is the best I can do but I'm hoping that someone else can do better.

@ct_bergstrom
Perhaps Hanlon's razor should be applied here.

As you mentioned, "hallucinate" is an established AI term. Engineers inexperienced in public communications use the jargon when speaking to reporters and the reporters parrot it back in their headlines without understanding the context. This is not unusual - another example that comes to mind is "heritability" in biology.

So, at least for now, I'll give them the benefit of the doubt.

Depression assessment instrument 

@ct_bergstrom Interesting - now it refuses to play. Maybe ChatGPT's Mom is monitoring Mastodon.

@ct_bergstrom Meanwhile, it looks like a right-wing organization can train a GPT-3 model to spew out right-wing dogma for only $300. Truly scary stuff. twitter.com/davidrozado/status

Joe boosted

So, the National Weather Service alone launches a thousand balloons a WEEK toward the stratosphere. Someone better tell the Air Force. nytimes.com/2023/02/14/science

@Riedl Panic over falling behind is overcoming sensible consideration of the reputational risk from bad LLM summaries. Right out of the Elon school of management.

@ct_bergstrom @moultano Ugh, not small LLM - small transformer-based language model.

@ct_bergstrom @moultano I'm starting to have doubts about the idea that LLMs are "stochastic parrots" that can't generalize after watching a short talk from Francois Charton of Meta at NeurIPS 2022.

TL;DR - he trained a small LLM to learn how to diagonalize matrices using only triplets of the similarity transform. No hallucinations were observed.

The talk was "Leveraging Maths to Understand Transformers" neurips.cc/virtual/2022/worksh
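I don't know exactly how Charton built his dataset, but as a rough sketch of what "(M, P, D) similarity-transform triplets" could look like as training data, something along these lines (numpy, the restriction to symmetric matrices, and the helper name are my assumptions, not from the talk):

```python
import numpy as np

def make_triplet(n=5, rng=None):
    """Generate one (M, P, D) triplet with M = P @ D @ P.T,
    P orthogonal and D diagonal (the eigenvalues of M)."""
    rng = np.random.default_rng() if rng is None else rng
    A = rng.standard_normal((n, n))
    M = (A + A.T) / 2                     # random symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # M = eigvecs @ diag(eigvals) @ eigvecs.T
    return M, eigvecs, np.diag(eigvals)

# A sequence model would be trained to map an encoding of M to encodings
# of (P, D); here we just check the similarity-transform identity holds.
M, P, D = make_triplet()
assert np.allclose(P @ D @ P.T, M, atol=1e-8)
```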

Joe boosted

Hi #epitwitter #epiverse, how many #COVID deaths are we willing to accept per year? As the third year of the #pandemic comes to a close, 190,000 Americans will have died of COVID by Feb 28 2023. That's more than for all other infectious diseases combined. 1/

Joe boosted

@ct_bergstrom Great shots but hope you have insulated waders.

Joe boosted

Wharton professor Ethan Mollick has an interesting piece on how easy it was for him to create an entirely synthetic video of him lecturing, using AI text, voice, and video generation tools.

oneusefulthing.substack.com/p/

Joe boosted

Men’s Journal (which is apparently a real thing published alongside Sports Illustrated) published an AI-generated article that contained 18 errors and unsubstantiated claims passed off as facts.

The article was on low testosterone, of all things, which is a topic already rife with misinformation.

futurism.com/neoscope/magazine
