
@mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl Representative example, which I did *today*, so the “you’re using old tech!” excuse doesn’t hold up.

I asked ChatGPT.com to calculate the mass of one curie (i.e., the quantity of material producing 3.7 × 10^10 radioactive decays per second) of the commonly used radioactive isotope cobalt-60.

It produced some nicely formatted calculations that, in the end, appear to be correct. ChatGPT came up with 0.884 mg, the same as Wikipedia’s 884 micrograms on its page for the curie unit.
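If you want to check the arithmetic without a chatbot, the whole thing fits in a few lines. A minimal sketch in Python, using standard reference values for Co-60 (half-life about 5.2714 years, molar mass about 59.93 g/mol):

```python
import math

CURIE_BQ = 3.7e10        # one curie, by definition: decays per second
AVOGADRO = 6.02214e23    # atoms per mole

def curie_mass_grams(half_life_s: float, molar_mass_g_mol: float) -> float:
    """Mass whose activity is one curie: m = (A / lambda) * M / N_A."""
    decay_constant = math.log(2) / half_life_s   # lambda = ln 2 / t_half
    atoms = CURIE_BQ / decay_constant            # N = A / lambda
    return atoms * molar_mass_g_mol / AVOGADRO

# Co-60 reference values: half-life ~5.2714 years, molar mass ~59.93 g/mol
half_life_s = 5.2714 * 365.25 * 86400            # years -> seconds
print(f"{curie_mass_grams(half_life_s, 59.93) * 1e3:.3f} mg")  # -> 0.884 mg
```

The same function works for any isotope that actually exists; you just swap in its half-life and molar mass.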

It offered to do the same thing for another isotope.

I chose cobalt-14.

This doesn’t exist. And not because it’s really unstable and decays fast. It literally can’t exist. The atomic number of cobalt is 27, so every cobalt isotope, stable or otherwise, must have a mass number of at least 27. Anything with a mass number of 14 *is not cobalt*.
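The sanity check being skipped here is a single comparison. A minimal sketch, with a lookup table covering just carbon and cobalt:

```python
# An isotope's mass number (protons + neutrons) can never be smaller
# than its atomic number (the proton count alone).
ATOMIC_NUMBER = {"C": 6, "Co": 27}  # only the two elements discussed here

def isotope_is_possible(symbol: str, mass_number: int) -> bool:
    return mass_number >= ATOMIC_NUMBER[symbol]

print(isotope_is_possible("C", 14))   # True:  carbon-14 exists
print(isotope_is_possible("Co", 14))  # False: cobalt-14 cannot exist
print(isotope_is_possible("Co", 60))  # True:  cobalt-60 is real
```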

I was mimicking a plausible Gen Chem mix-up: a student who confused carbon-14 (a well-known and scientifically important isotope) with cobalt-whatever. Symbol C vs. Co. The sort of mistake people see (and make!) at that level all the time.

A chemistry teacher at any level would catch this, and explain what happened. Wikipedia doesn’t show cobalt-14 in its list of cobalt isotopes (it only lists ones that actually exist), so going there would also reveal the mistake.

ChatGPT? It just makes shit up. Invents a half-life (for an isotope that, just to remind you, *cannot exist*) and carries on like nothing strange has happened.

This is, quite literally, one of the worst possible responses to a request like this, and yet I see responses like this *all the freaking time*.

I know there is a lot for academics to study about the actual use and development of LLMs, but I need sociologists and historians of science to be writing books on the style and speed of research communication in this area. New vocabulary every day! Citing blog posts! Reviews of review articles!

Italy's Meloni doesn't seem to care much. She'd give away some Italian researchers if she could find a way to do it.

David Ho  
Yeah, US researchers earn at least 3x more than those in France. "French researchers have regularly raised the issue of the comparatively low salar...

#USpolitics #PopCulture #movies #film #dystopia

I never saw/read this Oct 2016 Cracked article on the looming dystopia...

(and I'm not sure it would have convinced me, but... hindsight etc. etc.)

(I mean, I thought the Tea Party had been a problem, and so it was. But only because it was a precursor)

cracked.com/blog/6-reasons-tru

Suggestion: if your support process starts with telling your customers they shouldn't - and thus cannot - trust your answers, you should, JUST PERHAPS, reevaluate.

#Miles #AI #Chatbot #wtf

Earlier: Everyone I know has a relationship with #guns. They’re ingrained in American culture—our movies, books, and politics. ... Over the last few decades, however, the moral weight and awful responsibility of these weapons have grown heavier. texasobserver.org/son-of-a-gun

#GunViolence #culture #Texas #ethics

Today's scoop: xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk's artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom-made for working with internal data from Musk's companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

GitGuardian's Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 distinct data sets.

"The credentials can be used to access the X.ai API with the identity of the user," GitGuardian wrote in an email explaining their findings to xAI. "The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04)."

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago -- on March 2. But as of April 30, when GitGuardian directly alerted xAI's security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
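For context on how scanners like GitGuardian's catch this at all: at its core, it is pattern-matching for credential-shaped strings in code. A minimal sketch of the idea in Python; the "xai-" prefix and minimum length are my assumptions for illustration, not a published key spec:

```python
import re
import sys
from pathlib import Path

# Credential-shaped pattern. The "xai-" prefix and length are assumed
# here for illustration only; real detectors use vetted per-vendor rules.
KEY_PATTERN = re.compile(r"\bxai-[A-Za-z0-9]{20,}\b")

def scan_tree(root: Path) -> None:
    """Flag lines that look like they contain an API key."""
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if KEY_PATTERN.search(line):
                print(f"{path}:{lineno}: possible API key")

if __name__ == "__main__":
    scan_tree(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("."))
```

Real scanners also walk the full git history rather than just the working tree, which matters here: a key deleted in a later commit remains recoverable from earlier ones until it is revoked.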

Read more: krebsonsecurity.com/2025/05/xa

i found these slides for a talk i gave in 2021 (when i had a very different research focus) and god damn i really did try to write an entire academic book in the space of one talk huh docs.google.com/presentation/d

Great news! Our paper submission deadline has been extended.

D-SAIL focuses on transformative curriculum design through the lens of data science, AI, and sustainable innovation in education, governance and law

Topics include:
- Innovative Pedagogical Frameworks
- AI & Data Analytics for Adaptive Learning
- Sustainability & Digitalisation in Education
- Human-AI Collaboration in Learning Design
- Interdisciplinary Curriculum Integration
- Ethical & Societal Implications in EdTech & Policy

1/2

AI doomers are also to blame for a historic labor shortage

"Nobel Prize winner Geoffrey Hinton said that machine learning would outperform radiologists within five years. That was eight years ago. Now, thanks in part to doomers, we’re facing a historic labor shortage."
newrepublic.com/article/187203

@techtakes

We are happy to announce that, as part of the #MetaLing project, we are also inviting Francesco Periti from #kuleuven. He will tell us about his work on #semanticChange with #LLMs. The event takes place online tomorrow at 14:30 CEST.
dllcm.unimi.it/it/modeling-sem

S for Stop
I for Investigate the source
F for Find better coverage
T for Trace the claim to its original context

@rao2z.bsky.social presents an extremely interesting evaluation of LLMs’ ability to reason. His team has been doing this research for a while, and with the emergence of Large Reasoning Models there is finally some notable progress.

His post on bsky: bsky.app/profile/rao2z.bsky.so
The preprint: arxiv.org/abs/2504.09762

Bloem has seen it before — the same pattern playing out in slow motion. “Asbestos,” he says. “Lead in gasoline. Tobacco. Every time, we acted decades after the damage was done.” The science existed. The evidence had accumulated. But the decision to intervene always lagged. “It’s not that we don’t know enough,” he adds. “It’s that the system is not built to listen when the answers are inconvenient.”

politico.eu/article/bas-bloem-

...and this is how German authorities address the problem:
aljazeera.com/news/2025/4/14/g

If you don't see the violation in the article, that's because there isn't much of one: they are worried about graffiti and a verbal offence, committed in an attempt to protest the explicitly declared refusal to acknowledge "the war crime of starvation as a method of warfare; and the crimes against humanity of murder, persecution, and other inhumane acts" (as per the ICC).


Microsoft is so desperate to find a use case for Copilot that it is pushing hard for a new pervasive "service" nobody wants, in order to achieve full-scale mass surveillance.

If you are still using Windows (or any other Microsoft product), remember it's never too late to switch.
arstechnica.com/security/2025/
