@mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl Representative example, which I did *today*, so the “you’re using old tech!” excuse doesn’t hold up.
I asked ChatGPT.com to calculate the mass of one curie (i.e., the amount producing 3.7 × 10^10 radioactive decays per second) of the commonly used radioactive isotope cobalt-60.
It produced some nicely formatted calculations that, in the end, appear to be correct. ChatGPT came up with 0.884 mg, the same as Wikipedia’s 884 micrograms on its page for the curie unit.
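For reference, here's the standard back-of-the-envelope version of that calculation as a Python sketch (my own, not ChatGPT's output; the half-life and molar mass are approximate textbook values):

```python
# Mass of 1 Ci of Co-60: N = A / lambda, lambda = ln(2) / half-life, mass = N * M / N_A
import math

CURIE_BQ = 3.7e10                           # 1 curie in decays per second (by definition)
HALF_LIFE_S = 5.2714 * 365.25 * 24 * 3600   # Co-60 half-life of ~5.27 years, in seconds
MOLAR_MASS_G_MOL = 59.93                    # approximate molar mass of Co-60
AVOGADRO = 6.02214076e23                    # atoms per mole

decay_constant = math.log(2) / HALF_LIFE_S  # decay probability per atom per second
atoms = CURIE_BQ / decay_constant           # how many atoms give 3.7e10 decays/s
mass_mg = atoms * MOLAR_MASS_G_MOL / AVOGADRO * 1e3

print(f"{mass_mg:.3f} mg")                  # ~0.884 mg, the 884 micrograms Wikipedia lists
```

Same ~0.88 mg either way; the point isn't that ChatGPT can't do this arithmetic, it's what it does next.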
It offered to do the same thing for another isotope.
I chose cobalt-14.
This doesn’t exist. And not because it’s really unstable and decays fast. It literally can’t exist. The atomic number of cobalt is 27, and a mass number counts protons plus neutrons, so every cobalt isotope, stable or otherwise, must have a mass number of at least 27. Anything with a mass number of 14 *is not cobalt*.
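The rule ChatGPT skipped is pure counting. An illustrative toy check (mine, nothing ChatGPT produced):

```python
# A nuclide's mass number (protons + neutrons) can never be smaller than its
# atomic number (protons alone). Toy check for the two elements in this mixup.
ATOMIC_NUMBER = {"C": 6, "Co": 27}

def nuclide_is_possible(symbol: str, mass_number: int) -> bool:
    """False means the nuclide is impossible on counting grounds alone."""
    return mass_number >= ATOMIC_NUMBER[symbol]

print(nuclide_is_possible("Co", 60))  # True:  cobalt-60 is real
print(nuclide_is_possible("Co", 14))  # False: 14 nucleons can't include 27 protons
print(nuclide_is_possible("C", 14))   # True:  carbon-14, the isotope in the mixup
```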
I was mimicking a possible Gen Chem mixup: a student who confused carbon-14 (a well-known and scientifically important isotope) with cobalt-whatever, symbol C vs. Co. The sort of mistake people see (and make!) at that level all the time.
A chemistry teacher at any level would catch this, and explain what happened. Wikipedia doesn’t show cobalt-14 in its list of cobalt isotopes (it only lists ones that actually exist), so going there would also reveal the mistake.
ChatGPT? It just makes shit up. It invents a half-life (for an isotope that, just to remind you, *cannot exist*) and carries on as if nothing strange has happened.
This is, quite literally, one of the worst possible responses to a request like this, and yet I see responses like this *all the freaking time*.
Italy's Meloni doesn't seem to care much. She'd be giving away some Italian researchers if she could find a way to do it.
#USpolitics #PopCulture #movies #film #dystopia
I never saw/read this Oct 2016 Cracked article on the looming dystopia...
(and I'm not sure it would have convinced me, but... hindsight etc. etc.)
(I mean, I thought the Tea Party had been a problem, and so it was. But only because it was a precursor)
https://www.cracked.com/blog/6-reasons-trumps-rise-that-no-one-talks-about
Earlier: Everyone I know has a relationship with #guns. They’re ingrained in American culture—our movies, books, and politics. ... Over the last few decades, however, the moral weight and awful responsibility of these weapons has grown heavier. https://www.texasobserver.org/son-of-a-gun-legacy-unwanted-firearms/
Today's scoop: xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs
An employee at Elon Musk's artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk's companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.
GitGuardian's Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 distinct data sets.
"The credentials can be used to access the X.ai API with the identity of the user," GitGuardian wrote in an email explaining their findings to xAI. "The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04)."
Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago -- on March 2. But as of April 30, when GitGuardian directly alerted xAI's security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
Read more: https://krebsonsecurity.com/2025/05/xai-dev-leaks-api-key-for-private-spacex-tesla-llms/
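For anyone curious what scanners like GitGuardian are automating here: at its simplest, it's a walk over a repository's full git history looking for long, high-entropy strings. A rough sketch follows; the regex and entropy threshold are my own illustrative guesses, not GitGuardian's actual rules, and real scanners add key-format fingerprints and live validity checks.

```python
# Rough sketch of history-wide secret scanning: flag long, high-entropy tokens
# added anywhere in a repo's git history (later-deleted files included).
import math
import re
import subprocess
from collections import Counter

TOKEN = re.compile(r"[A-Za-z0-9_\-]{32,}")   # "looks like an API key" heuristic

def entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

def scan(repo: str = ".", threshold: float = 4.0):
    # `git log -p --all` replays every change ever committed, so a key that was
    # later deleted from the working tree still shows up here.
    history = subprocess.run(["git", "-C", repo, "log", "-p", "--all"],
                             capture_output=True, text=True).stdout
    for line in history.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            for token in TOKEN.findall(line):
                if entropy(token) > threshold:
                    yield token

if __name__ == "__main__":
    for hit in set(scan()):
        print("possible secret:", hit[:8] + "...")
```

The article's real point stands regardless: deleting the repository doesn't revoke the key, which is why it was still usable nearly two months after the first alert.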
Use this 5-word response to shut down rude behavior—it’s like ‘holding up a mirror,’ says public speaking expert
Keeping Students Engaged During Long Class Periods https://www.edutopia.org/article/breaking-up-long-class-periods-maintain-students-focus?utm_content=linkpos4&utm_campaign=weekly-2025-04-30&utm_medium=email&utm_source=edu-newsletter
i found these slides for a talk i gave in 2021 (when i had a very different research focus) and god damn i really did try to write an entire academic book in the space of one talk huh https://docs.google.com/presentation/d/1xYo_Eb0bwH6wlBUrlJpWoHFbTH0nSeyDRF-BEIVGg8Y/edit?usp=sharing
Great news! Our paper submission deadline was extended
D-SAIL focuses on transformative curriculum design through the lens of data science, AI, and sustainable innovation in education, governance, and law.
Topics include:
- Innovative Pedagogical Frameworks
- AI & Data Analytics for Adaptive Learning
- Sustainability & Digitalisation in Education
- Human-AI Collaboration in Learning Design
- Interdisciplinary Curriculum Integration
- Ethical & Societal Implications in EdTech & Policy
1/2
AI doomers are also to blame for a historic labor shortage
"Nobel Prize winner Geoffrey Hinton said that machine learning would outperform radiologists within five years. That was eight years ago. Now, thanks in part to doomers, we’re facing a historic labor shortage."
https://newrepublic.com/article/187203/ai-radiology-geoffrey-hinton-nobel-prediction
We are happy to announce that, as part of the #MetaLing project, we are also inviting Francesco Periti from #kuleuven. He will tell us about his work on #semanticChange with #LLMs. The event takes place online tomorrow at 14:30 CEST.
https://dllcm.unimi.it/it/modeling-semantic-change-through-large-language-models
@rao2z.bsky.social presents an extremely interesting evaluation of LLMs' ability to reason. His team has been doing this research for a while, but now, with the emergence of Large Reasoning Models, there is finally some notable progress.
His post on bsky: https://bsky.app/profile/rao2z.bsky.social/post/3lmplm3ogkk2l
The preprint: https://arxiv.org/abs/2504.09762
Bloem has seen it before — the same pattern playing out in slow motion. “Asbestos,” he says. “Lead in gasoline. Tobacco. Every time, we acted decades after the damage was done.” The science existed. The evidence had accumulated. But the decision to intervene always lagged. “It’s not that we don’t know enough,” he adds. “It’s that the system is not built to listen when the answers are inconvenient.”
https://www.politico.eu/article/bas-bloem-parkinsons-pesticides-mptp-glyphosate-paraquat/
...and this is how German authorities address the problem:
https://www.aljazeera.com/news/2025/4/14/germany-orders-deportation-of-pro-palestine-activists-what-you-should-know
If you don't see the violation in the article, you're not missing anything: they are worried about graffiti and a verbal offence committed in an attempt to protest the explicitly declared refusal to acknowledge "the war crime of starvation as a method of warfare; and the crimes against humanity of murder, persecution, and other inhumane acts" (as per the ICC).
This is what so many European governments are failing to condemn. At this stage it becomes complicity. https://www.nbcnews.com/news/world/children-gaza-hunger-aid-blockade-israel-rcna201085
#Gaza #Palestine #Israel #WarCrimes
Microsoft are so desperate to find a use case for Copilot that they are pushing hard for a new pervasive "service" nobody wants in order to achieve full-scale mass surveillance.
If you are still using Windows (or any other Microsoft product), remember it's never too late to switch.
https://arstechnica.com/security/2025/04/microsoft-is-putting-privacy-endangering-recall-back-into-windows-11/
Studying how people interact, in the past (#CulturalAnalytics) and today (#EdTech #Crowdsourcing). Researcher at @IslabUnimi, University of Milan. Bulgarian activist for legal reform with @pravosadiezv. I use dedicated accounts for different languages.
My profile is searchable with https://www.tootfinder.ch/