
"Cardinal George of Chicago, of happy memory, was one of my great mentors, and he said: ‘Look, until America goes into political decline, there won’t be an American pope.’ And his point was, if America is kind of running the world politically, culturally, economically, they don’t want America running the world religiously. So, I think there’s some truth to that, that we’re such a superpower and so dominant, they don’t wanna give us, also, control over the church."
thebulwark.com/p/seven-things-

Ahead of #MothersDay, Buddhism offers a complex lens.

The Buddha saw family as an obstacle to enlightenment—yet he honored two mothers: Maya, who gave birth to him, and Mahaprajapati, the aunt who raised him and became the first Buddhist nun.

Their stories still shape Buddhist views on motherhood and gender.
theconversation.com/rememberin

@histodons #Histodons #Buddhism

Amazon publishes Generative AI Adoption Index and the results are something! And by “something” I mean “annoying”.

I don’t know how seriously I should take the numbers, because it’s Amazon after all and they want to make money with this crap, but on the other hand they surveyed “senior IT decision-makers”… and my opinion on that crowd isn’t the highest either.

Highlights:

- Prioritizing spending on GenAI over spending on security. Yes, that is not going to cause problems at all. I do not see how this could go wrong.
- The junk chart about “job roles with generative AI skills as a requirement”. What the fuck does that even mean, what is the skill? Do job interviews now include a section where you have to demonstrate promptfondling “skills”? (Also, the scale of the horizontal axis is wrong, but maybe no one noticed because they were so dazzled by the bars being suitcases for some reason.)
- Cherry on top: one box to the left they list “limited understanding of generative AI skilling needs” as a barrier for “generative AI training”. So yeah…
- “CAIO”. I hate that I just learned that.

AI is incapable of distinguishing reality from absolute made-up bullshit.

Remember this next time you think trusting the top result from Google’s “AI summary” is an appropriate way of being a responsible member of society.

A report by IBM explains part of the hype around GenAI:

25% of AI initiatives have delivered expected ROI over the last few years

52% of CEOs say their organization is realizing value from generative AI investments beyond cost reduction

For 64% of CEOs, the risk of falling behind drives investment in some technologies before they have a clear understanding of the value they bring, but only 37% say it is better to be “fast and wrong” than “right and slow”

newsroom.ibm.com/2025-05-06-ib
@techtakes

A piece of history we cannot afford to forget… nor to allow to be rewritten.

During the decade-long conflicts, the major powers dithered as Serb militias carried out their brutal campaigns of ethnic cleansing. Guardian reporters became more passionate and more outspoken in their condemnation, attracting praise and criticism.

Just a confirmation that Tesla boycotts work. New Republic reports that Musk lost a quarter of his fortune with the stock price drop. The current price is being artificially pumped up at all costs, but this can't last for another quarter as sales are plummeting.
newrepublic.com/post/194568/el

@mzedp @necedema @Wyatt_H_Knott @futurebird @grammargirl Representative example, which I did *today*, so the “you’re using old tech!” excuse doesn’t hold up.

I asked ChatGPT.com to calculate the mass of one curie (i.e., the amount producing 3.7 × 10^10 radioactive decays per second) of the commonly used radioactive isotope cobalt-60.

It produced some nicely formatted calculations that, in the end, appear to be correct. ChatGPT came up with 0.884 mg, the same as Wikipedia’s 884 micrograms on its page for the curie unit.
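For anyone who wants to check that figure themselves, here’s a rough back-of-the-envelope version of the calculation in Python (a sketch with rounded constants, not ChatGPT’s actual working; it assumes the commonly cited Co-60 half-life of about 5.27 years):

import math

CI_IN_BQ = 3.7e10                           # 1 curie is defined as 3.7e10 decays per second
N_AVOGADRO = 6.022e23                       # atoms per mole
HALF_LIFE_S = 5.2714 * 365.25 * 24 * 3600   # Co-60 half-life (about 5.27 years), in seconds
MOLAR_MASS_G = 59.9338                      # approximate molar mass of Co-60, g/mol

decay_constant = math.log(2) / HALF_LIFE_S  # lambda = ln 2 / half-life, per second
atoms = CI_IN_BQ / decay_constant           # activity A = lambda * N, so N = A / lambda
mass_g = atoms / N_AVOGADRO * MOLAR_MASS_G  # atoms -> moles -> grams

print(f"{mass_g * 1e6:.0f} micrograms")     # prints 884 micrograms, i.e. ~0.884 mg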

It offered to do the same thing for another isotope.

I chose cobalt-14.

This doesn’t exist. And not because it’s really unstable and decays fast. It literally can’t exist. The atomic number of cobalt is 27, so all its isotopes, stable or otherwise, must have a mass number of at least 27. Anything with a mass number of 14 *is not cobalt*.

I was mimicking a possible Gen Chem mixup: a student who confused carbon-14 (a well known and scientifically important isotope) with cobalt-whatever. The sort of mistake people see (and make!) at that level all the time. Symbol C vs. Co. Very typical Gen Chem sort of confusion.

A chemistry teacher at any level would catch this, and explain what happened. Wikipedia doesn’t show cobalt-14 in its list of cobalt isotopes (it only lists ones that actually exist), so going there would also reveal the mistake.
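To be concrete about how cheap that check is, here’s a minimal sketch (a hypothetical helper written for this post, not anything ChatGPT or Wikipedia actually runs): a nuclide’s mass number can never be smaller than its atomic number, so “Co-14” fails instantly.

ATOMIC_NUMBERS = {"C": 6, "Co": 27}   # just the two symbols at issue here

def check_nuclide(symbol: str, mass_number: int) -> None:
    # A nucleus contains at least as many nucleons as it has protons,
    # so mass number < atomic number is physically impossible.
    z = ATOMIC_NUMBERS[symbol]
    if mass_number < z:
        raise ValueError(f"{symbol}-{mass_number} cannot exist: "
                         f"mass number {mass_number} < atomic number {z}")

check_nuclide("C", 14)    # fine: carbon-14 is a real isotope
check_nuclide("Co", 14)   # raises ValueError: there is no cobalt-14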

ChatGPT? It just makes shit up. Invents a half-life (for an isotope that, just to remind you, *cannot exist*), and carries on like nothing strange has happened.

This is, quite literally, one of the worst possible responses to a request like this, and yet I see responses like this *all the freaking time*.

I know there is a lot for academics to study about the actual use and development of LLMs, but I need sociologists and historians of science to be writing books on the style and speed of research communication in this area. New vocabulary every day! Citing blog posts! Reviews of review articles!

Italy's Meloni doesn't seem to care much. She'd be giving away some Italian researchers if she could find a way to do it.

David Ho
Yeah, US researchers earn at least 3x more than those in France.
"French researchers have regularly raised the issue of the comparatively low salar...

#USpolitics #PopCulture #movies #film #dystopia

I never saw/read this Oct 2016 Cracked article on the looming dystopia...

(and I'm not sure it would have convinced me, but... hindsight etc. etc.)

(I mean, I thought the Tea Party had been a problem, and so it was. But only because it was a precursor)

cracked.com/blog/6-reasons-tru

Suggestion: if your support process starts with telling your customers they shouldn't - and thus cannot - trust your answers, you should, JUST PERHAPS, reevaluate.

#Miles #AI #Chatbot #wtf

Earlier: Everyone I know has a relationship with #guns. They’re ingrained in American culture—our movies, books, and politics. ... Over the last few decades, however, the moral weight and awful responsibility of these weapons has grown heavier. texasobserver.org/son-of-a-gun

#GunViolence #culture #Texas #ethics

Today's scoop: xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs

An employee at Elon Musk's artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk's companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.

GitGuardian's Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 distinct data sets.

"The credentials can be used to access the X.ai API with the identity of the user," GitGuardian wrote in an email explaining their findings to xAI. "The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04)."

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago -- on March 2. But as of April 30, when GitGuardian directly alerted xAI's security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

Read more: krebsonsecurity.com/2025/05/xa

i found these slides for a talk i gave in 2021 (when i had a very different research focus) and god damn i really did try to write an entire academic book in the space of one talk huh docs.google.com/presentation/d

Great news! Our paper submission deadline was extended

D-SAIL focuses on transformative curriculum design through the lens of data science, AI, and sustainable innovation in education, governance and law

Topics include:
- Innovative Pedagogical Frameworks
- AI & Data Analytics for Adaptive Learning
- Sustainability & Digitalisation in Education
- Human-AI Collaboration in Learning Design
- Interdisciplinary Curriculum Integration
- Ethical & Societal Implications in EdTech & Policy

1/2

AI doomers are also to blame for a historic labor shortage

"Nobel Prize winner Geoffrey Hinton said that machine learning would outperform radiologists within five years. That was eight years ago. Now, thanks in part to doomers, we’re facing a historic labor shortage."
newrepublic.com/article/187203

@techtakes

We are happy to announce that, as part of the #MetaLing project, we are also inviting Francesco Periti from #kuleuven. He will tell us about his work on #semanticChange with #LLMs. The event is taking place online tomorrow at 14:30 CEST.
dllcm.unimi.it/it/modeling-sem
