These are public posts tagged with #chatbots. You can interact with them if you have an account anywhere in the fediverse.
"AI – the ultimate bullshit machine – can produce a better 5PE than any student can, because the point of the 5PE isn't to be intellectually curious or rigorous, it's to produce a standardized output that can be analyzed using a standardized rubric.
I've been writing YA novels and doing school visits for long enough to cement my understanding that kids are actually pretty darned clever. They don't graduate from high school thinking that their mastery of the 5PE is in any way good or useful, or that they're learning about literature by making five marginal observations per page when they read a book.
Given all this, why wouldn't you ask an AI to do your homework? That homework is already the revenge of Goodhart's Law, a target that has ruined its metric. Your homework performance says nothing useful about your mastery of the subject, so why not let the AI write it. Hell, if you're a smart, motivated kid, then letting the AI write your bullshit 5PEs might give you time to write something good.
Teachers aren't to blame here. They have to teach to the test, or they will fail their students (literally, because they will have to assign a failing grade to them, and figuratively, because a student who gets a failing grade will face all kinds of punishments). Teachers' unions – who consistently fight against standardization and in favor of their members discretion to practice their educational skills based on kids' individual needs – are the best hope we have:"
https://pluralistic.net/2025/08/11/five-paragraph-essay/#targets-r-us
#AI #GenerativeAI #LLMs #Chatbots #Schools #Education #Grading #GoodhartsLaw
Is there an article on the vagueness in estimating the #water usage of #AI #chatbots?
Washington Post, Sept'24:
> A bottle of water per email: the hidden environmental costs of using AI chatbots¹
Sean Goedecke, Oct'24:
> Talking to ChatGPT costs 5ml of water, not 500ml²
#SamAltman, June'25:
> [T]he average #ChatGPT query uses… about 0.000085 gallons of #water; roughly 1/15 of a teaspoon.³
Christian Bonawandt, Aug'25:
> ChatGPT consumes between 10 to 100 milliliters for every prompt⁴
“I do wonder what that does if you have this sycophantic, compliant [bot] who never disagrees with you, [is] never bored, never tired, always happy to endlessly listen to your problems, always subservient, [and] cannot refuse consent. What does that do to the way we interact with other humans, especially for a new generation of people who are going to be socialised with this technology?”—Dr Raphaël Millière
AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn >
#chatbots #therapy #mental_wellbeing #praise #health #mental_health #moral_panic #cognitive_vulnerability #insanity #compliance #AI #LLM #interactions #non_verbal #communication #technology #news #tech #psychosis #neurotic_behaviours #artificial_intelligence #echo_chamber #intuition #scepticism #inadequate_substitute
Users may be led down conspiracy theory rabbit holes…
The Guardianhttps://www.europesays.com/2321440/ OpenAI Scrambles to Update GPT-5 After Users Revolt #algorithms #america #Chatbots #ChatGPT #MentalHealth #OpenAI #SamAltman #UnitedStates #UnitedStatesOfAmerica #US #USNews #USA #USANews
If this case fails, we are at the beginning of the end. Sorry, but I’m dead serious!
In a court case where a chatbot encouraged a 14-boy to kill himself, Character.ai claims in their defence that the output of their algorithmic text generator is free speech protected by the 1st amendment of the US constitution & therefore not liable.
If #chatbots get protection of #freespeech, it’s over. We’d take it up the bot if we have an issue?
Ridiculous and makes me sick!
https://open.spotify.com/episode/6rHAIEbWBvvv7bnlA2dxaM
#ai
Your Undivided Attention · Episode
Spotifyhttps://www.europesays.com/2319333/ A Kentucky Town Experimented With AI. The Results Were Stunning #AI #ArtificialIntelligence #Chatbots #ChatGPT #Government #Politics
https://www.europesays.com/uk/333982/ The World Will Enter a 15-Year AI Dystopia in 2027, Former Google Exec Says #AI #ArtificialIntelligence #chatbots #Google #GoogleGemini #Technology #UK #UnitedKingdom
https://www.europesays.com/2318527/ The World Will Enter a 15-Year AI Dystopia in 2027, Former Google Exec Says #AI #ArtificialIntelligence #Chatbots #google #GoogleGemini
The world is hurtling towards an inevitable AI dystopia…
EUROPE SAYS"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html
#AI #GenerativeAI #ChatGPT #Delusions #MentalHealth #Hallucinations #Chatbots
Over 21 days of talking with ChatGPT, an otherwise…
The New York Times"To borrow some technical terminology from the philosopher Harry Frankfurt, “ChatGPT is bullshit.” Paraphrasing Frankfurt, a liar cares about the truth and wants you to believe something different, whereas the bullshitter is utterly indifferent to the truth. Donald Trump is a bullshitter, as is Elon Musk when he makes grandiose claims about the future capabilities of his products. And so is Sam Altman and the LLMs that power OpenAI’s chatbots. Again, all ChatGPT ever does is hallucinate — it’s just that sometimes these hallucinations happen to be accurate, though often they aren’t. (Hence, you should never, ever trust anything that ChatGPT tells you!)
My view, which I’ll elaborate in subsequent articles, is that LLMs aren’t the right architecture to get us to AGI, whatever the hell “AGI” means. (No one can agree on a definition — not even OpenAI in its own publications.) There’s still no good solution for the lingering problem of hallucinations, and the release of GPT-5 may very well hurt OpenAI’s reputation."
https://www.realtimetechpocalypse.com/p/gpt-5-is-by-far-the-best-ai-system
The long-hyped AI model is making some people who bought…
Realtime Techpocalypse Newsletterhttps://www.europesays.com/2316470/ Elon Musk Turns His AI Chatbot Into a Male Fantasy Engine #Chatbots #ElonMusk #Grok #Musk #SocialMedia
Sometimes humans are just too stupid and in those cases no chatbot in the world can help you... :-D
"A man gave himself bromism, a psychiatric disorder that has not been common for many decades, after asking ChatGPT for advice and accidentally poisoning himself, according to a case study published this week in the Annals of Internal Medicine.
In this case, a man showed up in an ER experiencing auditory and visual hallucinations and claiming that his neighbor was poisoning him. After attempting to escape and being treated for dehydration with fluids and electrolytes, the study reports, he was able to explain that he had put himself on a super-restrictive diet in which he attempted to completely eliminate salt. He had been replacing all the salt in his food with sodium bromide, a controlled substance that is often used as a dog anticonvulsant.
He said that this was based on information gathered from ChatGPT.
“After reading about the negative effects that sodium chloride, or table salt, has on one's health, he was surprised that he could only find literature related to reducing sodium from one's diet. Inspired by his history of studying nutrition in college, he decided to conduct a personal experiment to eliminate chloride from his diet,” the case study reads. “For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT, in which he had read that chloride can be swapped with bromide, though likely for other purposes, such as cleaning.”"
"For 3 months, he had replaced sodium chloride with…
404 Media"The core problem is that when people hear a new term they don’t spend any effort at all seeking for the original definition... they take a guess. If there’s an obvious (to them) definiton for the term they’ll jump straight to that and assume that’s what it means.
I thought prompt injection would be obvious—it’s named after SQL injection because it’s the same root problem, concatenating strings together.
It turns out not everyone is familiar with SQL injection, and so the obvious meaning to them was “when you inject a bad prompt into a chatbot”.
That’s not prompt injection, that’s jailbreaking. I wrote a post outlining the differences between the two. Nobody read that either.
The lethal trifecta Access to Private Data Ability to Externally Communicate Exposure to Untrusted Content
I should have learned not to bother trying to coin new terms.
... but I didn’t learn that lesson, so I’m trying again. This time I’ve coined the term the lethal trifecta.
I’m hoping this one will work better because it doesn’t have an obvious definition! If you hear this the unanswered question is “OK, but what are the three things?”—I’m hoping this will inspire people to run a search and find my description.""
https://simonwillison.net/2025/Aug/9/bay-area-ai/
#CyberSecurity #AI #GenerativeAI #LLMs #PromptInjection #LethalTrifecta #MCPs #AISafety #Chatbots
I gave a talk on Wednesday at the Bay Area AI Security…
Simon Willison’s WeblogIt's incredible that people can feed up to one million tokens (1 000 000) to LLMs and yet they still most of the time fail to take advantage of that enormous context window. No wonder people say that the output generated by LLMs is always crap... I mean, they're not great but at least they can manage to do a pretty good job - that is, only IF you teach them well... Beyond that, everyone has their own effort + time / results ratio.
"Engineers are finding out that writing, that long shunned soft skill, is now key to their efforts. In Claude Code: Best Practices for Agentic Coding, one of the key steps is creating a CLAUDE.md file that contains instructions and guidelines on how to develop the project, like which commands to run. But that’s only the beginning. Folks now suggest maintaining elaborate context folders.
A context curator, in this sense, is a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even focused on AI alone. Context is so much better than content (a much abused word that means little) because it’s tied to meaning. Context is situational, relevant, necessarily limited. AI needs context to shape its thoughts.
(...)
Tech writers become context writers when they put on the art gallery curator hat, eager to show visitors the way and help them understand what they’re seeing. It’s yet another hat, but that’s both the curse and the blessing of our craft: like bards in DnD, we’re the jacks of all trades that save the day (and the campaign)."
https://passo.uno/from-tech-writers-to-ai-context-curators/
#AI #GenerativeAI #LLMs #Chatbots #PromptEngineering #ContextWindows #TechnicalWriting #Programming #SoftwareDevelopment #DocsAsDevelopment
I’ve been noticing a trend among developers that use…
passo.unohttps://www.europesays.com/uk/329556/ Degrees used to open doors—now even grads with master’s degrees are sending 60 job applications a month to no luck #Applications #ArtificialIntelligence #bosses #Business #Careers #chatbots #coding #CollegesAndUniversities #ComputerScience #engineering #EntryLevel #GenZ #GraduateSchool #hiring #HumanResources #JobHunting #JobSeekers #jobs #Management #managers #MarkZuckerberg #meta #millennials #retention #Students #TalentAcquisition #UK #unemployment #UnitedKingdom
"I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video and my disclosures) and have been using GPT-5 as my daily-driver. It’s my new favorite model. It’s still an LLM—it’s not a dramatic departure from what we’ve had before—but it rarely screws up and generally feels competent or occasionally impressive at the kinds of things I like to use models for.
I’ve collected a lot of notes over the past two weeks, so I’ve decided to break them up into a series of posts. This first one will cover key characteristics of the models, how they are priced and what we can learn from the GPT-5 system card."
I’ve had preview access to the new GPT-5 model family…
Simon Willison’s Weblog