You may have heard that the Intergovernmental Panel on Climate Change - the #IPCC - released a new #climate report.
Here’s the most important take-home:
There are *still* so many feasible & effective opportunities to reduce greenhouse gas emissions & adapt to #climatechange available right now.
Climate solutions will improve #food, energy & water security, benefit global health, promote equity, conserve biodiversity & boost the economy.
Read more: https://www.ipcc.ch/
The following is an example of what I mean.
Yes, the term "hallucinate" has an established meaning as AI jargon. Loosely, and in the context of large language models (LLMs) such as GPT-3, it refers to situations in which the AI makes claims that were not in the training set and which have no basis in fact.
But I want to look at how this use of language in public communications perpetuates misunderstandings about AI and helps distance the tech firms that create these systems from the consequences of their failures.
A lesser issue is that in common language and in common understanding, as well as in medical science, a hallucination is a false sense impression that can lead to false beliefs about the world.
A large language model does not experience sense impressions, and does not have beliefs in the conventional sense. Using language that suggests otherwise serves only to encourage the sort of misconceptions about AI and consciousness that have littered the media space over the last few months in general and the last 24 hours in particular.
The bigger problem with this language is that the term "hallucination" refers to pathology. In medicine, a hallucination arises as a consequence of a malfunction in an organism's sensory and cognitive architecture. The "hallucinations" of LLMs are anything but pathology. Rather, they are an immediate consequence of the design philosophy and design decisions that go into the creation of such AIs. ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people—it's behaving exactly as it was designed to, making up linguistically plausible responses to dialogue, in the absence of any underlying knowledge model, and guessing when its training set offers nothing more specific.
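To make that concrete, here is a toy sketch of my own (a tiny bigram Markov chain, nothing like a production LLM in scale or architecture): it learns only which word tends to follow which in its training text, so it can fluently recombine that text into new sentences with no mechanism at all for checking whether they are true.

```python
import random

# A toy "language model": a bigram Markov chain that learns only which
# word tends to follow which. It has no knowledge model and no notion
# of truth -- just statistics over its training text.
corpus = ("the population of mars is unknown . "
          "the population of earth is eight billion . "
          "the capital of france is paris .").split()

# Bigram table: word -> list of words observed to follow it.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(start, length=10, seed=0):
    """Emit a fluent-looking word sequence by sampling the bigram table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Depending on the seed, this can happily emit plausible-but-false
# recombinations such as "the population of mars is eight billion".
print(generate("the", seed=3))
```

Every output is "plausible" relative to the training text, and nothing in the model distinguishes true recombinations from false ones. That is not the model malfunctioning; that is the model.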
I would go so far as to say that the choice of language—saying that AI chatbots are hallucinating—serves to shield their creators from culpability. "It's not that we deliberately created a system designed to package plausible but false claims in the form of trusted documents such as scientific papers and Wikipedia pages—it's just that despite our best efforts this system is still hallucinating a wee bit."
The concept of hallucinating AI brings to mind images of HAL struggling to sing Daisy Bell as Dave Bowman shuts him down in 2001: A Space Odyssey. No one programmed HAL to do any of the things he did in the movie's climax. It was pathology, malfunction, hallucination.
When AI chatbots flood the world with false facts confidently asserted, they're not breaking down, glitching out, or hallucinating. No, they're bullshitting. In our book on the subject, we describe bullshit as involving language intended to appear persuasive without regard to its actual truth or logical consistency. Harry Frankfurt, in his philosophy paper "On Bullshit", distinguishes between a liar, who knows the truth and tries to lead you in the opposite direction, and a bullshitter, who doesn't know and/or doesn't care about the truth one way or the other. (Frankfurt doesn't tell us what to think about someone who hallucinates and relays false beliefs, but it is very unlikely that he would consider such a person to be bullshitting.) Frankfurt's notion of bullshit aligns almost perfectly with what ChatGPT and its kind are generating. A large language model neither knows the factual validity of its output—there is no underlying knowledge model against which its text strings are compared—nor is it programmed to care.
Language matters, and it perhaps matters more than average when people are trying to describe and understand new situations and technologies beyond our previous experiences. Talking about LLMs that hallucinate not only perpetuates the inaccurate mythos around the capabilities of these models; it also suggests that with a bit more time and effort, tech companies will be able to create LLMs that don't suffer from these problems. And that is misleading. Large language models generate bullshit by design. There may be ways to develop AIs that don't do this, perhaps by welding LLMs to other forms of knowledge model, or perhaps by using some completely different approach. But for pure LLMs, the inaccuracies aren't pathological—they're intrinsic to the approach.
Established jargon or not, it's time for those who write for the public about AI and large language models to abandon the term "hallucinating". Call it what it is. Bullshitting, if you dare. Fabricating works too. Just use a verb that signals that when a chatbot tells you something false, it is doing exactly what it was programmed to do.
Revealed: #Exxon made ‘breathtakingly’ accurate #climate predictions in 1970s and 80s | #ExxonMobil | #TheGuardian
"The oil giant Exxon privately predicted #GlobalWarming correctly and skillfully only to then spend decades publicly rubbishing such #science in order to protect its core business, new #research has found."
https://www.theguardian.com/business/2023/jan/12/exxon-climate-change-global-warming-research
This lighthouse is SO UNIQUE! It's called Þrídrangar, which means "three rock pillars". It is located 4.5 miles (7.2 kilometres) off the southwest coast of Iceland, in the archipelago of Vestmannaeyjar, and is often described as the most isolated lighthouse in the world. The lighthouse was built there in 1939.
It's such an INCREDIBLE location for a lighthouse, perched on a rock in Iceland's wild surf. Originally, it was accessible only by scaling the rock on which it sits; since the construction of a helipad, it can now be reached by helicopter.
#photography #photo #photos #landscape #lighthouse #engineering #amazing #travel #world #wonders
"You're really asking the internet to name a probe going to Uranus?" https://futurism.com/probe-uranus-names-mistake
I cannot keep this to myself. There is a website (radio.garden) where you can listen to radio stations all over the world for free. No log in. No email address. Nothing.
When the site loads, you are looking at the globe. Slide the little white circle over the green dots (each green dot is a radio station) until you find one you like.
I have been listening to this station in the Netherlands and it absolutely slaps. I have no idea what they're saying but the music is fantastic.
We chatted on the podcast! About the huge and beautiful Intégrale Manchette-Tardi (noir in comics form, and gorgeous, with some rarities), about Rochette (more good comics), but also about two books (without pictures) by Julian Barnes. It's over here...
https://www.cfmradio.fr/gueules-cassees
Look up tonight! All the planets in our solar system are visible (some with the help of a telescope). Check your favorite stargazing site for more info!
Of course, you can look at Hubble’s planet images anytime, like this one of Saturn! For more: https://go.nasa.gov/3vqF7TI
#Hubble
Oh gosh this is hilarious https://archiveofourown.org/works/43303747/
Alan Turing was a mathematician & cryptographer who was a leading code-breaker in the team that decrypted Nazi Germany’s Enigma machine during WWII. He inspired modern computing & what became artificial intelligence.
Instead of being hailed as a genius & hero, Turing was convicted of "gross indecency" for being gay & forced to endure chemical castration. He died by suicide at 41 in 1954.
The British government didn’t apologize until 2009 & Queen Elizabeth II finally pardoned him in 2013. #history #science
When Neil Gaiman @neilhimself asked me to write a story for a speculative fiction anthology a dozen years ago, “Human Intelligence” was the result. On my public radio show and podcast Studio 360, we produced an audio adaptation that ran annually for years, and here it is again. Enjoy! And Merry Christmas! https://www.wnyc.org/story/112121-kurt-andersens-human-intelligence-a-holiday-tale/
And this brings me to today's curious coda. Last week, we lost a silver ring around the house or yard.
As I mentioned earlier, my crows don't tend to bring gifts. But this morning as I went out to give them their snack, there was the ring, plain as day, right in the spot where I feed them.
Now it's entirely possible that we lost it there and didn't notice for a few days. But it seems odd given how many times I've passed that spot since. I have my guesses, but I'll let you make up your mind.
Some of you may have seen Elon Musk's endorsement of Robert F. Kennedy Jr.'s crazy antivax conspiracy theories today.
Over at post.news, I just posted a long-form piece about this, and about how science education needs to adapt to online disinformation.
Please take a look. If you like it, boost it there or here or — if you dare — over on the birdsite.
Financial controller by day, tired daddy of 2 little monsters 👻 otherwise, and tireless dreamer in the little free time I have left.
I love #indiemusic and #SFFF, especially #fantastic.
I'm Belgian, and French is my mother tongue.