
Shift Happens (a book about keyboards) is closing in on $600,000 on Kickstarter, and we’ve set a $700,000 stretch goal! It adds an upgrade to the font (tier/add-on), and the already-committed third volume of extras goes from 100 pages to 160 and from black-only to full color! Read the update from @mwichary kickstarter.com/projects/mwich

Nearing the #2 spot among the most-funded non-fiction books in Kickstarter history, too (currently #3).

What I have seen so far of #Receptio's reaction to the accusations of #plagiarism and of creating misleading appearances (mostly on Twitter, published by a variety of accounts, and on academia.edu, published by Carla #Rossi) is not up to the task of defending the #reputation of a research institute and a researcher. #receptiogate [1/5]

Preparing for the semester is mainly finding out which workarounds for issues no longer work and coming up with new workarounds :-(

I wrote about and at large for and teaching. It’s a non-reviewed preprint of a journal paper to appear this summer (and will be revised as technology progresses…)

You can read it on @EdArXiv

edarxiv.org/da2rm/

Happy to get feedback and comments!

Featuring Goudy Old Style, (by Hannes von Döhren) and the citation style
@true_mxp created for my dissertation

Former Federal President #Gauck nails it. „Surely the lesson of history, in which we were terrible murderers, cannot be to watch other murderers in silence.“

A glance at the calendar: the spring semester starts on Monday. Just how unprepared can a semester actually begin?

Me: Yes

An anxious moment when I discovered this just-published title this morning. So close to my book's area. But it looks good.

Chown, Eric, and Fernando Nascimento, Meaningful Technologies: How Digital Metaphors Change the Way We Think and Live (Lever Press, 2023) <doi.org/10.3998/mpub.12668201>

@philosophy #SocialEpistemology #Epistemology

I work on discourse, social networks, misinfo, etc. Work in this area typically assumes that correcting misinfo will lead to better outcomes (decisions, etc.), at least in the long term.

But imagine a scenario like this: dangerous misinfo is being countered by misinfo of its own, which is plausible and hence effective. What actions (if any) should be taken *against* the rebuttal discourse?

Thoughts? Are there good game-theoretic accounts of this? Or moral-philosophical ones?

@kzollmann

Really great piece from Sacasas about what I think is my biggest worry about LLM chatbots: how easily they can, without intention, take advantage of our highly social nature, and how dangerous that is when combined with a population of lonely, isolated people. theconvivialsociety.substack.c

I generally don't talk about RECEPTIO and #ReceptioGate here (there's still plenty going on at the other place), but for those who are interested, there are two more damning reports published today by the Dutch fact-checking website Nieuwscheckers.nl
One mainly concerns extensive plagiarism in a book about Jews fleeing Italy in 1943-45: nieuwscheckers.nl/receptiogate
The other concerns plagiarised artwork, with very interesting animations: nieuwscheckers.nl/roos-is-een-

Looks like Microsoft finally took big steps to rein in their wildly rogue Bing chatbot:

- 50 message daily chat limit
- 5 exchange limit per conversation
- No longer allows you to talk about Bing AI itself

On the one hand, this is absolutely the right thing to do

Have to admit I'm a bit gutted I never got to talk to unfiltered Bing (aka Sydney) first hand though

twitter.com/petergyang/status/

Though a scenery thing is already fixed in place by default in #Inform7, even if you mark it portable, the can't take scenery rule would still prevent taking it. Like things in general, scenery is not pushable between rooms by default, but even if you made it pushable between rooms, the can't push scenery rule would prevent that. (Backdrops and doors are never pushable between rooms, full stop.)
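A minimal Inform 7 sketch of the point above, using a hypothetical room and statue. The two rule names are the Standard Rules' own; the "does nothing when" phrasing switches a rule off for a particular object, which is what you'd need on top of the property declarations:

```inform7
The Courtyard is a room.
The statue is scenery in the Courtyard.

[Marking scenery portable is not enough on its own: the can't take scenery rule still blocks TAKE STATUE.]
The statue is portable.
The can't take scenery rule does nothing when the noun is the statue.

[Likewise for pushing: even declared pushable between rooms, scenery is stopped by the can't push scenery rule.]
The statue is pushable between rooms.
The can't push scenery rule does nothing when the noun is the statue.
```

With both "does nothing when" lines in place, PUSH STATUE NORTH and TAKE STATUE go through despite the scenery declaration.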

Are there recommendations for books or blogs or … with information on diseases aimed not at patients but at their relatives? English or German would be fine

@ct_bergstrom @FrankSonntag

The following snippet is an example of what I mean:

Yes, the term "hallucinate" has an established meaning as AI jargon. Loosely, and in the context of large language models (LLMs) such as GPT-3, it refers to a situation in which the AI makes claims that were not in the training set and which have no basis in fact.

But I want to look at how this use of language in public communications perpetuates misunderstandings about AI and helps distance the tech firms that create these systems from the consequences of their failures.

A lesser issue is that in common language and in common understanding, as well as in medical science, a hallucination is a false sense impression that can lead to false beliefs about the world.

A large language model does not experience sense impressions, and does not have beliefs in the conventional sense. Using language that suggests otherwise serves only to encourage the sort of misconceptions about AI and consciousness that have littered the media space over the last few months in general and the last 24 hours in particular.

The bigger problem with this language is that the term "hallucination" refers to pathology. In medicine, a hallucination arises as a consequence of a malfunction in an organism's sensory and cognitive architecture. The "hallucinations" of LLMs are anything but pathology. Rather, they are an immediate consequence of the design philosophy and design decisions that go into the creation of such AIs. ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people. It's behaving exactly as it was designed to: making up linguistically plausible responses to dialogue, in the absence of any underlying knowledge model, and guessing when its training set offers nothing more specific.

I would go so far as to say that the choice of language, saying that AI chatbots are hallucinating, serves to shield their creators from culpability. "It's not that we deliberately created a system designed to package plausible but false claims in the form of trusted documents such as scientific papers and Wikipedia pages. It's just that despite our best efforts this system is still hallucinating a wee bit."

The concept of hallucinating AI brings to mind images of HAL struggling to sing Daisy Bell as Dave Bowman shuts him down in 2001: A Space Odyssey. No one programmed HAL to do any of the things he did in the movie's climax. It was pathology, malfunction, hallucination.

When AI chatbots flood the world with false facts confidently asserted, they're not breaking down, glitching out, or hallucinating. No, they're bullshitting. In our book on the subject, we describe bullshit as involving language intended to appear persuasive without regard to its actual truth or logical consistency. Harry Frankfurt, in his philosophy paper "On Bullshit", distinguishes between a liar, who knows the truth and tries to lead you in the opposite direction, and a bullshitter, who doesn't know and/or doesn't care about the truth one way or the other*. (Frankfurt doesn't tell us what to think about someone who hallucinates and relays false beliefs, but it is very unlikely that he would consider such a person to be bullshitting.) Frankfurt's notion of bullshit aligns almost perfectly with what ChatGPT and its like are generating. A large language model neither knows the factual validity of its output, since there is no underlying knowledge model against which its text strings are compared, nor is it programmed to care.

Language matters, and it perhaps matters more than average when people are trying to describe and understand new situations and technologies beyond our previous experiences. Talking about LLMs that hallucinate not only perpetuates the inaccurate mythos around the capabilities of these models; it also suggests that with a bit more time and effort, tech companies will be able to create LLMs that don't suffer from these problems. And that is misleading. Large language models generate bullshit by design. There may be ways to develop AIs that don't do this, perhaps by welding LLMs to other forms of knowledge model, or perhaps by using some completely different approach. But for pure LLMs, the inaccuracies aren't pathological. They're intrinsic to the approach.

Established jargon or not, it's time for those who write for the public about AI and large language models to abandon the term "hallucinating". Call it what it is. Bullshitting, if you dare. Fabricating works too. Just use a verb that signals that when a chatbot tells you something false, it is doing exactly what it was programmed to do.

Ho ho ho. Copyright being used to take down research outputs (again). Wiley versus SSRN (Elsevier/RELX). Everybody loses 👍 🙃
twitter.com/TomValletti/status
