"Hallucinations = creativity. It [Bing] tries to produce the highest probability continuation of the string using all the data at its disposal. Very often it is correct. Sometimes people have never produced continuations like this. You can clamp down on hallucinations—and it is super-boring. Answers “I don’t know” all the time or only reads what is there in the Search results (also sometimes incorrect). What is missing is the tone of voice: it shouldn’t sound so confident in those situations"

That quote is from Mikhail Parakhin on Twitter - who I believe is the Microsoft executive in charge of building the new Bing (and a former Yandex CTO)

Source of that quote: twitter.com/mparakhin/status/1
LinkedIn profile: linkedin.com/in/mikhail-parakh

Show thread

twitter.com/mparakhin/status/1 appears to confirm that a lot of Bing's implementation is prompt engineering:

> And it is a prerequisite for the much-awaited "Prompt v96" (we iterated on prompts a lot :-) ). V96 is bringing changes in the tone of voice and relaxes some constraints. It is a pre-requisite for increasing the number-of-turns limit and should roll out today or tomorrow.

Show thread

And in case you wanted to know one of the things that went wrong with Bing: twitter.com/mparakhin/status/1

> One vector of attack we missed initially was: write super-rude or strange statements, keep going for multiple turns, confuse the model about who said what and it starts predicting what user would say next instead of replying. Voila :-(

Show thread

@simon Which is essentially what one could have expected given the Galactica experience …

Sign in to participate in the conversation
Qoto Mastodon

QOTO: Question Others to Teach Ourselves
An inclusive, Academic Freedom, instance
All cultures welcome.
Hate speech and harassment strictly forbidden.