One of the things that makes #LLM #Chatbots seem like they can carry on a conversation is that they adjust their state according to the conversation as it evolves, and respond in light of previous comments from both participants.

This may be good for a conversation, but naively implemented, it's quite a bad thing for a search tool / knowledge engine.

Here's an example. Here is one answer Bard gives when you ask it what I think of the InfoMap algorithm that I co-developed with Martin Rosvall.

Set aside the quotes that it incorrectly attributes to me in a blog post I didn't write.

What is far more interesting is that here is the polar opposite answer to exactly the same query.

FWIW, I never disavowed InfoMap and think it works quite well, though I'm perhaps not as boastful as it suggested in the previous answer.

@ct_bergstrom Oh, Bard, I see. I saw that someone had used similar tricks to get GPT-3.5 to invent a detailed story about Gerald Ford shooting a cabbage on the White House lawn.
