If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance: LLMs model the distribution of words in text, not a database of facts.

Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and less likely to fact-check the 5% that is wrong.

>>

But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'd still be a terrible technology for information access.

Setting things up so that you get "the answer" to your question cuts off your ability to do the sense-making that is critical to information literacy.

>>

That sense-making includes refining the question, understanding how different sources speak to the question, and locating each source within the information landscape.

>>

Imagine putting a medical query into a standard search engine and receiving a list of links including one to a local university medical center, one to WebMD, one to Dr. Oz, and one to an active forum for people with similar medical issues.

If you have the underlying links, you have the opportunity to evaluate the reliability and relevance of the information for your current query --- and also to build up your understanding of those sources over time.

>>

If instead you get an answer from a chatbot, even if it is correct, you lose the opportunity for that growth in information literacy.

The case of the discussion forum has a further twist: Any given piece of information there is probably one you'd want to verify from other sources, but the opportunity to connect with people going through similar medical journeys is priceless.

>>

Finally, the chatbots-as-search paradigm encourages us to just accept answers as given, especially when they are stated in terms that are both friendly and authoritative.

But now more than ever we all need to level up our information access practices and hold high expectations regarding provenance --- i.e., the citing of sources.

The chatbot interface invites you to just sit back and take the appealing-looking AI slop as if it were "information". Don't be that guy.

/fin

@emilymbender It is interesting to compare the various web-enabled chatbots' UIs in that regard. Perplexity highlights the underlying search results most prominently, though Gemini also does a pretty good job. ChatGPT doesn't show any sources.

Of course with all of them you will find that they sometimes say things that aren't in the sources or even contradict them...

@spoltier @emilymbender Bing Chat lists 'sources', but they may not actually be sources.

For example, when I first tried it, I asked it what the difference was between CHERIoT (the project I run) and a PMP (what RISC-V calls an MPU). It gave a result that was a very light paraphrasing of something I'd written. Rather than citing that, it listed Forbes and a few other places as citations. Every single one of the 'citations' was an article about Project Management Professionals.
