**"My AI is Lying to Me": User-reported LLM hallucinations in AI mobile apps reviews**

"_The estimated prevalence of user-reported LLM hallucinations (RQ1) at 1.75% of AI-error-related reviews, while seemingly modest, represents a high-impact, low-frequency type of error that significantly erodes user trust. For product managers and QA leads, this signals that while hallucinations may not be the most common complaint, their presence is a critical indicator of deep model failure._"

Massenon, R., Gambo, I., Khan, J.A. et al. "My AI is Lying to Me": User-reported LLM hallucinations in AI mobile apps reviews. Sci Rep 15, 30397 (2025). doi.org/10.1038/s41598-025-154.
