**“My AI is Lying to Me”: User-reported LLM hallucinations in AI mobile apps reviews**
"_The estimated prevalence of user-reported LLM hallucinations (RQ1) at 1.75% of AI-error-related reviews, while seemingly modest, represents a high-impact, low-frequency type of error that significantly erodes user trust. For product managers and QA leads, this signals that while hallucinations may not be the most common complaint, their presence is a critical indicator of deep model failure._"
Massenon, R., Gambo, I., Khan, J.A. et al. “My AI is Lying to Me”: User-reported LLM hallucinations in AI mobile apps reviews. Sci Rep 15, 30397 (2025). https://doi.org/10.1038/s41598-025-15416-8.
#OpenAccess #OA #Article #ComputerScience #AI #ArtificialIntelligence #Hallucinations #LLMs #Technology #Tech #Academia