"Can AI Avoid the Enshittification Trap?"
No.
"recently vacationed in Italy...ran my itinerary past GPT5 for sightseeing suggestions & restaurant recommendations"
Embarrassing.
"When I got home, I asked the model how it chose that restaurant... The answer was complex and impressive"
No, the answer was a hallucination. The model has no access to whatever process produced its earlier reply; asked "how did you choose?", it just generates a plausible-sounding story from the conversation text (see the sketch below).
An entire article built on a hallucination, written by Steven Levy, editor-at-large of Wired.
Reporting on LLMs with a fundamental misunderstanding of how they work.
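A minimal sketch of why the "explanation" is worthless: when you ask a chat model how it chose an answer, the API just appends your question to the transcript and runs another completion over it. Nothing about the weights, activations, or sampling behind the earlier reply is available to the model. The ChatClient class below is a hypothetical stand-in, not any real vendor's API; the point holds for any chat-completion service.

```python
# Sketch: an LLM "explaining" its own answer is just another text completion.
# ChatClient is a hypothetical stand-in for any chat-completion API.

class ChatClient:
    """Toy stand-in: a real API works the same way conceptually."""

    def complete(self, messages: list[dict]) -> str:
        # A real service runs a forward pass over `messages` and samples text.
        # No record of the computation behind earlier replies exists here.
        return "<model-generated text conditioned only on `messages`>"

client = ChatClient()

history = [
    {"role": "user", "content": "Recommend a restaurant in Rome."},
]
history.append({"role": "assistant", "content": client.complete(history)})

# Asking "how did you choose that?" just extends the same transcript.
history.append({"role": "user", "content": "How did you choose that restaurant?"})
explanation = client.complete(history)

# `explanation` is generated from the text above, not from any trace of the
# computation that produced the recommendation: a confabulation, not a log.
```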