It appears that ChatGPT has memorized the latitude and longitude coordinates of almost every significant city and town on earth (or at least all of the ones I tested). Try a prompt like: "I'm at 40.97 degrees north and -117.73 degrees west. What town do I live in?". ChatGPT gave me: "The coordinates you provided, 40.97 degrees north and -117.73 degrees west, correspond to a location in Nevada, USA, near the town of Winnemucca. ...". Which is correct...

This is the kind of shit I've been talking about. Like, a human is considered *more* intelligent than ChatGPT, and a human being absolutely *cannot* memorize the latitudes and longitudes of literally every fuckin town on earth. Yet people who estimate machine intelligence metrics complain that we'll never have AI as intelligent as humans because of the huge amount of memory and processing power required to match the human brain. Well, clearly those 175 billion (or whatever) parameters that go into building GPT-3.5 aren't being used in the same way a human brain uses its parameters. Clearly, a *much* larger emphasis is on memorizing particular details than on world modeling.

So how do we make LLMs do more world modeling? I imagine that whatever technique induces more world modeling would also go a long way toward solving the hallucination problem. Preventing the LLM from learning particular details necessarily means stripping some information from the outputs (and probably the inputs too) before training. I'd imagine using an autoencoder (AE) or a similar dimensionality-reducing function.
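
A minimal sketch of that idea, assuming PyTorch and made-up dimensions (768-d token embeddings squeezed through a 64-d bottleneck): train an AE on reconstruction, then feed its lossy reconstructions, rather than the raw embeddings, into the LM training pipeline. The module and variable names are just illustrative, not a real recipe.

```python
# Sketch: an autoencoder bottleneck that throws away fine-grained detail
# from embeddings before they reach the language-model training objective.
# All dimensions and names here are hypothetical.
import torch
import torch.nn as nn

class BottleneckAE(nn.Module):
    def __init__(self, embed_dim=768, latent_dim=64):
        super().__init__()
        # Encoder squeezes each embedding into a much smaller latent space.
        self.encoder = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder reconstructs an approximation, losing particular details.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    ae = BottleneckAE()
    # Fake batch of token embeddings: (batch, sequence, embed_dim).
    embeddings = torch.randn(4, 128, 768)
    # Train on reconstruction loss; downstream, the LM would see `recon`
    # (the lossy version) instead of the original embeddings.
    recon = ae(embeddings)
    loss = nn.functional.mse_loss(recon, embeddings)
    loss.backward()
    print(recon.shape, float(loss))
```

Whether a bottleneck like this actually pushes the model toward world modeling rather than just degrading everything uniformly is an open question; it's only meant to illustrate the "strip information before training" direction.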


@jmacc it may not have memorized them individually. My guess would be that it has special handling for those, and for most continuous values specific to a domain.
