After hearing Sebastian Bubeck talk about the Sparks of AGI paper today, I decided to give GPT-4 another chance.

If it can really reason, it should be able to solve very simple logic puzzles. So I made one up. Sebastian stressed the importance of asking the question right, so I made clear that this is a logic puzzle and didn't add anything confusing about knights and knaves.

Still, it gets the solution wrong.


@ct_bergstrom It does even worse when spatial/geometrical reasoning is required. Still, without endorsing claims about AGI, the screenshotted imitation of solving a logic puzzle is a pretty mind-blowing performance for a pure language model. Much better imitation than I would have thought possible if you had asked me 5 years ago.
