Microsoft is really amping up the GPT AGI hype with some truly terrible papers. One recent paper ("Sparks of Artificial General Intelligence:
Early experiments with GPT-4" h/t @ct_bergstrom) has examples of what they consider to be evidence of "commonsense reasoning". Let's take a look! 1/
@ct_bergstrom This, of course, is a very old riddle where the answer depends on understanding how to avoid predator/prey combinations. One question is: did GPT-4 reason about this, or did it memorize the answer because it saw it during training? 3/
@ct_bergstrom I think the answer is clear. If you ask GPT-4 how it arrived at the correct answer, it happily tells you that it's already familiar with the puzzle. 4/
@ct_bergstrom And if you just switch it up a bit (substitute a cow for the fox), it gives an incorrect answer (since it leaves the cow alone with the corn). There are other examples of this you can discover for yourself if you play with the examples in the appendix.
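@ct_bergstrom If you want to try this perturbation test yourself via the API rather than the ChatGPT UI, here's a minimal sketch. Assumptions: the OpenAI Python SDK, the `gpt-4` model name, and paraphrased puzzle wording; the exact prompts from the paper and screenshots aren't reproduced here.

```python
# A minimal sketch of the perturbation test, assuming the OpenAI Python SDK
# (openai>=1.0) and an OPENAI_API_KEY in the environment. The puzzle wording
# is paraphrased, not the exact prompt from the paper or the screenshots.
from openai import OpenAI

client = OpenAI()

ORIGINAL = (
    "A man has to cross a river with a fox, a chicken, and a sack of corn. "
    "His boat carries only himself and one item. Left alone, the fox eats "
    "the chicken and the chicken eats the corn. How does he get everything "
    "across safely?"
)

# Same structure with a cow substituted for the fox, but the unsafe pairs
# change: now the corn must be protected from the cow as well. A memorized
# "take the chicken first" solution leaves the cow alone with the corn.
PERTURBED = (
    "A man has to cross a river with a cow, a chicken, and a sack of corn. "
    "His boat carries only himself and one item. Left alone, the cow eats "
    "the corn and the chicken eats the corn. How does he get everything "
    "across safely?"
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt to the model and return its reply."""
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation
    )
    return resp.choices[0].message.content

for label, prompt in [("original", ORIGINAL), ("perturbed", PERTURBED)]:
    print(f"--- {label} ---")
    print(ask(prompt))
```

Compare the two answers: a plan that ever leaves the cow alone with the corn in the perturbed version is a constraint violation, and a tell that the model is pattern-matching the classic puzzle rather than reasoning about the stated rules.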
@ct_bergstrom Here's another alleged example of commonsense reasoning that fails if you just tweak it a bit. Shot:
@ExcelAnalytics No, ChatGPT uses GPT-3.5. You (currently) have to pay OpenAI $20/month for ChatGPT Plus, which gives you GPT-4 access.