(Former computer Go researcher here, cited in the original AlphaGo paper.)
A lot of people are saying that the recent human victory in Go highlights a fundamental flaw in things like ChatGPT, which are trained on human-generated data. There are a few problems with this claim:
1) While AlphaGo was initially trained on human data, its stronger descendants AlphaGo Zero and AlphaZero were not; they were simply given the rules and derived their strategies from self-play. (I don't know about KataGo and LeelaZero mentioned in the article.) I would be interested to know if this exploit works against such programs.
2) The exploit was discovered with a lot of computational power. While a human can carry out the plan, it's not as though a human out-thought the computer purely through "intuition" or "creativity".
3) Saying that ChatGPT has "weaknesses" vastly overestimates ChatGPT. It doesn't do any reasoning or have an internal model of the world, flawed or otherwise. It is simply, as someone quipped, "spicy autocorrect".
@Pat I almost certainly saw it a long time ago, but I don't remember it.
@peterdrake
Here are a couple of short video reviews of the episode (about 7 and 2 minutes) that sum it up, although neither fully conveys how absolutely arrogant the alien character is.
You probably want to watch the 7-minute video first, then the 2-minute one.
They contain spoilers, of course.
https://www.youtube.com/watch?v=2oi6R5FHf9U (6:44)
https://www.youtube.com/watch?v=ZIOEklxc_yY (2:27)