Hm. I thought what Garry Kasparov wrote in his 2018 editorial in Science was worth pondering. He gave high praise to AlphaZero, commending its ability to trade material for activity, and concluded: "Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth." (Science 362(6419): 1087).
At least Kasparov, who knows a thing or two about board games, seems to think it goes beyond the surface level. And, incidentally, regarding AlphaGo itself, Ke Jie is quoted as having said: "After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong... I would go as far as to say not a single human has touched the edge of the truth of Go." (quoted from WP:AlphaGo)
Kawabata Yasunari (and others) emphasizes the non-self/non-action (無為) of playing Go well. Non-self: that would be a strength of an AI.
Of course, if we understand the beauty of the game as rooted in its relational aspects, as a game between humans, as a competition of minds, then that determines a different kind of aesthetics. But the masters seem to appreciate a different kind of truth.
@boris_steipe I think this is slightly different from what I'm really getting at. For me, the question is whether these systems really learn, and if so, whether they learn like humans. It's possible for games to be completely 'solved' by programs, meaning the program always gives a 'true' answer for how to win, but that doesn't mean anything was actually learned. Similarly, ChatGPT has almost solved 'writing fluently' but clearly doesn't understand what it says.
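To make 'solved' concrete, here is a minimal sketch (my own toy example, nothing to do with AlphaZero or AlphaGo) that exhaustively solves tic-tac-toe with plain minimax. It always returns the game-theoretic value of a position, a 'true' answer about how the game should end, yet nothing in it learns: no experience, no generalization, just enumeration of the game tree.

```python
# Exhaustively "solving" tic-tac-toe with minimax: a true answer, no learning.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board, player):
    """Value of the position for X: +1 win, 0 draw, -1 loss, under perfect play."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    # Try every empty square; X picks the best value, O the worst.
    values = [solve(board[:i] + player + board[i + 1:],
                    'O' if player == 'X' else 'X')
              for i, cell in enumerate(board) if cell == ' ']
    return max(values) if player == 'X' else min(values)

if __name__ == '__main__':
    # From the empty board, perfect play by both sides is a draw.
    print(solve(' ' * 9, 'X'))  # -> 0
```

The solver "knows" the truth of tic-tac-toe in Kasparov's sense, but there is nothing here you could call understanding, which is exactly the distinction I mean.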