While GPT/AI/ML chatbots are all the rage, might I remind you of an older fact-check I wrote about whether or not GPT-3 is "conscious":
https://tecc.media/claim-gpt3-is-conscious/
> Ascribing any meaningful form of subjective experience or awareness to GPT-3 remains a stretch, to say the least.
> Saying that GPT-3 is conscious because it can generate text that looks human-written is like saying a large-screen TV is a window because it can show images of the great outdoors.
@rysiek I'm somewhat disappointed by that article: it doesn't mention that one can think of a human brain as a Chinese room, and it asserts, without argument, that the whole system of a Chinese room doesn't "understand" Chinese.
@rysiek I'm curious whether you think that it's potentially impossible, or whether the existence of such a simulation matters for the question (assuming it's possible), or something else (that we'd learn something interesting while developing such a simulation that could provide evidence or arguments here?).
@robryk statements like "X is impossible" tend to be proven false given enough time. So, in the long run, it's *probably* possible.
@robryk I get that.
My line is at "human brain" for now. At least until I see a convincing simulation of a human brain (yay, cellular automata!).
ML/AI is not a convincing simulation of a human (or any other) brain.