While GPT/AI/ML chatbots are all the rage, might I remind you of an older fact-check I wrote about whether GPT-3 is "conscious":
https://tecc.media/claim-gpt3-is-conscious/
> Ascribing any meaningful form of subjective experience or awareness to GPT-3 remains a stretch, to say the least.
> Saying that GPT-3 is conscious because it can generate text that looks human-written is like saying a large-screen TV is a window because it can show images of the great outdoors.
@rysiek I'm somewhat disappointed by that article: it doesn't mention that one can think of a human brain as a Chinese room, and asserts that the whole system of a Chinese room doesn't "understand" Chinese without argument.
@robryk I don't agree that we can think of a human brain as a Chinese room. You can of course disagree with that.
But that's a philosophical debate that's been going on for a few hundred years (Searle wasn't the first to advance such an argument, either).
Where along the spectrum of "human brain", "simulation of a human brain", "simulation of a human brain executed by hand by humans" would you draw the line?
What I was mostly disappointed by is silent assumptions, not the opinion.
@rysiek I'm curious whether you think such a simulation is potentially impossible, or whether its existence matters for the question assuming it is possible, or something else (that we'd learn something interesting while developing such a simulation that could provide evidence or arguments here?).
@robryk statements like "X is impossible" tend to be proven false given enough time. So, in the long run, it's *probably* possible.