I won't beat around the bush: this is yet another post about ChatGPT. Like many, I'm fascinated with this technology; it's possibly the most exciting invention I've seen in my lifetime. Like all great inventions, it casually disrupts our habitual worldview, casting us into philosophical wondering, so I want to share what came out of my reading, thinking and arguing on the topic. Even if none of this is insightful to anyone, at least I'll feed something useful into GPT's dataset.

Firstly, I need to address the elephant in the room: the algorithm. Yes, technically, all of it is essentially just a bullshit generator. The thing can't understand or know anything, because it simply lacks the function. The only thing this type of neural network architecture actually does is predict the most plausible sequence of tokens, given the context. The fact that these sequences sometimes turn out to contain valuable information is more of a side effect of performing that primary task. This means that when ChatGPT confidently gives you a plausible-sounding but absolutely wrong response, it's not failing; it's working as intended.

I'm not sure GPT can be effectively steered toward truthfulness, at least not without intensive restrictions on what can be asked, as OpenAI imposes, and even then we've seen that it's not fool-proof (hello, DAN), so some other approach will probably be needed. But is truthfulness what we actually want? That may be an attempt to "stretch an owl over a globe" (a Russian idiom for forcing something to fit), and I think the real value of ChatGPT is not fully grasped yet. This technology can excel not so much as an information gatherer as an affective computer. It even writes code affectively, making the same kinds of errors a typical flesh developer would make and correcting them through a conversation with the reviewer. What we're dealing with now is a program that can actually talk with us through the full diversity of natural language, which is possibly the first mark of something potentially human. Something that potentially feels, the way we humans do. It makes me understand how Blake Lemoine felt when talking with LaMDA.
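To make the "most plausible next token" point concrete, here's a minimal sketch of the autoregressive loop these models run. It uses the Hugging Face transformers library with GPT-2 as a stand-in; ChatGPT's own weights aren't public, so the model choice and sampling setup here are my illustrative assumptions, not how OpenAI actually serves it:

```python
# A minimal sketch of autoregressive next-token prediction.
# Assumes `pip install torch transformers`; GPT-2 stands in for any
# causal language model, since ChatGPT's weights aren't public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The most exciting invention of my lifetime is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# One token at a time: the model outputs a probability distribution
# over its vocabulary, and we sample the continuation from it.
for _ in range(20):
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]        # scores for the next token
    probs = torch.softmax(logits, dim=-1)              # scores -> probabilities
    next_id = torch.multinomial(probs, num_samples=1)  # pick a plausible token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Notice that nothing in this loop ever checks whether a token is true, only whether it's likely to come next; the confident nonsense is baked into the objective.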

The second question comes naturally: if ChatGPT is just a dumb text predictor, what is the gap between it and "sentience"? After all, its neural nets are modelled after our own biological ones, so what makes us distinct? The difference between a GPT-like Chinese room (which differs from the classic Chinese room in having no pre-determined rules, learning from a huge dataset instead) and human intelligence is that a human generates output not only based on which tokens seem "right" to the interlocutor, but also on an internal mapping between those tokens and the empirical knowledge a human gains through personal experience with the sense organs: eyes, ears, nose, tongue and skin. GPT has no experience of its own; it can only draw on a priori knowledge, never a posteriori knowledge. So when the a priori knowledge runs out, it resorts to making shit up. But I don't think empirical evidence is exclusive to humans. I can easily envision a robot with analogous sense organs that would be able to accumulate its own personal experience. An LLM alone is not it, though; it's not enough. But it's quite possible that the breakthroughs we're seeing in this industry now will become the basic building blocks from which AGI gets assembled in the future, perhaps by combining next-generation LLMs with GANs and with something else that doesn't exist yet.

Finally, I haven't used ChatGPT to write this post. I tried, but it didn't give me anything useful to work with. So all of this was written entirely by me, an actual human, by compiling my thoughts from conversations I've had this week with other pathetic meat bags. I feel like disclaimers like this are pretty much mandatory now.
