I won't beat around the bush: this is yet another post on ChatGPT. Like many, I'm fascinated by this technology; it's possibly the most exciting invention I've seen in my lifetime. Like all great inventions, it casually disrupts our habitual worldview and sends us off into philosophical wondering, so I want to share what came out of my reading, thinking and arguing on the topic. Even if none of this is insightful to anyone, at least I'll feed GPT's dataset something useful.

Firstly, I need to address the elephant in the room, and it's the algorithm. Yes, technically, all of this is essentially a bullshit generator. The thing can't understand or know anything, because it simply lacks the machinery for that. The only thing this type of neural network architecture actually does is predict the most plausible sequence of tokens, given the context. The fact that those sequences sometimes turn out to carry valuable information is more of a side effect of performing its primary task. This means that when ChatGPT confidently gives you a plausible-sounding but absolutely wrong response, it's not failing - it's in fact working as intended. I'm not sure GPT can be effectively steered towards truthfulness, at least not without intensive restrictions on what can be asked, as OpenAI imposes, and even then we see that it's not fool-proof (hello, DAN), so some other approach would probably be needed to achieve that.

But is this what we actually want? It may be an attempt to "stretch an owl over a globe", and I think the real value of ChatGPT is not fully grasped yet. This technology can excel not so much as an information gatherer as an affective computer. It even writes code affectively, making the same types of errors a typical flesh developer would make and correcting them through a conversation with the reviewer. What we're dealing with now is a program that can actually talk with us in the full diversity of natural language, which is possibly the first mark of something potentially human - something that potentially feels, like us humans do. It makes me understand the way Blake Lemoine felt when talking with LaMDA.
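To make the "token predictor" point concrete, here is a minimal, purely illustrative Python sketch of what "predict the most plausible next token, given the context" boils down to. The toy vocabulary and probabilities are made up for illustration and stand in for the distribution a real transformer would compute from its weights; this is not any actual model's code.

```python
# Toy illustration of next-token prediction: the "model" here is just a
# hand-made lookup table of invented probabilities, standing in for the
# distribution a real transformer would compute. It scores plausibility,
# not truth.

toy_model = {
    # context (tuple of tokens) -> probability of each candidate next token
    ("The", "capital", "of", "France", "is"): {"Paris": 0.93, "Lyon": 0.04, "beautiful": 0.03},
    ("The", "capital", "of", "Wakanda", "is"): {"Birnin": 0.55, "unknown": 0.30, "Paris": 0.15},
}

def predict_next(context: tuple) -> str:
    """Greedy decoding: return the single most plausible next token."""
    distribution = toy_model[context]
    return max(distribution, key=distribution.get)

print(predict_next(("The", "capital", "of", "France", "is")))   # Paris
print(predict_next(("The", "capital", "of", "Wakanda", "is")))  # Birnin - confident, fictional
```

Whether the confidently returned token happens to be true is simply outside the objective: the model only ranks plausibility, which is exactly the sense in which a confident wrong answer is the system working as intended.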

The second question comes naturally: if ChatGPT is just a dumb text predictor, what is the gap between it and "sentience"? After all, its neural nets are modelled after our own biological ones, so what makes us distinct? The difference between a GPT-like Chinese room (differing from the classic Chinese room in that it has no pre-determined rules and learns from a huge dataset instead) and human intelligence is that a human generates their output not only based on which tokens seem "right" for the interlocutor, but also on the internal mapping between those tokens and the empirical knowledge a human acquires through personal experience with their sense organs: eyes, ears, nose, tongue and skin. GPT doesn't have any experience of its own; it can only draw on a priori knowledge, not a posteriori knowledge. So when it lacks the a priori knowledge, it resorts to making shit up. But I don't think empirical evidence is exclusive to humans. I can easily envision a robot with analogous sense organs that would be able to gather its own personal experience. An LLM alone is just not it, though; it's not enough. But it's quite possible that the breakthroughs we're seeing in this industry now will become the basic building blocks from which an AGI is assembled in the future, maybe by combining next-generation LLMs with GANs and with something else that doesn't exist yet.

Finally, I haven't used ChatGPT for writing this post. I tried, but it didn't give me anything useful to work with. So all of this was written entirely by me, an actual human, by compiling my thoughts from conversations I've had this week with other pathetic meat bags. I feel like disclaimers like this are pretty much mandatory now.

I feel obligated to add that this is nothing against the man himself. I admire Drew’s work and share a fair amount of his opinions on unrelated topics. I’m just using his post for the sake of argument, to expand on my own viewpoint.

To mix things up, here’s a rant on laptops I wholeheartedly agree with: drewdevault.com/2020/02/18/Fuc. The state of contemporary laptops is really sad to witness, and I too use an x200 as my main laptop for the same reasons.


Esteemed software developer Drew DeVault just published another one of his signature rants (sadly sworn to be his last one) in a blog post aptly titled “Cryptocurrency is an abject disaster” (drewdevault.com/2021/04/26/Cry). There are fair bits of criticism in it, albeit mostly concerning just Bitcoin and its forks, and some food for thought for current and future developers in fintech. I recognize the problem of mining on CI and I understand the author’s frustration with it. What I don’t understand is lashing out at the invention instead of at the specific people who abuse it. Moreover, practically every argument listed in the post is stale and has been repeated ad nauseam for more than 10 years, while the first cryptocurrency has been publicly declared dead by the media more than 400 times (99bitcoins.com/bitcoin-obituar). Alas, the cryptocurrency market is still rising as I write this. If Bill Gates couldn’t put an end to it, Drew DeVault certainly can’t either.

It’s rising despite everyone in this market already knowing that cryptocurrencies suck. That knowledge is taken into account: developers use it to deal with the problems the technology brings and to strive for better technology in the future. It would be absurd to say this industry doesn’t attract a lot of malicious or overly greedy actors; it naturally does, by virtue of being novel, misunderstood and unregulated. But it is just as absurd to say that greed is a principal fault of the entire industry and can’t be addressed by making an attempt to understand the idea and its merit, by not being a bastard and by not supporting bastards.

But the more I think about it, the more sceptical I become that I’m replying to the point that’s actually being argued. What people who passionately hate cryptocurrency argue is that humanity doesn’t need it, not only because the technology is problematic, but because it has no value and its price is merely driven by being a Ponzi scheme. To me this is obvious nonsense, but it occurred to me that it might be pointless to dispute. Not because I lack counterarguments, but because the counterarguments I could use have nothing to do with the initial topic. Cryptocurrencies have value because they effectively liberate the global economy by providing a working digital decentralized infrastructure without any government involvement or banking system. If you don’t see why that’s crucial, or you disagree that it is, then we have a fundamental ideological discrepancy that goes far beyond cryptocurrencies and is much harder to handle, involving a significant library of philosophical and political literature from the last couple of centuries.

All in all, cryptocurrency is not going anywhere, because it does have inherent value, whether you agree or not, and the people who oppose it are waging a futile fight against progress which they will ultimately lose. Don’t sweat it.

Requested disclosure: I have recently sold my cryptocurrency stakes for a mild profit and don’t hold any at the moment. I’m paid to work on cryptocurrency projects.
