@AkaSci @sundogplanets Extrapolating beyond 2050 lol
@thefranke @aras @phire paper KPI maximizer detected
@marcel @smallcircles @libertyoftheforest With a few more levels of "we host your backups and you host ours", slicing, mixing, and anonymous scattering, it is possible to achieve a level of resiliency above and beyond that of centralized silos.
However, this gets into political territory if we're really serious about privacy, as that would require extreme levels of anonymous mixing, and you'd be almost guaranteed to host some **nasty** stuff without a way to kick it out or even detect it.
So p2p is in a bind here: either you're vulnerable to metadata dragnets and association tracking, or you inadvertently trade resources with someone you wouldn't like (if you had the option to know).
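To make the "slicing and mixing" bit concrete, here's a toy 2-of-2 XOR split (a sketch of the idea only, not any real GNUnet protocol): each neighbor hosts one share, and neither share alone reveals, or even identifies, the data.

```haskell
-- Toy "slice, mix, scatter": 2-of-2 XOR secret splitting.
-- Illustrative only; a real system would use k-of-n erasure coding
-- plus encryption so a lost peer doesn't lose your backup.
import qualified Data.ByteString as BS
import Data.Bits (xor)
import System.Entropy (getEntropy)  -- from the "entropy" package

-- Split a blob into two shares; each looks like uniform noise.
split2 :: BS.ByteString -> IO (BS.ByteString, BS.ByteString)
split2 blob = do
  pad <- getEntropy (BS.length blob)            -- one-time pad
  pure (pad, BS.pack (BS.zipWith xor pad blob))

-- Recombining the shares recovers the original.
join2 :: BS.ByteString -> BS.ByteString -> BS.ByteString
join2 a b = BS.pack (BS.zipWith xor a b)
```

And that's exactly the bind above: the property that keeps your neighbors from snooping on your backup is the same property that makes the **nasty** stuff undetectable.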
@marcel @smallcircles @libertyoftheforest I'd like to mark data longevity explicitly, even if it can be folded into maintenance.
Yes, you can run quite a few services on a tiny device now. But one glitch and it's all gone forever.
Centralization does not solve it by itself, but through pooling of resources: it is cheaper to have redundancy at scale, when the effort that goes into your backup solution and ops is amortized per user.
In a way, we have to replicate this aspect, but in a distributed fashion. Your tiny device should serve not only your community but your neighbors too: "I host your backups and you host mine".
#GNUnet has a good story here for privacy and distribution, but they got bogged down on the protocol level.
@marcel @smallcircles @libertyoftheforest Here's this point in detail: https://secushare.org/centralization
@dbattistella This doesn't make sense... But the dude's got a book to sell, can't blame him.
@wilfredh The "Just Enough Typing" section is unclear... I'm struggling to guess what it is intended to convey and how the code is supposed to help.
@reidrac Got frustrated with the experience or it just... didn't deliver?
@morganist @nikitonsky LOL. The image itself is that label.
@dpiponi good job picturing an angel... just needs a few more wings, some^W lots of eyes, and setting it ablaze. Okay, maybe weave a silver/golden ring in there too.
@me The feedback loop is important, as it is the thing that makes multi-pass iterative improvement possible. An LLM-like model on its own is a closed system, and sure, I'll grant that it will bounce around the middle of its probability landscape.
But giving it at least a scratchpad *allows* it to leverage the more powerful and abstract higher-level patterns it learned. And *this* has no limits on novelty, just like being Turing-complete elevates a system from the level of a thermostat to all the complexity you can eat.
Of course "allows" does not guarantee it would be used effectively. But at least it liberates the system from the tyranny of "mere interpolation".
@me Polluting the feeds is what we're here for 🥂
That, and the thinking ofc.
@me > what is "intelligence"?
Intelligence is the ability to 1) learn new skills and 2) pick a fitting skill from your repertoire to solve a task.
Rocks don't have this. Thermostats don't have this. Cats have a little. Humans have this. AIs are starting to have it. ASIs would have it in spades.
@me > as long as those tasks are within the scope of what we, humans, normally do
This is what I'm trying to contest.
> Where I don't expect AI to succeed, at least not in its current form, is creating new knowledge ... Simply because there is no pattern to apply here, it would be "the first ever" kind of thing.
But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds against finding those by blind search are astronomical.
> (a list of things that a model can't do)
That would not age well
What's really missing from your model (haha) is that the models don't work by simply unfolding the prompt ad infinitum. They're in a feedback loop with reality. What they miss in executive function we complement (for now) with the environment. And from what I've seen, the agents are getting closer to actually running as `while True: model.run(world)`. Just as you don't solve math with your cerebellum, the agents don't do "mere interpolation".
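Spelled out as a sketch (`model` and `world` here are stand-in functions, not any real agent framework):

```haskell
-- `while True: model.run(world)`, spelled out: the model proposes,
-- the world disposes, and the outcome feeds the next step.
agentLoop :: (obs -> IO act)  -- model: pick an action given an observation
          -> (act -> IO obs)  -- world: run it, report what actually happened
          -> obs              -- initial observation
          -> IO a             -- never returns
agentLoop model world = go
  where
    go o = do
      a  <- model o   -- the "interpolation" step
      o' <- world a   -- feedback from reality, not from any training set
      go o'
```

The novelty enters at `world a`: whatever comes back was never in the training distribution.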
@me There is one, thanks for focusing on it in the reply ((=
My claim is that the model training induces meta-learning...
> That was the goal all along - even before LLMs were a thing. OpenAI and DeepMind were on the hunt for making a thing that can learn on the go and adapt. And looks like we've got this by now.
... and that makes the exact content of its pre-training corpus irrelevant. As long as it can pick up knowledge and skills on the go it is intelligent. And the notion of "interpolation" (even in an insanely high-dimensional space) is irrelevant.
Can we please collectively shut up about stochastic parrots, just regurgitating the data, following the training distribution, interpolation, etc etc?
@maridonkers @leobm And its fancier cousin `<&>`
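For anyone passing by: `<&>` is `<$>` with the arguments flipped (it lives in Data.Functor since base 4.11), so a pipeline can read left to right:

```haskell
import Data.Functor ((<&>))

main :: IO ()
main = do
  print ((+1) <$> Just 2)       -- Just 3
  print (Just 2 <&> (+1))       -- Just 3, same thing, flipped
  getLine <&> length >>= print  -- reads nicely at the end of a chain
```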
Toots as he pleases.