While I am harping on the subject of internet social media based on self-publishing rather than large republishing companies like Fbook, Twitter, etc...

I am quite concerned about the raw records that form the history of our times simply vanishing, irretrievably, as people age and die.

In earlier days things were recorded with relative permanence on paper (or stone). Not so today - one's entire life's work can vanish in a few microseconds.

Not only are today's media and storage impermanent, but we have also created a clock-driven scythe in the form of ICANN's utterly stupid domain-name renewal system. There are as many potential domain names as electrons in the universe, yet ICANN's rules require leasing them in 1-to-10-year increments, thus undermining much of our means of referencing our already weakly permanent digital creations.

(I won't harp on how impermanent and fragile our database-driven, dynamically constructed web pages are.)

@karlauerbach sounds like a good time to bring up where content is located by its hash rather than some location or DNS record.
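The core idea of content addressing is that a document's identifier is derived from the document itself, not from where it happens to be hosted. A minimal sketch (the function names here are illustrative, not any particular system's API):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a location-independent address from the content itself."""
    return hashlib.sha256(data).hexdigest()

page = b"<html><body>My life's work</body></html>"
addr = content_address(page)

# Anyone holding an identical copy computes the identical address,
# so the content can be fetched from whichever node still has it -
# no DNS record or lease renewal involved.
assert content_address(page) == addr
```

The address stays valid for as long as *any* copy of the bytes survives, which is exactly the property DNS-based URLs lack.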

Really, it sounds like you're trying to use DNS for something it was never intended for, and complaining that it doesn't do well something that it was never supposed to do in the first place.

DNS was not supposed to be about permanence.

@volkris
That's a fair point but dodges the issue.

What if you could have both:
- permanent content-addressable data.
- permanent DNS.

Autonomi is coming and offers both in a truly decentralized, autonomous network (and no blockchain).

(BTW IPFS data isn't permanent, just like DNS it can disappear unless someone is keeping it alive.)

@karlauerbach

@hyolo
Data is stored permanently, so anything built on it will be permanent, but it can be updated in a non-destructive way using the append-only RegisterCRDT data type.

So if you map a name to the address of a register (e.g. by hashing the name), the register can store the content address pointed to by that name - and all previous addresses - so you get versioned, perpetual data. The data pointed to by a register could be anything, not just a website.
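The scheme described above can be sketched in a few lines. This is a toy model under stated assumptions - the name-to-register derivation and the `Register` class are hypothetical, not Autonomi's actual API - but it shows how an append-only register gives you both a stable name and a full version history:

```python
import hashlib

def register_address(name: str) -> str:
    # Assumed scheme: the register's address is simply the hash of the
    # human-readable name, so resolution needs no central registry.
    return hashlib.sha256(name.encode()).hexdigest()

class Register:
    """Toy append-only register: updates never overwrite earlier entries."""
    def __init__(self):
        self.history = []            # every content address ever published

    def update(self, content_addr: str):
        self.history.append(content_addr)

    def current(self) -> str:
        return self.history[-1]

# Publish two versions of a site under one name.
registers = {}
addr = register_address("my-site")
reg = registers.setdefault(addr, Register())
reg.update("content-hash-v1")
reg.update("content-hash-v2")

# The name always resolves to the latest version,
# while every earlier version remains reachable.
assert reg.current() == "content-hash-v2"
assert reg.history[0] == "content-hash-v1"
```

The real RegisterCRDT additionally merges concurrent updates without conflict, which a plain list can't show; the point here is only the append-only, versioned mapping from name to content address.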

@volkris @karlauerbach

@happyborg that honestly sounds like re-inventing a blockchain. How is data stored permanently, by whom, and how is it ensured that they play nice? Also, where can I learn more? A search query for "autonomi" returns what seems like mostly unrelated stuff.
@hyolo @volkris @karlauerbach

@Amikke
The implementation is very different from a blockchain: it's scalable, energy-efficient, and fast.

The network is composed of a large number of simple nodes that can run on very lightweight hardware such as a Raspberry Pi. Each node stores up to 2 GB in encrypted chunks. Nodes that aren't performing properly are shunned by the others.

There is a white paper and you can ask questions on forum.autonomi.community

WP: 'will insert shortly'

@hyolo @volkris @karlauerbach

@happyborg @Amikke @hyolo @volkris I worked, during the '70s, on capability-based machines that were designed to run for decades even with hardware failures. They used various algorithms to detect hardware or software wobbles and could reject members that weren't up to par. We used this as a foundation for secure OS development.

You may also find Dave Farber's Distributed Computer System (DCS) work from the late '60s interesting - it foreshadowed much of web-based computing.

youtu.be/Noqf36Fx20s


@karlauerbach @happyborg @hyolo @volkris it's an extremely interesting topic. M. Ben-Ari's book on concurrent and distributed programming was one of _those_ CS books I enjoyed reading even though it was already some 25 years old at that point. But as I'm sure you know, the biggest problem with systems like the one discussed here isn't their distributed nature, nor the possibility that some nodes may fail, but the possibility that some nodes may belong to bad actors and be actively malicious. Working around that is 100% of the reason behind cryptocurrency blockchains' inefficiency, so I'm very interested in how Autonomi is supposed to solve that problem while retaining efficiency.

@Amikke @happyborg @hyolo @volkris The major part of our work went beyond secure operating systems to secure networks. (We were doing this work for a three letter agency located in Maryland.)

We were well aware of issues of nodes going bad or becoming hostile, with an invalid but lingering legacy of trust.

Our work was before public-key methods became available (although Whit Diffie was a member of our team, he didn't think up public-key cryptography until later).

We also did a lot of work with development of mathematical models of what we meant by "secure" and did formal proofs of correctness of our source code to prove that we met those models.

Some of my work was on debugging these kinds of systems. Getting into a system to figure out what is going awry does resemble a security penetration.

Unfortunately, because of our customers' paranoia our work was wrapped behind security barriers and very little ever was published to the public.
