While I am harping on the subject of internet social media based on self-publishing rather than large republishing companies like Fbook, Twitter, etc...

I am quite concerned about the raw records that form the history of our times simply vanishing, irretrievably, as people age and die.

In earlier days things were recorded with relative permanence on paper (or stone). Not so today - one's entire life's work can vanish in a few microseconds.

It is not just today's media and storage that are impermanent; we have also created a clock-driven scythe in the form of ICANN's utterly stupid domain name renewal system. There are more potential domain names than there are electrons in the universe, yet ICANN's rules require leasing them in one- to ten-year increments, thus undermining much of the means of referencing our already weakly permanent digital creations.

(I won't harp on how impermanent and fragile our database-driven, dynamically constructed web pages are.)

@karlauerbach sounds like a good time to bring up systems where content is located by its hash rather than by a location or DNS record.
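
For example, a minimal sketch of content addressing in general (not any particular system): the address is derived from the bytes themselves, so it stays valid no matter which host serves them or which name points at them.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address depends only on the bytes themselves, not on where they are hosted
    return hashlib.sha256(data).hexdigest()

page = b"<html>my life's work</html>"
print(content_address(page))  # the same address from any host, in any year
```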

Really, it sounds like you're trying to use DNS for something it was never intended for, and complaining that it doesn't do well something that it was never supposed to do in the first place.

DNS was not supposed to be about permanence.

@volkris
That's a fair point but dodges the issue.

What if you could have both:
- permanent, content-addressable data
- permanent DNS

Autonomi is coming and offers both in a truly decentralized, autonomous network (and no blockchain).

(BTW, IPFS data isn't permanent; just like DNS, it can disappear unless someone keeps it alive.)

@karlauerbach

@happyborg you can't have permanently accessible, content-addressable data. That is a pipe dream. All data accessibility relies on someone being around who is willing to expend resources to serve it, and that cannot be guaranteed.

So first step is to get away from talking about permanence, which is just not something that can be promised.

Second step is to separate different roles being provided by different tools here.

DNS does not serve content; that's just not what the tool does, not the role it plays. So you can't really talk about keeping data alive in the same context as DNS. DNS doesn't do that in the first place.

In the end, you're free to run your own DNS. Any of us can start our own name servers to provide whatever lookup we want, for as long as we want.

I think you are just really confusing a lot of different topics here.

@karlauerbach

@volkris @happyborg I have run my own DNS servers, even had my own TLD. (cavebear.com/eweregistry/)

The core part of most URLs is the domain name part. Invalidate that and the URL becomes useless and the thing it points to becomes an orphan.

ICANN's year-by-year rent pretty much guarantees that links will eventually die as people die, thus orphaning content.

Preservation after death isn't easy - one needs storage and a form that are likely to persist. Dynamic content - WordPress, etc. - is built from piles of JavaScript, Python, Perl, and database SQL. Those, as any programmer knows, require maintenance (witness Python 2's replacement by the somewhat incompatible Python 3).

We have already largely lost a huge amount of content created in Flash. Yes, techies can figure out how to get vanilla Flash to play, but for most people it's a non-starter.

Many of us can no longer read Zip disks, floppies, etc. (I even have some old paper tapes.)

And TLS algorithms are now and then deprecated, putting content onto unreachable islands.

@happyborg @volkris I really want permanent DNS; take a look at my half-joking, but also serious, proposal at cavebear.com/eweregistry/

Permanent content is a hope more than a reality - too many formats keep changing. That's why I keep my own stuff in Hugo, which takes flat-file input and generates a tree of very portable files that can be served - via a simple directory copy (tar, rsync, etc.) - as a Document Root under Apache, Nginx, or pretty much any other web server.
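
For what it's worth, a minimal sketch of that kind of deployment (the ./site project path and /var/www/html document root are just assumed examples):

```python
import shutil
import subprocess
from pathlib import Path

SITE_DIR = Path("./site")          # assumed location of the Hugo project
DOC_ROOT = Path("/var/www/html")   # assumed web server document root

# Build the static tree; by default Hugo writes it to <project>/public
subprocess.run(["hugo"], cwd=SITE_DIR, check=True)

# Deployment is just a directory copy: no database, no server-side code to maintain
shutil.copytree(SITE_DIR / "public", DOC_ROOT, dirs_exist_ok=True)
```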

Accessibility is also troubled - IP addresses change, whether IPv4 or IPv6. That's why DNS permanence is important. But even TLS is troubled - the IETF has deprecated digest algorithms, and that has left some web services (particularly on IoT devices) unusable by many browsers. (I preserve some old systems as VMs just to solve this issue.)

@karlauerbach
We're on the same page and I'm working to make this a reality. Autonomi is a project I've been helping out with for a decade because of that, and I'm working on a demo in this area at the moment. So watch this space!
@volkris

@karlauerbach but if IP addresses change, DNS permanence is undermined by that other weak link in the chain.

Again, I don't think you're using the right tool for the job, and then complaining that the tool doesn't work well.

@happyborg

@karlauerbach@sfba.social @happyborg@fosstodon.org @volkris@qoto.org

If you lose that certificate you will no longer have the ability to manage and control your domain name.

If it's going to be a centralised registry then you might as well have verification by means other than a public key as an option.

@hyolo
Data is stored permanently, so anything built on that will be permanent, but it can be updated in a non-destructive way using the append-only RegisterCRDT data type.

So if you map a name to the address of a register (e.g. by hashing the name), the register can store the content address pointed to by that name - and all previous addresses - so you get versioned, perpetual data. The data pointed to by a register could be anything, not just a website.
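
A toy sketch of that mapping, purely conceptual and not the actual Autonomi API (the in-memory dict stands in for the network, and the function names are made up):

```python
import hashlib

# Toy in-memory stand-in for the network: register address -> append-only history
registers: dict[str, list[str]] = {}

def register_address(name: str) -> str:
    """Derive a register address deterministically from a human-readable name."""
    return hashlib.sha256(name.encode()).hexdigest()

def publish(name: str, content: bytes) -> str:
    """Address content by its hash and append that address to the name's register."""
    content_addr = hashlib.sha256(content).hexdigest()
    registers.setdefault(register_address(name), []).append(content_addr)
    return content_addr

def resolve(name: str, version: int = -1) -> str:
    """Return the latest (or any earlier) content address recorded for a name."""
    return registers[register_address(name)][version]

publish("example.site", b"<html>v1</html>")
publish("example.site", b"<html>v2</html>")
print(resolve("example.site"))      # latest version
print(resolve("example.site", 0))   # first version is still addressable
```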

@volkris @karlauerbach

@happyborg that honestly sounds like re-inventing a blockchain. How is data stored permanently, by who, and how is it ensured that they play nice? Also where can I learn more, since a search query for "autonomi" returns what seems like mostly unrelated stuff.
@hyolo @volkris @karlauerbach

@Amikke
The implementation is very different from a blockchain, and it is scalable, energy-efficient, and fast.

The network is composed of a large number of simple nodes which can run on very lightweight hardware such as a Raspberry Pi. Nodes store up to 2 GB each in encrypted chunks. Nodes that aren't performing properly will be shunned by others.
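
A very rough sketch of the chunk-storage idea (the chunk size, the XOR-distance placement, and the lack of real encryption here are my own simplifications, not the actual protocol):

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MiB per chunk, an assumed figure

def place_chunks(data: bytes, node_ids: list[int]) -> dict[int, list[str]]:
    """Split data into chunks, address each by hash, and hand each chunk to the
    node whose id is XOR-closest to that address."""
    placement: dict[int, list[str]] = {n: [] for n in node_ids}
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]            # a real network would encrypt this
        addr = hashlib.sha256(chunk).hexdigest()
        key = int(addr, 16)
        closest = min(node_ids, key=lambda n: n ^ key)
        placement[closest].append(addr)
    return placement
```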

There is a white paper and you can ask questions on forum.autonomi.community

WP: 'will insert shortly'

@hyolo @volkris @karlauerbach

@happyborg @Amikke @hyolo @volkris I worked, during the '70s, on capability-based machines that were designed to run for decades even with hardware failures. They used various algorithms to detect hardware or software wobbles and could reject members that weren't up to par. We used this as a foundation for secure OS development.

You may also find Dave Farber's Distributed Computer System (DCS) work from the late '60s interesting - it foreshadowed much of web based computing.

youtu.be/Noqf36Fx20s

@karlauerbach @happyborg @hyolo @volkris it's an extremely interesting topic; M. Ben-Ari's book on concurrent and distributed programming was one of _those_ CS books I enjoyed reading even though it was already some 25 years old at that point. But as I'm sure you know, the biggest problem with systems like the one discussed here isn't its distributed nature, nor the possibility that some nodes may fail, but the possibility that some nodes may belong to bad actors and be actively malicious. Trying to work around that is 100% of the reason behind cryptocurrency blockchains' inefficiency, so I'm very interested in how Autonomi is supposed to solve that problem while retaining efficiency.

@Amikke @happyborg @hyolo @volkris The major part of our work went beyond secure operating systems to secure networks. (We were doing this work for a three-letter agency located in Maryland.)

We were well aware of issues of nodes going bad or becoming hostile, with an invalid but lingering legacy of trust.

Our work was before public key methods became available (although Whit Diffie was a member of our team, he didn't think up public-key cryptography until later.)

We also did a lot of work with development of mathematical models of what we meant by "secure" and did formal proofs of correctness of our source code to prove that we met those models.

Some of my work was on debugging these kinds of systems. Getting into a system to figure out what is going awry does resemble a security penetration.

Unfortunately, because of our customers' paranoia our work was wrapped behind security barriers and very little ever was published to the public.
