@enkiv2 terrifying end to the Independence of Cyberspace, and a bad move.

that many fools will cheer. such arsonist, burn-it-down feelings in so many. wanting to blame, seeking the image of oppressors. we have only ourselves to blame for not making something better. which we could. at any time. thanks to the great web browsers & open internet that is very much still alive.

@jauntywunderkind420

Which "great browsers" are you talking about?

's Chromium or Google's ?

@enkiv2

@Shamar @jauntywunderkind420

Not only does the "open web" not exist, but it has never existed (basically because the URL/URI division was never properly made & therefore we're at the mercy of big hosts & domain registrars).

As soon as TBL decided to put the hostname of a server as part of an HTTP URL, centralization under the stacks became inevitable.


@enkiv2

Well, to be fair, it all depends on how hostnames are resolved.

For example adopts a decentralized Name System where each user manages their own zone.
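A toy sketch of that per-user-zone idea (not any particular system's API; all zone names, records, and addresses below are made up): each user publishes their own zone, and a name resolves by walking delegations from zone to zone, with no central registry involved.

```python
# Toy model of per-user zones: each zone maps labels either to a record
# or to a delegation pointing at another user's zone. All names, keys,
# and addresses here are made up for illustration.

zones = {
    "alice-zone": {
        "blog": {"record": "203.0.113.7"},      # hypothetical address record
        "bob":  {"delegate": "bob-zone"},       # delegation into Bob's own zone
    },
    "bob-zone": {
        "wiki": {"record": "198.51.100.22"},
    },
}

def resolve(start_zone, labels):
    """Resolve a name like ['bob', 'wiki'] starting from one user's zone."""
    zone = zones[start_zone]
    for label in labels:
        entry = zone[label]
        if "record" in entry:
            return entry["record"]
        zone = zones[entry["delegate"]]         # follow the delegation
    raise KeyError("name did not end in a record")

print(resolve("alice-zone", ["bob", "wiki"]))   # -> 198.51.100.22
```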

@jauntywunderkind420

@Shamar @jauntywunderkind420

If the hug of death is a thing -- i.e., if too many people wanting a particular piece of data causes that piece of data to become impossible for any of them to fetch -- you immediately have a re-centralizing force that favors people who can afford beefier machines and fatter pipes.
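A rough back-of-envelope illustration of that bottleneck, with entirely made-up numbers: a lone host's uplink gets divided among every requester, while a swarm's capacity grows with the number of peers that already hold the data.

```python
# Back-of-envelope only; every number is made up, the shape is the point.
file_mb     = 10        # size of the popular object, in megabytes
uplink_mbps = 100       # the lone server's upload capacity
requesters  = 10_000    # everyone who wants the object at roughly the same time

seconds = (file_mb * requesters * 8) / uplink_mbps   # megabits / (megabits per second)
print(f"single host: ~{seconds / 3600:.1f} hours to serve everyone")   # ~2.2 hours

# If every peer that finishes downloading re-shares at even 10 Mbps,
# total capacity grows with popularity instead of collapsing under it.
peer_uplink_mbps = 10
print(f"swarm ceiling: ~{requesters * peer_uplink_mbps / uplink_mbps:.0f}x the single uplink")
```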

@enkiv2

Well, caching also matters.

In HTTP, intermediate proxies were able to offload the server.

One of my strongest criticisms of HTTPS is that it's designed to force users to always connect to the server, disabling proxy caching.

That hurts users (because the server tracks them) and smaller websites (which need more bandwidth and computing resources).
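A minimal sketch of what such a shared intermediate cache does, assuming the intermediary can read the traffic (which is exactly what end-to-end TLS prevents); the header handling is simplified and the function names are made up.

```python
import time

# Minimal shared-cache logic. This only works when the intermediary can see
# the plaintext request and response, which end-to-end TLS rules out.
# fetch_from_origin is a stand-in for a real upstream HTTP client.
cache = {}  # url -> (expires_at, body)

def fetch_via_shared_cache(url, fetch_from_origin):
    """fetch_from_origin(url) must return (headers_dict, body_bytes)."""
    hit = cache.get(url)
    if hit and hit[0] > time.time():
        return hit[1]                         # served locally; the origin never sees this client
    headers, body = fetch_from_origin(url)    # one upstream request on behalf of many clients
    cc = headers.get("Cache-Control", "")
    if "public" in cc and "max-age=" in cc:
        max_age = int(cc.split("max-age=")[1].split(",")[0])
        cache[url] = (time.time() + max_age, body)
    return body
```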

@jauntywunderkind420

@Shamar @jauntywunderkind420

Caching matters, absolutely. And big websites spend lots of money on CDN services that cache their content physically near different ISPs, to decrease both load and latency.

A CAN (content-addressable network), in comparison, gives you this kind of caching for free, in proportion to the popularity of a hash, while simultaneously making tracking impossible (because you can't make literally every node collude with you).

Plus it removes server-side dynamic content, which shouldn't exist anyway.
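A sketch of how that caching-in-proportion-to-popularity emerges, under the simplifying assumption of a single global provider index (real networks distribute it, e.g. over a DHT): every node that fetches a hash starts serving it, so the number of copies tracks demand.

```python
import hashlib
import random

# Simplifying assumption: one global provider index. Real content-addressable
# networks distribute this index, but the effect is the same: replication
# tracks popularity, and no single party sees every request.
providers = {}   # content hash -> set of node ids currently holding a copy

def publish(node, data):
    h = hashlib.sha256(data).hexdigest()
    providers.setdefault(h, set()).add(node)
    return h

def fetch(node, h):
    source = random.choice(sorted(providers[h]))   # any node that already has it
    providers[h].add(node)                         # the fetcher becomes a provider too
    return source

h = publish("origin-node", b"popular page")
for i in range(1000):
    fetch(f"node-{i}", h)
print(len(providers[h]), "nodes now serve the object")   # 1001
```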

@Shamar @jauntywunderkind420

The correct way to cache is to use the same mechanism for addressing as for integrity checking, and then fetch permanent immutable objects from whoever nearby has happened to download them.

Then, in the extremely rare event that some dynamic behavior needs to be centralized on a remote machine, that machine should be updating references to permanent objects and returning those references.
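A compact sketch of that split, assuming a plain SHA-256 naming scheme: immutable blobs are addressed (and therefore verified) by their hash, and the only mutable piece is a small pointer from a name to the latest hash.

```python
import hashlib

blobs = {}   # hash -> immutable bytes; could be fetched from any peer that has them
refs  = {}   # name -> current hash; the only mutable (and centralizable) piece

def put(data: bytes) -> str:
    h = hashlib.sha256(data).hexdigest()
    blobs[h] = data                       # the address *is* the integrity check
    return h

def get(h: str) -> bytes:
    data = blobs[h]                       # in practice: from whichever nearby peer has it
    if hashlib.sha256(data).hexdigest() != h:
        raise ValueError("corrupted or forged object")
    return data

def update(name: str, data: bytes) -> None:
    refs[name] = put(data)                # "dynamic" behavior is just repointing a reference

update("frontpage", b"<html>v1</html>")
update("frontpage", b"<html>v2</html>")
print(get(refs["frontpage"]))             # b'<html>v2</html>', verified against its hash
```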

@enkiv2 @Shamar not by default, but once we start to use the capability we can build the alternative transport that does prefer local caches.

@enkiv2 @Shamar if a user can download a signed page of assets (what webpackage does) and another user can take it & open it & be at that same URL, see that it is signed... that IS a CAN, one using URLs as it so happens
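One way to see that claim, sketched with a generic detached signature rather than the actual Web Packaging / Signed Exchange format (the key handling, bundle layout, and URL below are made up; this assumes the third-party `cryptography` package): anyone holding the bundle and the publisher's public key can verify it entirely offline, no matter how the bytes reached them.

```python
# Generic signed-bundle sketch, not the real Web Packaging / Signed Exchange
# format: the point is only that possession of the bytes plus the publisher's
# public key is enough to verify authenticity, entirely offline.
# Requires the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical claimed URL plus the content that is supposed to live there.
bundle = b"https://example.org/page\n" + b"<html>...</html>"

publisher_key = Ed25519PrivateKey.generate()
signature = publisher_key.sign(bundle)
public_key = publisher_key.public_key()

# Later, on someone else's machine, with no network access at all:
try:
    public_key.verify(signature, bundle)
    print("bundle is authentic for that URL")
except InvalidSignature:
    print("tampered or mis-attributed bundle")
```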

@enkiv2 @Shamar I really wish we wouldn't take narrow views of what tech is & isn't for, and would be willing to accept expanding vision & expanding possibilities

@enkiv2 @Shamar the old web wasn't that great. fundamentally it just meant all state lived on the server. all forms, all processes, all of it was far off.

the page as a browser of information is to me much truer. the page has an obligation its many implementations alas fail to fulfill, the contract to keep the URL of its currently visible thing active. the viewer still has to cite what it is viewing. the document has a primacy. and pages fall from that grace. and that's bad. but the architecture isn't bad.

@enkiv2 @Shamar in fact, to me, the chief failing of that page-as-information-browser architecture we've arrived at is that most pages only view content from a single host. the site hosts its browser, but that browser can only operate within the confines of the host. a silly limitation.

@enkiv2 @Shamar the most exalted web apps are information-browsers across sites. RSS readers. Podcast listeners. (Non-closed) Social networks.
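A tiny illustration of that browser-across-sites shape, assuming the third-party feedparser package and placeholder feed URLs: one client pulls from many hosts and merges them into a single view.

```python
# Cross-site information browsing in miniature: one client, many hosts,
# merged into a single view. Requires the third-party "feedparser" package;
# the feed URLs are placeholders.
import feedparser

feeds = [
    "https://example.org/blog/feed.xml",
    "https://example.net/podcast/rss",
]

items = []
for url in feeds:
    for entry in feedparser.parse(url).entries:
        items.append((entry.get("published", ""), entry.get("title", ""), url))

for published, title, source in sorted(items, reverse=True)[:20]:
    print(published, "-", title, f"({source})")
```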

@enkiv2 @Shamar documents are wonderful but the toolkit to understand and view them should not be absolute. a document to me only suggests its preferred viewers

@enkiv2 @Shamar a lot of points about the dynamism of the page, the nature of the web. what's still somewhat missing from the pieces I've laid down is the web architecture.

the web to me isn't just docs & links, it's also an auto-generative system of more links & more docs. it's the ability for pages to be enriching the constellation of information with new points, new links. the page is also a tool to work & build within the constellations it can be a part of.

@jauntywunderkind420 @Shamar

So long as it depends upon a particular domain pointing to a particular IP that remains accessible, there's an expensive bottleneck that removes most of the benefits of a CAN. Webtech is *very* locked into the whole hosts-and-domains thing.

Now, you *could* put a p2p implementation in a browser, and some folks have -- but why bother, when it's so much harder than a native application to implement & maintain?

@enkiv2 @Shamar Beaker browser being a one-man operation proves it's really not that hard to sub out the transport protocol. I think too many smart would-be hackers are utterly deluded & drinking anti-web poison Kool-Aid, part of a cult that thinks everything needs to be burned down & rebegun. absolutely shit-house, loser, go-nowhere attitude has fucked the p2p-interested world super hard.

@enkiv2 @Shamar Dat also has extensions for other browsers.

you're also, pardon, wrong. webpackage works while offline, lets you give content to someone on a floppy & then open it, in a secure-origin fashion at that address, all while offline. the host-to-IP binding issue is real, as DNS mandates content expire & be refreshed after 7 days, but that's going to be like 5 lines of code in the browser that *will*, I promise, get hacked then changed so users can circumvent & use their downloaded content without dialing home.

@enkiv2 @Shamar a lot a lot a lot of people in tech focused on can't and wonts and wrongs & not nearly enough willing to be open, to dare to imagine how something may

@jauntywunderkind420 @Shamar

Beaker is a patched Chrome (like every other major browser except Firefox, which is a patched Netscape), because it's no longer possible for even a large expert team to create a modern browser from scratch. That's very scary. An individual can create a modern OS, soup to nuts, from scratch in a couple of years, by comparison.

@jauntywunderkind420 @Shamar

Because of this, "using webtech" functionally means adding a dependency (of a size hundreds of times larger than anything you could possibly write) on Google-controlled code (of complexity you cannot possibly ever audit).

Folks have a number of reasons for doing this. One is that they are unfamiliar with other stacks. One is that they implemented a website first & want to reuse code. One is that they think this is the most reliable way to be cross-platform.

@jauntywunderkind420 @Shamar

I don't consider the first two good excuses if the goal is to write good software. They are perfectly fine excuses in a capitalist context where you aren't allowed to learn new tools or fix technical debt because you'll be fired if you don't have an implementation by the end of the sprint, but software written under such constraints cannot reasonably be expected to be good.

The cross-platform concern is BS. All modern languages are cross-platform.

@jauntywunderkind420 @Shamar

Nothing actually needs to be burnt down. Webtech can be outcompeted, because the web stack is a bad fit for applications & using it wastes enormous amounts of engineer time (on top of wasting more computational resources, relative to the work performed, than any other popular stack).
