New Web Order

The World Wide Web is constantly undergoing a restless campaign of enshittification: worse design, worse maintenance, worse technical implementations, etc. Web pages, for example, were meant to be roughly 85% content worth reading, 10% HTML, and 5% CSS. Nowadays, we get:

Of course, these are only the content-level issues. If we analyze the core technology that powers the modern web, we quickly realize that the "clearnet" is built on a fundamentally centralized infrastructure: HTTP (Hypertext Transfer Protocol), the backbone of the web, relies on a hierarchical and bureaucratic system that enforces control at multiple levels, such as:

Despite occasional attempts at concocting decentralized protocols, the vast majority of internet activity still occurs within this centralized framework, and most of the proposed alternatives fail due to both user-adoption problems and fundamental design flaws. This article reimagines the most important aspects of the web as an attempt at a roadmap towards something approaching a proper solution, rather than ignoring the fact that shitty technology exists and doing nothing about it for fear of adding yet another "competing standard."

How shitty standards stick around (an inappropriate application of xkcd #927).

Solution ~

The web has two core technical problems:

  1. The Protocol (i.e., the clearnet)
  2. The Clients (i.e., web browsers)

1. The Protocol ~

As highlighted above, multiple decentralized protocols have already been made to reinvent the wheel. All of them currently have huge defects, but the closest model of how things should work on the web is a blend of I2P and BitTorrent, something quite similar to Hyphanet.

The protocol I have in mind doesn't deal with websites at all; it handles only data transmission. There's no fixed way to display content (the way web browsers render HTML); instead, different clients interpret the data they receive however they see fit. One client might show a chat, another a video, another a forum post, as will be seen below, keeping things flexible and decentralized.
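To make that separation concrete, here is a minimal Python sketch (the payload format and both clients are invented purely for illustration): the network only ever hands a client an opaque blob plus whatever metadata the publisher attached, and each client decides on its own how, or whether, to display it.

    import json

    # A hypothetical payload as it might arrive off the network: opaque bytes
    # plus publisher-supplied metadata. The protocol itself attaches no meaning.
    payload = {
        "meta": {"kind": "message", "author": "anon"},
        "data": b"hello from the overlay",
    }

    def chat_client(payload: dict) -> None:
        """A client that chooses to treat 'message' blobs as chat lines."""
        if payload["meta"].get("kind") == "message":
            print(f"<{payload['meta'].get('author', '?')}> {payload['data'].decode()}")
        else:
            print("(ignored: nothing this client knows how to display)")

    def archive_client(payload: dict) -> None:
        """A different client that just stores whatever it receives, untouched."""
        with open("archive.bin", "ab") as f:
            f.write(json.dumps(payload["meta"]).encode() + b"\n" + payload["data"] + b"\n")

    # Same data, two interpretations; neither one is "the" correct rendering.
    chat_client(payload)
    archive_client(payload)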

For transmission, the protocol serves as an anonymizing overlay network: data is encrypted and relayed from peer to peer through a dynamic, intelligently randomized set of volunteer-maintained nodes, which are selected according to their historical performance and prioritized by efficiency, balancing anonymity and speed in the network.
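As a rough sketch of that routing idea (the node names, scores, and path length below are all made up), a client could pick each hop at random but weight the draw by a node's historical performance score, which is exactly the anonymity/speed trade-off described above:

    import random

    # Hypothetical volunteer nodes with a historical performance score (0..1),
    # e.g. derived from past throughput and uptime. Higher = faster, more reliable.
    NODES = {
        "node-a": 0.92,
        "node-b": 0.75,
        "node-c": 0.40,
        "node-d": 0.88,
        "node-e": 0.15,
    }

    def pick_relay_path(nodes: dict[str, float], hops: int = 3) -> list[str]:
        """Choose a random multi-hop path, biased toward well-performing nodes.

        Randomness keeps paths unpredictable (anonymity) while the bias keeps
        them usable (speed). A real protocol would need much more than this:
        distinct guard/exit roles, score decay, Sybil resistance, and so on.
        """
        candidates = dict(nodes)
        path = []
        for _ in range(hops):
            names = list(candidates)
            weights = [candidates[n] for n in names]
            chosen = random.choices(names, weights=weights, k=1)[0]
            path.append(chosen)
            del candidates[chosen]  # no node appears twice in one path
        return path

    print(pick_relay_path(NODES))  # e.g. ['node-d', 'node-a', 'node-b']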

In addition, to get rid of monopolization (such as the clearnet's monopoly on site domain names), discoverability of content on the protocol would be decentralized: every piece of data is identified by a cryptographic hash. Discovery is left to the people willing to share those hashes, through manual exchange, search engines, forums, trackers, etc., and is not managed by the protocol itself.
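Content addressing itself is simple to sketch (SHA-256 is assumed here purely as an example digest): the identifier is derived from the data, so sharing the hash is sharing a verifiable pointer, and there is no name registry left to monopolize.

    import hashlib

    def content_id(data: bytes) -> str:
        """Derive the identifier from the content itself (SHA-256 as an example)."""
        return hashlib.sha256(data).hexdigest()

    def verify(data: bytes, claimed_id: str) -> bool:
        """Anyone who got the hash out of band (a forum, a friend, a tracker)
        can check that what they downloaded is what was advertised."""
        return content_id(data) == claimed_id

    blob = b"some piece of content published to the network"
    cid = content_id(blob)
    print(cid)                # share this string however you like
    print(verify(blob, cid))  # True: the data matches its identifier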

Moreover, the proposed protocol relies on decentralized (i.e., peer-to-peer) data distribution, where any device with access to a specific piece of data can help distribute it whenever both the sender(s) and recipient are online, similar to how torrenting works. To improve availability, peers should also be able to designate and manage external servers, whether centralized or decentralized, to host data 24/7.
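A hedged sketch of that distribution model (the in-memory "peers" and "hosts" below stand in for the network calls a real client would make over the overlay): ask whichever peers happen to be online for the hash, verify whatever comes back, and fall back to a designated always-on host if nobody answers.

    import hashlib
    from typing import Optional

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # Hypothetical stand-ins for remote sources, keyed by content hash.
    ONLINE_PEERS: list[dict[str, bytes]] = [
        {},                                    # a peer that doesn't have the data
        {sha256(b"cat video"): b"cat video"},  # a peer that does
    ]
    DESIGNATED_HOSTS: list[dict[str, bytes]] = [
        {sha256(b"cat video"): b"cat video"},  # an always-on server the publisher set up
    ]

    def fetch(content_hash: str) -> Optional[bytes]:
        """Try online peers first (pure P2P), then any designated 24/7 hosts."""
        for source in ONLINE_PEERS + DESIGNATED_HOSTS:
            data = source.get(content_hash)
            if data is not None and sha256(data) == content_hash:  # verify before trusting
                return data
        return None  # nobody reachable has it right now

    print(fetch(sha256(b"cat video")))  # b'cat video'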

2. The Clients ~

Instead of a "web browser" through which everything on the web can be accessed directly, different programs (or clients) should present different aspects of data on the web, each client with its own unique implementations:

As a matter of principle, these clients should be simple at a primitive level, including only the absolute necessities and avoiding features that replicate another program's job. Such features clutter the UI beyond recognition (with layers upon layers of toolbars, dropdown menus, and buttons), increase loading times, make bugs and security issues more likely (tiring out the developers who have to keep up with open issues), confuse, disgust, and distract users, make the software harder to use the way it was originally intended, and decrease efficiency, since the bolted-on feature will never do the job as well as a program built for that sole purpose and won't be used most of the time by most users anyway.

Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

Jamie Zawinski

Putting more than one type of functionality in a program, when that functionality is unnecessary and irrelevant to its usage, just pointlessly duplicates the job of the window manager. Features that are relevant to a program's usage, on the other hand, should simply be shipped as extensions (i.e., plugins, patches, etc.), so they can be kept out of the core (eliminating interdependencies), tested separately, scrapped easily, and removed or replaced at any time at the user's own will without fear of breaking anything.
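In code, the "everything optional is an extension" idea might look like this minimal sketch (the registry, the decorator, and the sample extension are all invented for illustration): the core never references any particular extension by name, so deleting one is just deleting its file.

    from typing import Callable

    # A hypothetical client core that only knows how to display text.
    # Extensions hook in through a registry; the core never imports them
    # directly, so removing or replacing one cannot break anything else.
    EXTENSIONS: dict[str, Callable[[str], str]] = {}

    def extension(name: str):
        """Decorator an extension uses to register itself with the core."""
        def register(func: Callable[[str], str]) -> Callable[[str], str]:
            EXTENSIONS[name] = func
            return func
        return register

    def display(text: str) -> None:
        """The one thing the core actually does: show text, after letting
        any installed extensions transform it."""
        for hook in EXTENSIONS.values():
            text = hook(text)
        print(text)

    # An optional extension (hypothetical) that could live in its own file,
    # be tested separately, and be deleted without touching the core.
    @extension("shout")
    def shout(text: str) -> str:
        return text.upper()

    display("a text editor is just a text editor")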

Another important point is that any extension should always be relevant to the usage of the specific software it extends (in some sense), to avoid piling shit upon shit, which is prevalent in software like Emacs and Neovim; sometimes a text editor is just a text editor, not an operating system. I understand that vanilla Neovim is unusable for coding without LSP and fast file management, but with useless plugins (like a frontend for YouTube or a minigame, regardless of whether they're too "bloated" or not), I don't think it has anything to do with "text editing" or "programming" anymore; you might as well pull out an entire ledger of NodeJS dependencies and Rust crates that let you generate 1000x more AI code after spending hours enabling the pissenshitten function in the poopenfarten configuration files of your boykisser.nvim plugin. I'm not being salty; it's just the truth that you're taking with a mountain of salt. I'm so sorry. </3