Which communication protocol or open standard in software do you wish was more common or used more?

Cyclohexane@lemmy.ml (mod) to Linux@lemmy.ml

Whether you're really passionate about RPC, MQTT, Matrix or wayland, tell us more about the protocols or open standards you have strong opinions on!


RSS. It's still around but slowly dying out. I feel like it only gets added to new websites because the programmers like it.

There's quite a few sites that still use it, and existing ones in the Fediverse have it built in (which is really cool). But you're right, the general public has no concept of having something download and queue up on a service rather than just going to the site. And the RSS clients are all over the place with quality...

WebSub (formerly PubSubHubbub). Should have been a proper replacement for RSS with push support instead of polling. Too bad the docs were awful and adopting it as an end user was so difficult that it never caught on.
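For the curious, the WebSub flow is just plain HTTP: the subscriber POSTs a form-encoded subscription request to the hub the feed advertises, the hub verifies the callback, and then pushes new content to it. A minimal sketch in Python (the hub and callback URLs here are made up):

import requests  # third-party HTTP client

# Hypothetical URLs; a real feed advertises its hub via a rel="hub" link.
hub = "https://hub.example.org/"
topic = "https://blog.example.org/feed.xml"
callback = "https://my-reader.example.net/websub-callback"

# Ask the hub to start pushing updates for the topic to our callback.
resp = requests.post(hub, data={
    "hub.mode": "subscribe",
    "hub.topic": topic,
    "hub.callback": callback,
})
print(resp.status_code)  # 202 means the hub will go verify the callback

# The hub then GETs the callback with a hub.challenge parameter that the
# subscriber must echo back, and afterwards POSTs new entries to the callback.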

I still want something push-based (without paying for one of those RSS-as-a-service offerings)

It's part of the RSS 2.0 standard. Of course it requires adoption by feed publishers.

rssCloud

Oh neat! I didn't know this existed. By any chance, do you know of any RSS readers that have implemented it?

No, I'm sorry, I pull my feeds manually using a barebones reader. I'm guessing your best bet is one of the web-based readers, as it would require a client with a TCP port that's reachable from the web. I have never seen a feed that provided the rssCloud feature, though.
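If you want to check whether a feed even advertises it, the RSS 2.0 <cloud> element sits directly under <channel>. A quick sketch with Python's standard library (the feed URL is just an example):

import urllib.request
import xml.etree.ElementTree as ET

feed_url = "https://example.org/feed.xml"  # example feed
with urllib.request.urlopen(feed_url) as resp:
    root = ET.parse(resp).getroot()

cloud = root.find("channel/cloud")
if cloud is None:
    print("No rssCloud support advertised")
else:
    # e.g. domain, port, path, registerProcedure, protocol attributes
    print(cloud.attrib)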


It's seen its renaissance recently

How so? Outside of very niche stuff or podcasts, I just don't see it used that often.

Most websites still use standard back ends with RSS support. Even static site generators do it. The only difficulty is user discovery.
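Discovery is at least partly solvable in software: most sites that have a feed advertise it with a <link rel="alternate"> tag in the page head, which a reader can pick up automatically. A rough sketch (using the third-party requests and beautifulsoup4 packages):

import requests
from bs4 import BeautifulSoup

def discover_feeds(page_url):
    # Fetch the HTML page and look for advertised RSS/Atom feeds.
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link", rel="alternate"):
        if link.get("type") in ("application/rss+xml", "application/atom+xml"):
            feeds.append(link.get("href"))
    return feeds

print(discover_feeds("https://example.org/"))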

Yeah… It always being there hardly makes it a “renaissance”, no?

Sadly, so many RSS feeds are just the first paragraph and not the whole article.


IPv6. The lack of IPv4 addresses is a problem, especially in poorer countries. But lots of servers and ISPs still don't support it natively. And what's worse, lots of sysadmins don't want to learn it.

Am sysadmin, can confirm I don't wanna learn it.

Also a sysadmin, really don't wanna learn it... or have to type it on the daily

My university recently had Internet problems, where the DHCP server only leased out IPv6 addresses. For two days, we could all see which sites implemented IPv6 and which didn't.

Many big corpo sites like GitHub or Discord apparently don't. Small stuff like my personal website or https://suikagame.com do.
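If you want to check this yourself without breaking your DHCP, you can just ask DNS whether a site publishes AAAA records. A small sketch using only the standard library:

import socket

for host in ("github.com", "discord.com", "suikagame.com"):
    try:
        # Ask only for IPv6 (AAAA) results.
        infos = socket.getaddrinfo(host, 443, socket.AF_INET6)
        print(host, "has IPv6:", sorted({i[4][0] for i in infos}))
    except socket.gaierror:
        print(host, "has no AAAA records (IPv4 only)")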

github is so stupid with that, it's actually funny

Lots of really large sites are horribly misconfigured. I had intermittent issues because one of the edge hosts in Netflix's round-robin DNS did not do MTU discovery properly.

IPv6 is great, but NAT is quite functional and is prolonging the demise of IPv4.

My ISP decided to put me behind CGNAT and broke my access to my network from outside my network. They wanted to charge me $5 a month to get around it. It's not easy to get around for a layman, but possible. More than anything it just pissed me off that I'd have to pay for something that one day ago was free.

How can you bypass CGNAT?

Set up a reverse proxy on another machine (like one of those free oracle cloud things). I can't go into detail because I don't know exactly how. I think cloudflare also has options for that for free. Either way it's annoying.

Cloudflare Tunnel, and its alternatives, such as LocalXpose, although the privacy is probably questionable, and many of them require a domain.


Say this to my very large Canadian ISP who still doesn’t support IPv6 for residential customers. Last I checked, adoption in Canada was still under 50%.

50%?? I fucking wish. In Spain we are at 5%. I finally got IPv6 on my phone this year, but I want it at home, which is still IPv4-only even though it's the same ISP.


Markdown. It's only in tech spaces that it's preferred, but it should be used everywhere. You can even write full books and academic papers in Markdown (maybe with only a few extensions like LaTeX / MathJax).

Instead, in a lot of fields, people are passing around variants of Microsoft Word documents with weird formatting and no standardization around headings, quotes, and comments.

Markdown is terrible as a standard because every parser works differently and when you try to standardize it (CommonMark, etc.), you find out that there are a bajillion edge cases, leading to an extremely bloated specification.

Agreed in principle, but in practice, I find it's rarely a problem.

While editing, we pick an export tool for all editors and stick to it.

Once the document is stable, we export it to HTML or PDF and it'll be stable forever.

Most people have luckily settled on CommonMark, including us.

CommonMark leaves some stuff like tables unspecified. That creates the need for another layer like GFM or mistletoe. Standardization is not a strong point for Markdown.

I believe CommonMark tries to specify a minimum baseline spec, and doesn't try to expand beyond that. It can be frustrating because we'd like to see tables, superscripts, spoilers, and other things standardized, but I can see why they'd want to keep things minimal.

Asciidoc is a good example of why everything should be standardized. While markdown has multiple implementations, any document is tied to just one implementation. Asciidoc has just one implementation. But when the standard is ready, you should be able to switch implementations seamlessly.

Have you read the CommonMark specification? It’s very complex for a language that’s supposed to be lightweight.

What's the alternative? We either have everything specified well, or we'll have a million slightly incompatible implementations. I'll take the big specification. At least it's not HTML5.

An alternative would be a language with a simpler syntax. Something like XML, but less verbose.

And then we'll be back to a hundred slightly incompatible versions. You need detailed specifications to avoid that. Why not stick to markdown?

Not if the language is standardized from the start.

Sure it will. It will be a detailed language from the start.

Man, I've written three novels plus assorted shorter form stories in markdown.

There's a learning curve, but once you get going, it's so fluid. The problem is that when it comes time to format for release, you have to convert to something else, and not every word processor can handle markdown. It's extra work, but worth it, imo.

Just set up pandoc and Bob's your uncle. It'll convert markdown to anything. You'll never have to open another word processor.
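If you'd rather script it than remember the flags, pandoc is happy to be driven from anything, e.g. a few lines of Python (the file names here are placeholders; PDF output additionally needs a LaTeX engine installed):

import subprocess

# Convert the same Markdown manuscript to several formats.
for target in ("book.epub", "book.docx", "book.pdf"):
    subprocess.run(["pandoc", "book.md", "-o", target], check=True)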

Nice! Thanks for the tip!

Edit: holy shit, how have I never run across that before? That's a brilliant program right there.

Pandoc + [your markdown editor of choice] is magic. Some editors even come with Pandoc as a dependency so you can export to more or less anything from the GUI. I think GhostWriter and Zettlr at least (I honestly can't be sure, I've changed editors so often and now I just have some Pandoc conversion scripts in my file manager menu).

For sure, I bet full-fledged editors like Word don't even let you import it.


I think Obsidian and Logseq are helping to change this.

Markdown is awesome, I agree! I did not realize you could extend Markdown with anything other than HTML. The HTML extension is quite nice for doing anything that Markdown doesn't support natively, but I wish there was an easier way to extend Markdown. Maybe the ones you listed are what I need.


Depends on the type of book, since you need HTML for all non-default styles. That raises the bar... you need a bit of web dev knowledge, which removes the biggest benefit of Markdown: simplicity / ease of use.

I frigging love markdown for everything!

My main wishlist item for Markdown is a better live collaborative editor. HedgeDoc works, but it's showing its age, and they don't seem to be getting close to releasing v2.

Etherpad also has a markdown extension, but it doesn't import / export that well.

It is too basic. I guess something more full-fledged like... typst?

reST (reStructuredText) is a good middle ground. I just wish it had more support outside of the Python community. It could use some new/better tooling than Sphinx.


UnifiedPush.

Unbelievable that we have to rely on Google and co. for something as essential as push messages! Even among the open source community, adoption is surprisingly limited.

Nobody knows about UnifiedPush. Last time I checked, their Linux D-Bus distributor also wasn't ready. There has to be a unified push to get it adopted.
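The application-server side is deliberately tiny, too: the app hands its server an endpoint URL from whatever distributor the user installed, and the server just POSTs the message body to that URL. A sketch (the endpoint is hypothetical):

import requests

# Endpoint obtained from the user's UnifiedPush distributor (placeholder URL).
endpoint = "https://push.example.org/UP?token=abc123"

# Push a small payload; the distributor delivers it to the app on the device.
resp = requests.post(endpoint, data=b'{"event": "new_message", "id": 42}', timeout=10)
print(resp.status_code)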


IPv6

I mean, why the hell is IPv4 still a thing??

Because SecOps still thinks NAT is security, and NetOps is decidedly against carrying around that stupid tradition.

You can even still NAT if you want to, lol

That said, have you looked at securing IPv6 networks? There are a lot of new paradigms that need to be secured.

Try remembering a handful of them.

In the world of computers, why would remembering numbers be the sticking point for new technologies?

Do you remember anyone's public key? Certificate?

I don't even remember (most) domain names, I just Google them or save them as bookmarks or something.

The reason IPv4 still exists is that ISPs benefit from its scarcity. Big ISPs already paid a lot of money to own IPv4 addresses; if they switched to IPv6, that investment would be worthless.

Try selling static IPv6 addresses as they do now with IPv4. People would laugh at them and just get a free IPv6 address from an ISP that wants to get new users and doesn't charge for it.

The longer ISPs delay the adoption of IPv6, the longer they can milk IPv4 scarcity.

I don't even remember my old ICQ UIN. People usually do that.

So yes, bring in IPv6.

Which ISPs offer IPv6 for free?
Asking for a friend.

IPv6 addresses are practically endless, therefore their value is practically 0. ISPs justify charging extra for static IPv4 because IPv4 addresses do have a value.

If ISPs charge for static IPv6, then one of them could just give that service for free (while keeping the rest of the prices the same as their competitors). That would get them more customers while costing them nothing.

EDIT: I can't give you an example of an ISP that offers free static IPv6 because there are no ISPs in my country that offer IPv6.

Should be every single one that supports IPv6.

For that matter, you should be getting an entire /60 at a minimum. Probably more like /56.

Shortening rules actually make IPv6 addresses easier to remember than IPv4. Just don't use auto configuration.
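Python's standard library will even do the shortening for you, which makes the point nicely:

import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)   # 2001:db8::1
print(addr.exploded)     # 2001:0db8:0000:0000:0000:0000:0000:0001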

damn if only we had a service that like, obfuscated and abstracted these hard to remember IPs that aren't very user friendly, and turned them into something more usable. That would be cool i think. Someone should make that.

Some kind of name system surely.

perhaps one that were to operate on like, a domain level, maybe.

gah, i'm just not too sure there's a good term for this though.


Do Not Track

Such a simple solution for the cookie banner issue. But it prevented websites from tricking users into allowing them to gather their data, so it had to go.
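Honoring it really would have been trivial; it's just one request header. A toy sketch of what a server could have done (Flask used purely as an example framework):

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # DNT: 1 means the user asked not to be tracked.
    if request.headers.get("DNT") == "1":
        return "Welcome. No tracking cookies, no banner."
    return "Welcome. (Tracking consent flow would go here.)"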

Nobody was going to honor that. That's just giving them an extra bit of data to track you with.

It could be forced by law

Globally?

Those cookie banners were introduced because of an EU law and are seen all over the world

Most of those cookie banners are not even needed; you only need them for tracking cookies, not login and session cookies. But of course everyone decided it is just easier to nag all the users with a big splash screen.

A lot of them are not even doing it right: you are not allowed to hint to the user that "accept all" is the "correct" choice by having it in a different color than the others. And being able to say no to all should be as easy as accepting all; often it isn't.

Basically, cookie banners are usually not needed, and when they are, they are most often incorrectly designed (not by accident).

But of course everyone decided it is just easier to nag all the users with a big splash screen.

Nope, the thing is, you'll very rarely find a website that only uses technically necessary session/login cookies. The reason every fucking website, yes, even the one from the barber shop around the corner, has a humongous cookie banner is that every fucking website helps google and other corporations to track users across the whole internet for no reason.

Yes, seen by people visiting EU websites or companies with an EU presence. And because whether or not they assign a cookie is easily verifiable by the person on the other end.

RSS (RDF Site Summary or Really Simple Syndication). It is in use a fair amount, but it is usually buried. Many people don't know it exists, and because of that I am afraid it will one day go away.

I find it a great, simple way to stay up to date across multiple web sites the way I want to (on my terms, not theirs). By the way, it works on Lemmy too :)
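For anyone who hasn't tried it, pulling a Lemmy community into a reader is a one-liner with the third-party feedparser package. The feed path below is how my instance exposes it; yours may differ slightly:

import feedparser

# Community feed URL; adjust the instance/community to taste.
feed = feedparser.parse("https://lemmy.ml/feeds/c/linux.xml?sort=Active")

for entry in feed.entries[:5]:
    print(entry.title, "->", entry.link)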

Honestly there is rarely a blog I want to follow that doesn't have it. I do think it would be great to have more readers using it so that it becomes more significant, but for my reading it is actually pretty great.

ODF/ODT/ODS

.md

SimpleX

Matrix

OpenPGP

Last, certainly not least... ActivityPub

Markdown really should have more widespread support than it does. It's just the right mix between plain text and an office document; I actually took my college notes with it because of how fast it was to format stuff. But as far as I know, there's no default program on any of the (major) OSes or distros for viewing it.

Maybe it's just due to a lack of standards for formatting or something, but regardless I do wish it was used and supported more.

Markdown is standardized? I haven't found two parsers that parse the same file the same way for any but the most trivial documents.

That's what I mean by a lack of a standard for Markdown. There needs to be at least a core standard for stuff (like bolding and italics) that is universal. Then if a program wants to add onto it, that's fine. But just the core parts being standardized would help a lot.

There are some pseudo-standards for it. GitHub-flavoured Markdown is probably the biggest of them. Then you get things like Obsidian-flavoured Markdown that is based off of GitHub's.

Heads up for anyone (like me) who isn't already familiar with SimpleX: unfortunately its name makes it impossible to search for unless you already know what it is. I was only able to track it down after a couple of frustrating minutes, after I added "linux" to the search on a lark.

Anyway it's a chat protocol

I am so confused about how SimpleX works

going based on preliminary understanding of this shit, it looks like it does all of the user handling on the client side explicitly, server side probably doesn't do anything of the significant sort.

Or at least to a degree that provides reasonable assurance that X person is different from Y person based on the messaging alone. Though your typing style is going to significantly influence it regardless of that.

probably not accurate, just what i gleaned in about 3 minutes.

XMPP

I came here to say matrix but I'm not gonna lie. If XMPP had gotten the traction it deserved we wouldn't need matrix.

Why not matrix?

You're going off-topic from the OP question :-) But to answer your new question : I do not trust Matrix enough when it comes to privacy. I know that this link is old but still. https://disroot.org/en/blog/matrix-closure

Then again I do not trust Signal that much either but sometimes compromises need to be made to get things done. With XMPP the end user can host their own server if they wish to, without meta data going to a centralized point. And video calls via XMPP and Conversations were a pleasure to use when I used it during the Covid-19 pandemic.

IoT devices shouldn't connect to Wi-Fi. Z-Wave or Zigbee is much better suited to IoT stuff, but it seems to mostly get adopted in very limited, locked-down proprietary shit like Hue lights.

There's only one case I've found where Wi-Fi use seems acceptable in IoT: ESPHome. It's open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

I still wouldn't call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn't exactly suited for a battery-powered device that's expected to run 24/7 regardless.

Yes but at least Hue (and IKEA and LIDL and many other brands') lights work well with open Zigbee coordinators, like deconz and ZHA in Home Assistant.

I wish there were more Zigbee and Z-Wave and fewer Wi-Fi IoT devices too. I don't even have a Z-Wave coordinator because I never found anything I wanted with Z-Wave support.

LaTeX. As someone in academia, I absolutely love it. It has some issues like package incompatibility, but it's far far better than anything else I've used. It's basically ubiquitous in academia, and I wish it were the case everywhere else as well.
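For anyone who hasn't seen it, this is roughly all it takes to get properly typeset math; a minimal document:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
The roots of $ax^2 + bx + c = 0$ are
\begin{equation}
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\end{equation}
\end{document}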

What about Typst?

The Typst compiler is open source. It is the open core of the web app and we will develop and maintain it in cooperation with the community

Try Typst now!

Create a free account to join the public beta.

Beta software marketing with "free accounts" and an open core compiler for a (probably) future paid web service tells me all I need to know.

Even though LaTeX has issues, not being an online service is not one of them.

They host a proprietary service that does all the stuff; the compiler and spec are completely FOSS. So you need to create your own implementation, which is not hard.

I don't think they will close-source the compiler. And that's basically everything that's needed?

I have 0 problems with people creating a fancy proprietary implementation to get people hooked. I will never use an online editor, but why care?

Learning LaTeX and working around its quirks seems like a much better time investment than sidegrading to something that lives on premises given by a proprietary commercial project. If someone saw LaTeX and said "I want to make some version of this that is better", without ulterior motives, they would probably just work on improving LaTeX (which a whole lot of people do).

Fancy does not mean better, and often is in many ways worse than plain old boring.


or you could also just make an open source wrapper for latex and call it a day.

Nothing needs to be closed source to get people to use it.

And it isn't :D The compiler produces PDFs, which can be read with anything. The spec is open, so you can write the code with any editor.

Just needs integration, will see if I can add the syntax highlighting to Kate

i suppose that's the case, but if you ever partially open source something, i think you're probably trying a little too hard.


It's not a standard, but it's still interesting software, so I'll post this here:

Joking aside, I love and hate it. Its paradigm is almost like using the C preprocessor to build a really awkward Turing-machine. TeX/LaTeX does a great job of what it was intended to do; it applies high quality typesetting rules to complex material and produces really good results. I love the output I can get with it and I will be eternally grateful that Donald Knuth decided to tackle this problem. And despite my complaints below, that gratitude is genuine. Being able to redefine something in a context-sensitive way, or to be able to rely on semantics to produce spacing appropriate to an operator vs a variable etc; these are beautiful things.

The problem is, at least once a day I'm left wishing I could just write a callable routine in a normal language with variables, types, arrays, loops and so on. You can implement all those things in TeX, but TeX doesn't have a normal notion of strings, numbers or arrays, so it is rare that you can do a complicated thing in an efficient way, with readable code. So as a language, TeX frequently leads to cargo-cult programming. I'm not aware that you can invoke reflection after a page is output, to see what decisions on glue and breaks were made; but at the same time you can't conditionally include something that is dependent on those decisions, since the decision will depend on what is included. This leads to some horrible conditionals combined with compiling twice, and the results are not always deterministic. Sometimes I find it's quicker to work around things like that by writing an external program that modifies the resulting PDF output, but that seems perverse.

At the same time, there's really nothing else out there that comes close to doing what LaTeX does, and if you have the patience, the quality of documents it can produce is essentially unbounded. The legacy of encodings, category codes, parameter limits, stack limits etc. just makes it very hard for package writers, and consumes a great deal of time for a lot of people. But maybe I am picky about things that a saner person would just live with.

A lot of very talented people have written a lot of very complex packages to save the user from these esoteric details, and as a result LaTeX is alive and well, and 99% of the time you can get the results you want, using off-the-shelf parts. The remaining 1% of the time, getting the result you want requires a level of expertise that is unreasonable to expect of users. (For comparison, I wrote an optimising C compiler and generally found it far easier to make that work as expected, than some of the things I've tried, and failed, to do properly in LaTeX. I now have a rule; if getting some weird alignment to work takes me more than an hour, I just fake it with a postscript file, an image, or write an external program to generate it longhand, in order to save my sanity.)

I think (and certainly hope) that LaTeX is here to stay, in much the same way that C and assembly language are. As time moves forward I think we'll see more and more abstractions and fewer people dealing with the internals. But I will be forever grateful to the people who are experts in TeX, and who keep providing us with incredible packages.

I honestly just use it for my resume with a template I found, so my knowledge is extremely basic, but I really do love the concept that I can “compile” and actually see the source of my document’s formatting.


Is it practical outside of academia? I heard the learning curve is kinda big

Nope and yep. It's an incredible tool, but it's got a vim-sized learning curve to really leverage it plus other significant drawbacks. Still my beloved one-and-only when I can get away with it, but its a bit of a masochistic acquired taste for sure.

Template tweaking, as I imagine academia heavily relies on, is really the closest to practical it gets. You do still get beautiful results, it's just hard to express yourself arbitrarily without really committing to the bit.

Outside of academia, would you say it still provides significant upside over markdown?

Markdown and LaTeX are meant for entirely different purposes. It's somewhat analogous to HTML vs PDF. While it's possible to write books with Markdown, it's a vastly inferior solution compared to latex or typst (for fixed format docs like books).

It’s got a vim-sized learning curve to really leverage it

As a regular vim user, I have to say: Vim makes sense after you put some effort into learning it. I can't say the same about LaTeX.

For me it's more pleasant than editing formulae in LO, but still took a lot of time.

I wrote my master's in LaTeX, and while I appreciate the structuredness and the fact I could use vim, it was so quirky. Having to spend half an hour to fix a non-obvious compile error, more than once, was a big distraction. I'm sure it gets better when you use it more, but I don't think I have ever used it since. I'm not in academia, and I don't need to solve compile problems when creating an invoice or writing a letter to local government.


I was actually surprised to find out QUIC is fairly close to being default.

Wikipedia

HTTP/3 uses QUIC, a multiplexed transport protocol built on UDP.

HTTP/3 is (at least partially) supported by 97% of tracked web browser installations (including 98% of "tracked mobile" web browsers), and 29% of the top 10 million websites.
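If your curl was built with HTTP/3 support, you can check what a given site negotiates yourself; a small sketch driving it from Python (the --http3 flag only exists in HTTP/3-enabled builds):

import subprocess

for url in ("https://www.cloudflare.com/", "https://example.com/"):
    out = subprocess.run(
        ["curl", "--http3", "-sI", "-o", "/dev/null",
         "-w", "%{http_version}", url],
        capture_output=True, text=True,
    )
    print(url, "negotiated HTTP/" + out.stdout.strip())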

  • Communication: Matrix
  • Browsing: I2P
  • Communities: ActivityPub / Mastodon
  • Software Forge: Forgejo + ForgeFed
  • OS: Linux
  • Money: Monero

Since they meet at least one, if not all, of the following:

  • Decentralized / Federated
  • Censorship resistant
  • Privacy respecting
  • Open source

I2P. Current protocols should go through it

Anonymous lemmy, anonymous torrents, anonymous IPFS, anonymous eMule, anonymous streaming, anonymous source forges, anonymous chats, anonymous everything...

Imagine unbridled, anonymous, mainstream piracy, software development, sitehosting, communication, social media.

Anti Commercial-AI license

i2p is pretty cool. One of the more interesting projects out there. Like Tor, though I'm partial to the weird ones.

There is also GNUnet, which seems to be in perpetual development; perhaps one day that will see something interesting happen.


Remember SOAP? Remember XML-RPC? Remember CORBA?

Those were not very good.

I had to do some SOAP integration last year, and it feels like it only got worse with age.

I've worked with all of them and hate all with a passion. SOAP wasn't bad in theory but lots of APIs and clients didn't implement it properly.
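For younger readers who never had the pleasure: XML-RPC is still in the Python standard library, and a client looks like this (the server URL and method are made up):

from xmlrpc.client import ServerProxy

# Hypothetical endpoint; XML-RPC servers conventionally live at /RPC2.
proxy = ServerProxy("http://legacy.example.org/RPC2")

# Every call becomes an XML document POSTed over HTTP.
result = proxy.blog.getRecentPosts("myblog", 5)
print(result)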

Peer-to-peer. I would be happier thinking that every time I open some application, I'm helping it, like I2P.

Ever heard of IPFS? I really hope that will take off some time.

Unfortunately, the reality of IPFS is that despite its huge funding it was poorly designed from the start, and still to this day has much slower loading times than my i2pd instance (despite I2P transmitting messages through multiple encrypted proxies), to the point where the company working on the Rust implementation determined it was so bad they had to scrap the whole thing to make something that actually worked. Not to mention that I managed to have my server taken over by some kind of malware by downloading a particular piece of content.

Thanks, that was an interesting read! I always felt IPFS wasn't ready yet, but the value it tries to provide of being a file system, I've found no real alternative to. Very good to read that iroh is willing to look beyond the IPFS spec to provide its values with better performance. I hope it works out.

Others have said already, but XMPP and RSS. Also, nobody mentioned NNTP yet.

I wish everything was accessible by NNTP and we had better NNTP clients. NNTP is like RSS but for forums (so, Lemmy, Reddit, or anything where you could reply to posts). Download for offline reading, read in your client, define your own formatting, sorting, filtering, your client, your rules.

If Lemmy was accessible via NNTP, I could just download all posts and comments I'm interested in and reply to them without any connection, and my replies would get synced with the server later when I connect to WiFi or something.
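The client side of NNTP is refreshingly simple, too; Python shipped an nntplib module for decades (it was dropped from the standard library in very recent versions). A sketch listing the latest posts in a group (the server and group names are placeholders):

from nntplib import NNTP  # stdlib in older Pythons; removed in 3.13

with NNTP("news.example.org") as srv:           # hypothetical server
    resp, count, first, last, name = srv.group("example.discussion")
    print(f"{name}: {count} articles")

    # Fetch overview data for the ten newest articles.
    resp, overviews = srv.over((last - 9, last))
    for art_num, over in overviews:
        print(art_num, over["subject"])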

Probably it would be better to edit my comment, but I'll go with a reply to myself.

To all fans of RSS: there's this service called FeedBase that is essentially an RSS-to-NNTP gateway. You add your RSS feed to it and it becomes a newsgroup on their server, and you can subscribe to it using any NNTP client. New articles appear as new posts in that newsgroup, and you can post your own replies to them. So, you get RSS but with discussions or comments.

https://feedbase.org/

If you try this, let me know what RSS feeds you're reading, so we could read the articles together and have some discussion there!

P.S. This comment is not an ad. I genuinely love feedbase and use that myself.

Holy cow, that's neat as hell! Thanks for sharing!

Back in the day I was a big Usenet fan. What's the modern solution to the spam issue? At the time, folk wisdom was that the demise was being caused by spam, and that due to the nature of the protocol it was somewhat unsolvable.

I also wonder to what extent ActivityPub is the barrier to offline use? For Reddit, the Slide client had offline reading and, IIRC, posting. I have been disappointed it isn't available for Lemmy. My guess has been it simply isn't a priority for the devs. Maybe eventually we will get it.

I think it would be cool if RSS got put into Lemmy clients. For example, you could make a unified inbox for all accounts by automatically getting the private RSS feed for incoming messages for all logged-in accounts. I have manually set this up a couple of times, but it's tedious, and it completely lacks smoothness when it comes to clicking a link, replying, etc. But a client could add a little finesse to fix that.


Why should this be at the editor level? There should be a linter that applies all these stylistic formatting changes to all files automatically. If the developer's own editing tools or personal workflow have a chance to introduce non-standard styles to the codebase, you have a deeper problem.

Why should this be at the editor level?

Because for every programming language there'll be people using text editors, but you'll never succeed in even creating code formatters for them all.

The greatness in this project is in aiming low and making things better through simple achievable goals.


RFC 2549 is such an important improvement over RFC 1149. Everyone should adopt the updated standard.

Can you please explain what this is?

They are humorous IETF standards published on 1 April over the years. These are specifically about implementing internet protocols using carrier pigeons instead of more traditional media like wires or optical fiber.

Look at the date of the linked RFC documents...

We should definitely be switching to the specification in RFC 6214. IPoACv6 is the latest standard.

You are absolutely correct. If your network supports IPv6, 6214 is definitely a requirement

XMPP

Why is that preferable over Matrix?

It's kinda more responsive than Matrix for me.

Yeah, my experience with Element and a Matrix.org account is that it's sluggish. However, it's been better with Beeper, so I'm uncertain whether it's intrinsic to Matrix or merely Matrix.org and/or Element's servers.


I wish people used email for chat more. SMTP is actually a pretty great protocol for real time communication. People think of it as this old slow protocol, but that’s mostly because the big email providers make it slow. Gmail, by default, waits ten seconds before it even tries to send your message to the recipient’s server. And even then, most of them do a ridiculous amount of processing on your messages that it usually takes several seconds from the time it receives a message to the time it shows up in your account.

There’s a project called Delta Chat that makes email look and act like a chat app. If you have a competent email service, I think it’s better than texting. It doesn’t stomp on the images you send like SMS and Facebook do, everyone has it unlike all the proprietary services, and you can run your own server for it that interacts with everyone else’s servers.

Unfortunately, Google, Microsoft, etc all block you if you try to run your own server “to protect against spam”. Really, I’m convinced that’s just anticompetitive behavior. The fewer players are allowed to enter the email market, the less competition Gmail and Outlook will have.

As much as I like ProtonMail too, unfortunately their encryption model prevents it from working with Delta Chat. I’d love to see Proton make a compatible chat app that works with their service.

I made an email service called Port87 that I’m working on making compatible with Delta chat too. I’d love to see people using email the way it was originally meant to be used, to talk to each other, without being controlled by big businesses.
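And the protocol itself really is simple enough for chat-like use; sending a message with Python's standard library is a handful of lines (the server, account, and addresses below are placeholders):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "friend@example.net"
msg["Subject"] = "ping"
msg.set_content("SMTP as a chat transport, why not?")

# Submit over the standard submission port with STARTTLS.
with smtplib.SMTP("mail.example.com", 587) as s:
    s.starttls()
    s.login("me@example.com", "app-password")
    s.send_message(msg)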

The delay is there because email has no deletion support.

And a host of other shortcomings.

I'd rather we replaced email with matrix

If you’re relying on the remote server to delete something, you can’t trust it no matter what protocol you’re using.

For a regular email, the chance to undo might be fine, but for real time communication, it’s just an unnecessary road block.

Maybe if it was optional per recipient, or per conversation, or better yet, depending on the presence of a header, it might be fine. Gmail only supports all-on or all-off.

If you’re relying on the remote server to delete something, you can’t trust it no matter what protocol you’re using.

I mean yeah I wouldn't bet my life on it, but for the 99% of regular communication it's fine. That's no reason to not have it in the protocol and muck around with 10 second delays instead.

Oh, another awesome thing about email is that you can ensure that your address is always yours, even if you use an email service provider like Gmail. Any provider that supports custom domains will allow you to use your own domain for your address, then if you want to change your provider, you keep your address. So, since I own hperrin.com, I can use the address me@hperrin.com, and I know it’ll always be mine as long as I pay for that domain.

This is a much better model than anything else. Even on the fediverse, you can’t have your own address unless you run your own instance.

If your email service provider goes out of business or gets sold off (skiff.com, anyone?), as long as you’re on your own custom domain, your address is still yours.

I’m working on custom domains for Port87. It’s definitely a feature I think every email provider should offer.

Yes, I shifted to my own domain after my default ISP of 20 years decided that email was just too hard, you know? They didn't outright say it, they just started batch processing emails so that I'd get all my daily emails at around 2 am the next day. Super handy for time limited password reset emails!

A few hours reading a guide and setting up a $5/mo Linode email server with SPF and DMARC, a few more hours transferring 20 years of IMAP mail from my old account to a folder, and a month or so of changing a few site contact emails over each day when they emailed something to my old account, and now I've got an email server on my own domain that is 10 times faster at sending/receiving mail than my old ISP ever was.

And now I can have amazon@mydomain.com and career@mydomain.com and random other disposable addresses so that when they are inevitably sold off for the $$$ I can just dump them and maintain a spam free inbox.
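Checking that the SPF and DMARC records actually landed is easy to script too; a sketch with the third-party dnspython package (the domain is a placeholder):

import dns.resolver  # third-party package: dnspython

domain = "mydomain.com"  # placeholder

# SPF lives in a TXT record on the domain itself.
for rdata in dns.resolver.resolve(domain, "TXT"):
    txt = b"".join(rdata.strings).decode()
    if txt.startswith("v=spf1"):
        print("SPF:", txt)

# DMARC lives in a TXT record at _dmarc.<domain>.
for rdata in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
    print("DMARC:", b"".join(rdata.strings).decode())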

SMTP is actually a pretty great protocol for real time communication.

remembers greylisting is a common thing

Yes, I mentioned that. That’s not a protocol issue, that’s a big business controls everything issue.


Honestly, IRC was a very functional, easy, free, low-resource and privacy-friendly chat protocol, and I don't really see why it got left behind. If you wanted image/file support, that could really be implemented client- and/or server-side.

IRC is still in use by several open source projects, and it can be nice for quick and open public chatting, for meeting people, and for asking questions.
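Part of why it stuck around so long is that the protocol is almost humane to speak by hand; a bare-bones client fits in a few lines of Python (the server, nick, and channel are placeholders):

import socket

# Plain-text IRC for brevity; real use should wrap the socket in TLS (port 6697).
sock = socket.create_connection(("irc.example.org", 6667))
sock.sendall(b"NICK demo_bot\r\n")
sock.sendall(b"USER demo_bot 0 * :demo bot\r\n")
sock.sendall(b"JOIN #test\r\n")
sock.sendall(b"PRIVMSG #test :hello from a dozen lines of code\r\n")

# Answer server keepalives so we don't get dropped.
while True:
    data = sock.recv(4096).decode(errors="replace")
    if data.startswith("PING"):
        sock.sendall(data.replace("PING", "PONG", 1).encode())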


i wish all the big players would agree on one of the many open chat and IM protocols. it's like kindergarten where the toddlers don't want to share toys

Was it really back in 2009 that both Google and Facebook used XMPP compatible chats? Those were the days.

I was the cool guy with all the chats in one place: Pidgin.

We had the future in our hands but our corporate platform overlords made a wrong turn at Albuquerque.


TeX. I was able to use it during school for some beautiful typesetting and formatting, but nobody I work with wants to use anything other than plain text or, unfortunately more commonly, binary WYSIWYG editor formats. It's frustrating and ugly.

One way I do TeX now is with a few templates (letter, memo, etc.), and I circulate files as PDF.

If you must use Word to circulate files, consider Pandoc as a way to get them out.

What WYSIWYG binary formats have you been using? OpenDocument is zipped XML. OOXML is also zipped XML. RTF is plain text. Everything else is dead. RTF is too, actually.


I'd love to see more adoption of... I2C!

Bazillions of motherboards and SBCs support I2C and many have the ability to use it via GPIO pins or even have connectors just for I2C devices (e.g. QWIIC). Yet there's very little in the way of things you can buy and plug in. It feels like such a waste!

There's all sorts of neat and useful things we could plug in and make use of if only there were software to use it. For example, cheap color sensors, nifty gesture sensors, time-of-flight sensors, light sensors, and more.

There's lmsensors which knows I2C and can magically understand zillions of temperature sensors and PWM things (e.g. fan control). We need something like that for all those cool devices and chips that speak I2C.
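For anyone who wants to poke at this from userspace, the kernel already exposes the buses as /dev/i2c-*; talking to a device is a few lines with the third-party smbus2 package (the bus number, device address, and register below are hypothetical and depend entirely on your hardware):

from smbus2 import SMBus

BUS = 1          # /dev/i2c-1, e.g. the GPIO header bus on many SBCs
ADDR = 0x29      # hypothetical sensor address
WHO_AM_I = 0x0F  # hypothetical ID register

with SMBus(BUS) as bus:
    chip_id = bus.read_byte_data(ADDR, WHO_AM_I)
    print(f"Device at 0x{ADDR:02x} reports ID 0x{chip_id:02x}")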

I2C is a bit goofy though. As a byproduct of being an undiscoverable bus you basically just have to poke random addresses and guess what you're talking to. The fact lmsensors i2c detection works as well as it does is a miracle. (Plus you get the neat issue where even the act of scanning the bus can accidentally reconfigure endpoints)

Yeah, the lack of proper discoverability on I2C truly sucks. You have to just poke random addresses and hope for the best to see if an I2C device exists on the bus. It's a great standard, but I wish it would get updated with some sort of plug-and-play autodetection feature. A standardized device PID/VID system like USB and PCI would be acceptable, or a standardized register that returns a part string. Anything other than blindly poking registers and hoping you're not accidentally overvolting the CPU or whatever because the register on your expected device overlaps with the "overvolt the CPU" register at the same address on a different device.

I'm curious. There were some I2C-connected memory devices before. Is there some forgotten spec that allows for a flexible device lookup / logging capability? Something that acts like device tree but stays specific to the bus? It wouldn't be practical for a lot of applications, but I could see it being useful for some niche stuff.

If you have an unused VGA port, you can use the DDC pins for I2C. Be sure to add ESD protection if you do this. An I2C isolator would be even better.

I2C is really not meant to be used over cables. It has a very limited common mode input voltage range and it can't handle much capacitance on the bus.

Except that in the case of VGA (and DVI, HDMI, and DisplayPort) the i2c interface is intended for use over the cable. All of those ports have a pair of i2c pins and corresponding wires in their cables. The i2c interface is used for DDC/EDID which is how the computer can identify the capabilities and specifications of the attached display. DDC even provides some rarely-used control functionality. Probably the most useful of which is being able to control the brightness of the display from software. I use the ddcci module on Linux and it lets me control my desktop monitor brightness the same way a laptop would, which is great. I have no idea why this isn't widely used.

Edit:

This i2c interface is widely used to control the lighting on modern graphics cards that have RGB lighting. We've spent a lot of time reverse engineering these chips and their i2c protocols for OpenRGB. GPU chips usually have more i2c buses than the cards have display connectors, so the RGB chip is wired to one of the unused buses. I think AMD GPUs tend to have 8 separate i2c buses but most cards only use 4 or 5 of them for display connectors. There is also an i2c interface present on RAM slots normally used for reading the SPD chip that stores RAM module specifications, timings, etc. This interface is also used for RAM modules with controllable RGB lighting.

I sincerely wish all of my messages were delivered to me by an owl holding a scroll.

SimpleX. No federated messenger is good for privacy.

But I see how SimpleX is impossible for public groups, as spam is basically unavoidable.

SimpleX was nice when I used it for small chats, but how is it with large groups? Matrix really drags ass with large rooms, even on native clients.

Never tried that. For sure it lacks spaces, threads, moderation, invite links, requests etc.

Correct me if I am wrong


PGP/GPG. I would like to see the web of trust take off. Also, I love the aesthetic of anything that's been signed, and would like to see blog posts everywhere be bookended by long blocks of random symbols.
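Clearsigning is also about as simple as the aesthetic suggests; the gpg CLI does it in one command each way, so even a script can produce those blocks (the key and file names are placeholders):

import subprocess

# Clearsign a post: the output wraps the text in the familiar
# -----BEGIN PGP SIGNED MESSAGE----- / -----BEGIN PGP SIGNATURE----- blocks.
subprocess.run(["gpg", "--clearsign", "--output", "post.txt.asc", "post.txt"], check=True)

# Anyone with the author's public key can verify it.
subprocess.run(["gpg", "--verify", "post.txt.asc"], check=True)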

PGP has a bunch of limits (and I'm saying that as a cryptography nerd). We've learned a lot of things since the 90's and the better solutions are specialized encryption protocols like MLS / Matrix (E2EE group messaging) and running all kinds of other protocols on top.

The portable identity part of PGP can be handled by something like DID documents which works more like Keybase used to do (depending on specific implementation) where your declare a list of supported protocols with public keys and accounts under your control, so people can still achieve the same effect of using a strong cryptographic identifier to communicate with you, but with forward secrecy supported by default and much lower risk of stuff like sidechannel attacks.


FTP

Seriously guys, let's share files the old fashioned way. Without bullshit.

I'd like to interject for a moment. What you're referring to as FTP is, in fact, smelly hot garbage.

For context, I wrote this while waiting for a migraine to pass. I was angry at my brain for ruining my morning, and I like to shit on FTP. It's fun to be hyperbolic. I don't intend for this to be an attack on you, I was just bored and decided to write this ridiculous rant to pass the time.

I must once again rant about FTP. I've no idea if you're serious about liking it or you're just taking the piss, but seeing those three letters surrounded by whitespace reminds me of all the bad things in the world.

FTP is, as I've said, smelly hot garbage, and the infrastructure built to support FTP is even worse. Why? Well, one reason is that FTP has the most idiotic networking model conceivable. To see how crazy it is, let's compare to a more sane protocol, like HTTP (for simplicity's sake, I'll do HTTP/1.1). First, you get the underlying transport protocol stuff and probably SSL. The HTTP client opens a connection from some local ephemeral port to the destination server on port 80/443/whatever and does all the normal protocol things (so syn->synack->ack and Client Hello -> Server Hello+server cert -> client kex+change cipher -> change cipher -> encrypted data). FTP does TCP too! Same same so far (minus SSL, unless you're using FTPS). Next, the HTTP client goes like this:

GET /index.html HTTP/1.1
Host: www.whatever.the.fuck
# a bunch of other headers

and you know what fucking happens here? The fucking server responds with the data and a response code on the same goddamn TCP connection. You get a big, glorious response over the nice connection you established:

200 OK
# a bunch of headers and shit

HERE'S YOUR DAMN DATA NERD

So that's nice, and the client you're using to read this used that flow (or an evolution of that flow if you're using HTTP/2 or HTTP/3). So what does FTP do? It does one of two really stupid things depending on whether you're using active or passive mode. Active mode is the default for the protocol (although not the default for most clients), so let's analyze that! First, your FTP client initiates a TCP connection to your server on port 21 (by default), and then the server just sends this:

<--- 220 Rebex FTP Server ready.

ok, that kinda came out of nowhere. You're probably using a modern client that saves you from all of the godawful footguns, so it then asks the server what it supports:

---> FEAT
<--- 211-Supported extensions:
<---  AUTH TLS;SSL;
<---  CDUP
<---  CLNT
# A whole bunch of other 4 letter acronyms. If I was writing an FTP server, I'd make it swear at the user since there are a lot of fun 4 letter words

There's some other bullshit we don't care about right now, although highlights include sending the username and password in plain text. There's also ASCII vs binary mode. WE'LL GET BACK TO THAT. :|

So then we want to do a LIST. You know what happens in active mode? Your computer opens up some random fucking TCP port. It then instructs the FTP server to CONNECT TO YOUR GODDAMN COMPUTER. Your computer is the server, and the other side is now the client. I would post a more detailed overview of the FTP commands, but most servers on the internet disable active mode because it's a goddamn liability. All of the sudden, your computer has to be internet facing with open firewall ports, and that's just a whole heap of shit.

I'm probably not blowing many minds right now because people know about this shit. I just want to mention that this is how FTP was built. The data plane and control plane are separate, and back in 19XX when this shit was invented, you could trust your fellows on ARPANET and NAT didn't exist and sure HAM radio operators here's the entire goddamn 44.0.0.0/8 block for you to do packet switched radio. A simple protocol for simple times, back before we knew what was good and what was bad.

So, active mode sucks! PASV is the future, and is the default on basically all modern clients and servers! Passive mode works exactly the same as the above, except when the client goes to LIST, the server opens some random TCP port (I've often seen something like 44000-44010) and tells the client, "hey you, connect to 1.2.3.4:44000 to get you your tasty data." Sounds great, right? Well, there's a problem that I actually touched on in my last paragraph. Back when this dogshit was first squeezed out in the 70s, everyone had a public address. There were SO MANY addresses! 4 billion addresses? We'll never use all of those! That is clearly not the case anymore. We don't have enough addresses, and now we have this wonderful thing called NAT.

Continued in part 2.

PART 2.

NAT, much like the city of Phoenix, is a monument to man's arrogance. Fuck NAT and fuck FTP. If your FTP server is listening directly on a public IP address hooked up directly to a proper router, then none of this applies. If you're anything like me, the last company I worked for (a small startup), or my current company (many many thousands of employees making software you know and may or may not hate, making many billions of dollars a year), then the majority of your servers are living in RFC1918 space. Traffic from the internet is making it to them via NAT (or NAT with extra steps, i.e. L4 load balancers).

A request comes in for $PUBLIC_IP TCP port 21 and is forwarded to your failure of a boxen at 10.0.54.187. Your FTP server is a big stupid idiot and doesn't know this. It thinks that it's king shit and has its own public IP address. Therefore, when it's deciding what ADDR:PORT it's going to tell the stupid FTP client to connect to, it just looks at one of the adapters on the box and says "oh, I'll tell this client on the internet to connect to 10.0.54.187:44007" and then I fucking cry. The FTP client is an idiot, but the IP stack on the client's home/business router is not and says "oh, that's an address living in RFC1918 space, I shouldn't send that out over the internet" and they don't get the results of their LIST.

So, how do you fix this? Well, you fix it by not using FTP. Use SFTP USE SFTP USE SFTP FOR GOD'S SAKE. But since this world is a shit fucking place, you have two options. The best option is to configure your FTP server to lie about its IP address. Rather than being honest about what a fool it is, you can tell it to send your public IP address to the client rather than the network adapter IP address. Does your public IP address change? Fuck you, you get to write a daemon that checks for that shit, rewrites your FTP server config, and HUPs the bastard (or SIGTERMs it if your server sucks and can't do a live config reload).

Let's say that you don't want to do that. Let's say you work at a small company with a small business internet plan that gives you static IPs but a shitty modem. Let's say that you don't know what FTP is or how it works and your boss told you to get it set up ASAP and it's not working (because the client over in Bendoverville Arkansas is being told to connect to a 10.x.x.x address) and it surely must be your ISP's fault. So you call up Comcast Business/AT&T/Verizon/Whoeverthefuck and you complain at their technicians for hours and hours, and eventually you get connected to a human that knows what the problem is and tells you how to configure your stupid FTP server to lie like a little sinner. The big telco megacorps don't like that. They don't want to waste all those hours, and they don't want to hire too many people who can figure that shit out because it's expensive. You wanna know what those fucking asshole companies did?

Continued in part 3.

PART 3.
They made their STUPID MODEMS FUCK WITH THE FTP PACKETS. I have personally experienced this with Comcast Business. The stupid piece of shit DOCSIS modem they provide intercepts the FTP packet from your server saying "oh, connect to this address: x.x.x.x:44010" and they rewrite the fucking address to the public IP. There is no way to turn just this horse piss off. Now, for average business customers, this probably saved Comcast a bunch of money in support calls. However, if you're using the so-called bridge mode on that degenerate piece of shit-wrapped-silicon (where rather than allowing the modem to give you a DHCP address, you just configure your system to have one of the addresses in the /29 space and the modem detects that and says oh okay don't NAT traffic when it's going to this address, just rewrite the MAC and shunt it over the right interface), then something funny happens. The modem still rewrites the contents of the packet, but it uses the wrong fucking IP address! Because the public IP that your server is running on is no longer available to the modem, the modem just chooses another fucking address. Then, the client tries to connect to 1.2.3.5 instead of 1.2.3.4 where your server is listening, the modem says "hey I'm 1.2.3.5 and you can fuck off, I'm dropping your SYN for port 44010", and I get an angry call from the client asking why they can't download their files using this worthless protocol. I remember having a conversation like this:

Me: "Just use SFTP on port 22!"
Client: "No! FTP is faster/more secure/good enough for my grandfather good enough for me/corporate won't allow port 22."
Me: "Comcast is fucking me right now. What if we lied and served SFTP over port 21?"
# we try it
Client: "It's not working! I can't even connect!"

I couldn't connect either. I couldn't connect to anything. Trying to do SFTP over port 21 caused the stupid fucking modem to CRASH.

Are you starting to see what the problem is? It's like Microsoft preserving bugs in Windows APIs so that shitty software doesn't break, and then they end up doing crazy gymnastics to accommodate old shit like the Windows 8 -> Windows 10 thing where they couldn't use "Windows 9" because that would confuse software into thinking it was running "Windows 95" or "Windows 98". FTP has some bugfuck crazy design decisions that we've collectively decided to just "work around," and it leads to fucking gymnastics.

Speaking of bugfuck crazy design decisions, FTP's default file transfer mode intentionally mangles data!

Continued in part 4.

PART 4.

You expect a file transfer program to reliably and faithfully transfer your files, byte-for-byte, from one system to another. FTP spits in your face and shits on your chest. You know how Linux uses LF (i.e. \n) for newlines and Windows uses CRLF (i.e. \r\n) for newlines? Pretty annoying, right? Well, FTP's ASCII mode will automatically rip off those \r characters for you! Sounds pretty sweet, right? Fuck no it's not. All of the sudden, your file checksums have changed. If you pass the same file back to a Windows user with a different and more sane file transfer system, then they get a broken file because FTP didn't mind its own fucking business. If you have a CRLF file and need an LF file, just explicitly use dos2unix. Wanna go the other way? unix2dos. The tool has been around since 1989 and it's great.

Now, what if you're not transferring text, but instead are transferring a picture of a cute cat? What if your binary data happens to have 0x0D0x0A somewhere in it? Well, ASCII mode will happily translate that to 0x0A and fucking ruin your adorable cat picture that you were going to share with your depressed significant other in an attempt to cheer them up. Now the ruined JPEG will remind them of the futility of their situation and they'll slide even deeper into cold emptiness. Thanks, FTP.

You can tell your client to use binary mode and this problem goes away! In fact, modern clients do this automatically so your SO gets to see the adorable fuzzy cat picture. But let's just stop and think about this. Why use a protocol that is dangerous by default? Why use a protocol that supports no form of security (unless you're using fucking godawful FTPS or FTP over SSH)? Why use a protocol that is so broken by design that small business hardware has been designed to try to unfuck it? Is it faster? I mean, not really. SFTP has encryption/decryption overhead, but your CPU is so fast that you'd need to transfer at 25+ Gb/s to notice it. Is it easier? Fuck no it's not easier, look at all of the stupid footguns I've just mentioned. Is it simpler? The line protocol is simple, but so is HTTP, and HTTP has a much simpler control flow path (merging the data and control planes is objectively the right thing to do in this context). And shit, you want a simple protocol for cases where you don't have a lot of CPU power? Use fucking TFTP. It's dogshit, but it was intentionally designed to be dogshit so that a fucking potato could receive data with it.

There is no task that is currently being done with FTP that couldn't be done more easily, more securely, and more quickly with some other protocol (like fucking SSH and SFTP, which is now built into fucking Windows for god's sake). Fuck FTP.
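And to be concrete about the replacement: an SFTP transfer with the third-party paramiko library is this much code (the host, credentials, and paths are placeholders), and it rides over a single encrypted SSH connection with none of the active/passive nonsense:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in real use
client.connect("files.example.org", username="me", password="hunter2")

sftp = client.open_sftp()
sftp.put("cat.jpg", "/upload/cat.jpg")   # bytes arrive exactly as sent, CRLF and all
sftp.get("/download/report.pdf", "report.pdf")
sftp.close()
client.close()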

Have you considered publishing that as a book? (/s)

You are insane... in a good way. I love it. Fantastic read and I had to chuckle a few times.

I'm glad that my grumpy migraine ramblings brought someone some joy!

I read the first two and kinda gave up my dude. Here's my deal. I get that it's bad under the hood. What else can I use that lets me and a friend pretend we just have folders in each other's computers with just a port forward, IP, and a password?


In that case, I'd like to chime in and add NFS to this list. The often overlooked jewel of the glorious past days. /j

So like... If I had a game installed on your computer, my computer could treat that game as if it's local and load files over the Internet like it's just reading my disk?

That is cool as fuck.


I used to have an open public SIP address that would ring a home phone, complete with a retro answering machine, but nobody uses SIP...

[…] nobody uses SIP…

Say what?

In my part of the world signaling for literally every phone call, be it mobile or fixed, traverses networks and operators using SIP.

Yeah, I mean nobody uses SIP as an open protocol with email-like addresses. You could call me with an unregistered softphone. It would have been way cooler if I had any use for it outside of like two other nerd friends of mine who run personal Asterisk servers.

finger cyclohexane@lemmy.ml

UDS, shm, FUSE for IPC. INI for configs.

what language am i reading?

Unix domain sockets, shared memory (classic and/or over anonymous file descriptors), Filesystem in Userspace, the (MS) INI format.

Was going to sleep when i wrote that.
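Unix domain sockets in particular are underused for local IPC given how little code they take; a minimal echo pair sketched in Python (the socket path is arbitrary):

import socket

PATH = "/tmp/demo.sock"  # arbitrary path

# Server side (run in one process):
#   srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
#   srv.bind(PATH); srv.listen(1)
#   conn, _ = srv.accept()
#   conn.sendall(conn.recv(1024))   # echo back whatever arrives

# Client side:
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(PATH)
cli.sendall(b"hello over a unix socket")
print(cli.recv(1024))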

Idk, I'm fine with YAML/JSON + JSON Schema. A JSON SCHEMA IS A REQUIREMENT for a good config, in my opinion.
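Agreed that a schema does most of the heavy lifting; validating a config at startup is a couple of lines with the third-party jsonschema package (the schema and config here are toy examples):

import jsonschema

schema = {
    "type": "object",
    "properties": {
        "port": {"type": "integer", "minimum": 1, "maximum": 65535},
        "log_level": {"enum": ["debug", "info", "warning", "error"]},
    },
    "required": ["port"],
}

config = {"port": 8080, "log_level": "info"}

# Raises jsonschema.ValidationError with a readable message if the config is wrong.
jsonschema.validate(instance=config, schema=schema)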

IPFS. I'm really glad things like nerdctl and Guix support it, but I wish more things took advantage of the P2P filesystem.

Petals.dev and Hivemind P2P AI inference and training seem like the only truly viable options for foundational models that aren't owned solely by authoritarian governments and megacorps.

Matrix for federated general real-time communication. (Not just chat, video, and images, but arbitrary data, with Third Room being one of the cooler demos of what is possible.)

ActivityPub for asynchronous communication between servers. The social media aspect is obviously the focus and the most mature, but I'm also excited for things like Forgejo (Codeberg.org) and GitLab's support.

I am also excited for QUIC for increased privacy of metadata and reduction of network trips.

The problem with IPFS is that kubo sucks. I used it for a while, and it is always burning CPU doing nothing, failing to publish anything to the DHT, and fetching files is slow. GC collects files that are supposed to be "pinned" by existing in MFS, and there are so many other bugs all of the time.

I would love to see a new server take off that was built with quality in mind.

I think the core protocol is pretty good (although I think they should have baked in encryption) but even the mid-level protocols like UnixFS and DAG whatever are mired in constant change for no real reason as the developers jump on new fads.

Slow and requires additional tooling to run normally. Just not a lot of development on the core pieces tbh. Wasm support for example could make deployments way simpler (implement an ipfs proxy in any browser or server that supports wasm) but the ticket for that kind of died off. There is a typescript implementation, helia, that I haven't checked out yet.

We are honestly kind of in a decentralization winter again, with ActivityPub being one of the few survivors gaining traction from what it seems. OpenSource luckily doesn't just up and die like that, so I still have hope for some next spring.

I've been playing with MQTT on meshtastic. I really hope LoRa and meshtastic continue to grow.
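For anyone curious what that looks like, subscribing to an MQTT broker is only a few lines with the third-party paho-mqtt package (1.x-style API assumed; the broker address and topic below are placeholders, not the real Meshtastic topic layout):

import paho.mqtt.client as mqtt  # third-party: paho-mqtt (1.x-style API assumed)

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload[:60])

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.example.org", 1883)   # placeholder broker
client.subscribe("msh/#")                  # placeholder topic filter
client.loop_forever()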

The more they grow, the busier the spectrum will be. I really hope it doesn't grow too much.

Just enough to grow the network so we don't need mqtt.

I wish the protocol used by Hotline Client had taken off; it was basically Discord in the 90s, with its support for announcement/news posts and file sharing.

Also KDX. I was too young to use that, but tried and it's cool. Sadly even FOSS clients are all dead and don't build anymore. (I think I had limited success with patching one called Fidelio to build, but that was a few years ago and I can't find any traces of that attempt.)

I’m really into CloudEvents because I love event-driven systems, and since events can come from, or be consumed by, so many different services, having a robust spec is super duper useful.

So what problem is this solving? What are some event-driven systems that need to interoperate? Seems like even if you have a common encapsulation method, you still need code to understand and deal with the message body. Just seems like an extra layer around a JSON blob.

Matrix, or at least interop standards for online communications. It's such bullshit that you can make a shitty chat app and, just because it's free and relatively featured, become the single existing monopoly of chat applications.

Like idgaf as long as i can host a bridge between discord and matrix or some shit, and you technically can, but it's a right pain in the ass to do so.

Yup. Way too many people using different chat apps. I've bridged most of them but still annoying.

For business, email is thankfully still pretty common. But some of them try to push you to one of the Facebook messengers.

I want an open widely used chat app ASAP.

And even with email, that's still open. So not a huge concern, or at the very least standardized enough to make it easily interoperable.

But yeah, i would greatly appreciate anything that isn't fucking discord.

OpenTelemetry and in particular I wish more protocols had Traceparent propagation support and more software had support for sending spans and traces to an OTLP endpoint to construct a full picture of everything that is going on in a distributed system.
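The nice part is that the propagation piece is just one well-defined header (W3C Trace Context), so even services without a full OpenTelemetry SDK can pass it along; a sketch of building and forwarding one:

import os

def new_traceparent():
    # version 00, 16-byte trace id, 8-byte span id, flags 01 (sampled)
    trace_id = os.urandom(16).hex()
    span_id = os.urandom(8).hex()
    return f"00-{trace_id}-{span_id}-01"

headers = {"traceparent": new_traceparent()}
# Attach `headers` to any outgoing HTTP call so the next hop can
# continue the same trace instead of starting a new one.
print(headers)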

RCS compatibility between iOS and Android operating systems

Google has used RCS as their latest attempt at entering the messenger market. I really don't see why anybody else would want to adopt it under these circumstances. I mean Samsung did but Samsung is playing their own little paranoid game with Google, they don't really give a crap about RCS.

Basically Google killed RCS. They will never be able to make anybody adopt it against their will in the EU, people will stick to what messenger services they're already using. If they ever attempt to turn it on by default in their own app it will turn into a regulatory issue so fast their head will spin.

I actually feel the opposite.

RCS was designed from the ground up to be handled by the carrier in clear text like SMS; it doesn't incorporate encryption in any way and doesn't do much at all to address the untrustworthy nature of carriers and law enforcement nowadays.

It’s like those two protocols started developing at the same time and only google kept extending rcs to keep some degree of feature parity with imessage.

If we had to ditch iMessage, it ought to be for some third type, not for questionably secure RCS. And what new bubble color can be used to indicate that someone's using an unencrypted RCS server?

I want us to stop using communication protocols that are tied to our connectivity providers. Let alone tied to a specific piece of hardware (SIM card).

"Telephone providers" should be just another ISP. And whatever I do over the network shouldn't care if it is running on a mobile network or a landline fibre.

While we are at it let's fuck off with this SIM shit. You don't get to run code on my device. Give me an authentication key that I can use to connect to your network and then just transfer my packets. My device runs my code.

Definitely some alternative internet mesh routing standard. Just imagine if every device with Wi-Fi or Ethernet could just extend the network without relying on an ISP. Yeah, ISPs could still serve as a fast backbone, but they just wouldn't be needed, and no disaster could really ever disrupt the whole internet again.

Honestly: ActivityPub, Matrix, XMPP, Markdown, and so many more probably. All of these would be able to solve our walled-garden problem, but the apps with basically a monopoly don't have much of an incentive to implement them.

There are a bunch of message broker services out there, and having a consistent set of common keys along with a documented process for transforming events to/from different systems means that this kind of data can move through different systems without getting mangled. It does have a spec for JSON, so it can be considered just a standardized JSON blob with transformation rules. But it also has a protobuf spec, specs for MQTT, NATS, HTTP, Avro, etc. It’s a common language for all these systems.
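For reference, a structured-mode CloudEvent is just a small JSON envelope with a handful of required attributes; a sketch of building one by hand (the source and type values are made up):

import json
import uuid
from datetime import datetime, timezone

event = {
    "specversion": "1.0",                      # CloudEvents spec version
    "id": str(uuid.uuid4()),                   # unique per event
    "source": "https://example.com/orders",    # made-up producer URI
    "type": "com.example.order.created",       # made-up event type
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"order_id": 1234, "total": 99.95},
}

print(json.dumps(event, indent=2))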