Are there any things in Linux that need to be started over from scratch?

sepulcher@lemmy.ca to Linux@lemmy.ml – 162 points –

I'm curious how software can be created and evolve over time. I'm afraid that at some point, we'll realize there are issues with the software we're using that can only be remedied by massive changes or a complete rewrite.

Are there any instances of this happening? Where something is designed with a flaw that doesn't get realized until much later, necessitating scrapping the whole thing and starting from scratch?


Happens all the time on Linux. The current instance would be the shift from X11 to Wayland.

The first thing I noticed was when the audio system switched from OSS to ALSA.

And then from ALSA to all those barely functional audio daemons, then to PulseAudio, and then again to PipeWire. That one sure took a few tries to get right.

And the strangest thing about that is that neither PulseAudio nor PipeWire is replacing anything. ALSA and PulseAudio are still there while I handle my audio through PipeWire.

How is PulseAudio still there? I mean, sure the protocol is still there, but it’s handled by pipewire-pulse on most systems nowadays (KDE specifically requires PipeWire).

Also, PulseAudio was never designed to replace ALSA; it sits on top of ALSA to abstract away complexity that programs would otherwise face if they used ALSA directly.

Pulse itself is not there but its functionality is (and they even preserved its interface and pactl). PipeWire is a superset of audio features from Pulse and Jack combined with video.

For anyone wondering: ALSA does sound card detection and basic I/O at the kernel level, PulseAudio takes ALSA devices and does audio mixing at the user/system level, and PipeWire does what PulseAudio does but more, and even includes video devices.
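
For a concrete sense of what talking to ALSA directly involves, here's a rough sketch of playing a second of silence straight through alsa-lib (link with -lasound); the device name and parameters are only illustrative and error handling is trimmed:

#include <alsa/asoundlib.h>

int main(void) {
    snd_pcm_t *pcm;
    /* Open the default playback device; this is the layer PulseAudio/PipeWire sit on top of. */
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* Pick the format, channel count, rate and latency by hand -- the kind of
       boilerplate the sound servers hide from applications. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 48000, 1, 500000) < 0)
        return 1;

    static short buf[48000 * 2];          /* one second of silence, stereo S16 */
    snd_pcm_writei(pcm, buf, 48000);      /* write 48000 interleaved frames */

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}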

there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.

I think this was the main reason for the Wayland project. So many issues with Xorg that it made more sense to start over, instead of trying to fix it in Xorg.

And as I've understood it from what I've read, Wayland has been a nearly 10-year mess that ended up with a product as bad as, or perhaps worse than, Xorg.

Not trying to rain on either parade, but X is like the Hubble telescope if we added new upgrades to it every 2 months. It's way past its end of life, doing things it was never designed for.

Wayland seems... To be missing direction?

I do not want to fight and say you misunderstood. Let’s just say you have been very influenced by one perspective.

Wayland has taken a while to fully flesh out. Part of that has been delay from the original designers not wanting to compromise their vision. Most of it is just the time it takes to replace something mature (X11 is 40 years old). A lot of what feels like Wayland problems actually stems from applications not having migrated yet.

While there are things yet to do, the design of Wayland is proving itself to be better fundamentally. There are already things Wayland can do that X11 likely never will ( like HDR ). Wayland is significantly more secure.

At this point, Wayland is either good enough or even superior for many people. It does not yet work perfectly for NVIDIA users which has more to do with NVIDIA’s choices than Wayland. Thankfully, it seems the biggest issues have been addressed and will come together around May.

The desktop environments and toolkits used in the most popular distros default to Wayland already and will be Wayland-only soon. Pretty much all the second-tier desktop environments have plans to get to Wayland.

We will exit 2024 with almost all distros using Wayland and the majority of users enjoying Wayland without issue.

X11 is going to be around for a long time but, on Linux, almost nobody will run it directly by 2026.

Wayland is hardly the Hubble.

Well, as I said, it's what I read. If it's better than that, great. Thanks for correcting me

Also, X is Hubble, not Wayland :)

I've been using Wayland on Plasma 5 for a year or so now, and it looks like the recent NVIDIA driver changes have been merged, so it should be getting even better any minute now.

I've used it for streaming on Linux with pipewire, overall no complaints.

Wayland is the default for GNOME and KDE now, meaning before long it will become the default for the majority of all Linux users. And in addition, Xfce, Cinnamon and LXQt are also going to support it.

according to kagiGPT..
~~i have determined that wayland is the successor and technically minimal:
*Yes, it is possible to run simple GUI programs without a full desktop environment or window manager. According to the information in the memory:

You can run GUI programs with just an X server and the necessary libraries (such as QT or GTK), without needing a window manager or desktop environment installed. [1][2]

The X server handles the basic graphical functionality, like placing windows and handling events, while the window manager is responsible for managing the appearance and behavior of windows. [3][4]

Some users prefer this approach to avoid running a full desktop environment when they only need to launch a few GUI applications. [5][6]

However, the practical experience may not be as smooth as having a full desktop environment, as you may need to manually configure the environment for each GUI program. [7][8]*~~

however... firefox will not run without the full wayland compositor.

correction:

  1. Wayland is not a display server like X11, but rather a protocol that describes how applications communicate with a compositor directly. [1]

  2. Display servers using the Wayland protocol are called compositors, as they combine the roles of the X window manager, compositing manager, and display server. [2]

  3. A Wayland compositor combines the roles of the X window manager, compositing manager, and display server. Most major desktops support Wayland compositors. [3]

There is some Rust code that needs to be rewritten in C.

Strange. I'm not exactly keeping track, but isn't the current trend going in exactly the opposite direction? It seems like tons of utilities are being rewritten in Rust to avoid memory safety bugs.

You got it right, the person you replied to made a joke.

The more the code is used, the faster it ought to be. A function for an OS kernel shouldn't be written in Python, but a calculator doesn't need to be written in assembly, that kind of thing.

I can't really speak for Rust myself but to explain the comment, the performance gains of a language closer to assembly can be worth the headache of dealing with unsafe and harder to debug languages.

Linux, for instance, uses some assembly for the parts of it that need to be blazing fast. Confirming that assembly code is bug-free, with no leaks and all that, is just worth it for the performance sometimes.

But yeah I dunno in what cases rust is faster than C/C++.

C/C++ isn’t really faster than Rust. That’s the attraction of Rust; safety AND speed.

Of course it also depends on the job.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/box-plot-summary-charts.html

C/C++ isn’t

You're talking about two languages: one is C, the other is C++. C++ is not a superset of C.
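
A tiny example of the difference (valid C that a C++ compiler rejects):

#include <stdlib.h>

int main(void) {
    /* 'new' is an ordinary identifier in C but a keyword in C++,
       and malloc's void* only converts implicitly in C. */
    int *new = malloc(sizeof *new);
    free(new);
    return 0;
}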

Yes thank you. But my statement remains true nevertheless.

Rust is faster than C. Iterators and mutable noalias can be optimized better. There's still Fortran code in use because its arguments are noalias by default and therefore faster.
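
For context on the noalias point: in C you have to tell the compiler explicitly (with restrict) that pointers don't overlap before it can optimize aggressively, while Rust's &mut references and Fortran's argument rules give that guarantee by default. A minimal sketch:

/* Without the restrict qualifiers the compiler must assume dst and src
   could overlap, which blocks some vectorization. */
void scale_add(float *restrict dst, const float *restrict src, float k, int n) {
    for (int i = 0; i < n; i++)
        dst[i] += k * src[i];
}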

But yeah I dunno in what cases rust is faster than C/C++.

First of all, C and C++ are very different; C is faster than C++. Rust is not intrinsically faster than C in the same way that C is faster than C++, but there's a huge difference: safety.

Imagine the following C function:

void do_something(Person* person);

Are you sure that you can pass NULL? Or that it won't delete your object? Or delete it later? You need to know what the function does to be sure, and/or perform lots of tests. E.g. the proper use of that function might be something like:

if( person ) {            /* guard against NULL */
  person_uses++;          /* manual reference counting */
  do_something(person);
}

...

if( --person_uses == 0 )
  free( person );         /* free only when the last user is done */

That's a lot more calls than just calling the function, but it's also a lot more secure.

In C++ this is somewhat solved by using smart pointers, e.g.

void do_something(std::unique_ptr<Person> person);
void something_else(std::shared_ptr<Person> person);

That's a lot more secure and readable, but also a lot slower. Rust achieves the C++ level of security and readability using only the equivalent of a single C call, by performing compile-time analysis and making the resulting assembly both fast and secure.

Can the same thing be done in C? Absolutely; you could use macros instead of ifs and counters and have very fast and safe code, but not easy to read at all. The thing is, Rust makes it easy to write fast and safe code: C is faster, but safe C is slower, and since you always want safe code, Rust ends up being faster for most applications.
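
To give a rough idea of the macro approach I mean (purely a sketch, with made-up names and the refcount stored in the struct):

/* Hypothetical helpers wrapping the manual refcount pattern from above. */
#define USE(obj)      do { if (obj) (obj)->uses++; } while (0)
#define RELEASE(obj)  do { if ((obj) && --(obj)->uses == 0) free(obj); } while (0)

/* The call site stays short, but what actually happens is hidden in the macros: */
USE(person);
if (person) do_something(person);
RELEASE(person);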

Agree, call me unreasonable or whatever but I just don't like Rust nor the community behind it. Stop trying to reinvent the wheel! Rust makes everything complicated.

On the other hand... Zig 😘

Also look into Hare, Odin and Vale. They're all neat ideas.

Starting anything from scratch is a huge risk these days. At best you'll have something like the python 2 -> 3 rewrite overhaul (leaving scraps of legacy code all over the place), at worst you'll have something like gnome/kde (where the community schisms rather than adopting a new standard). I would say that most of the time, there are only two ways to get a new standard to reach mass adoption.

  1. Retrofit everything. Extend old APIs where possible. Build your new layer on top of https, or javascript, or ascii, or something else that already has widespread adoption. Make a clear upgrade path for old users, but maintain compatibility for as long as possible.

  2. Buy 99% of the market and declare yourself king (cough cough chromium).

Python 3 wasn't a rewrite, it just broke compatibility with Python 2.

In a good way. Using a non-verified bytes type for strings was such a giant source of bugs. Text is complicated and pretending it isn't won't get you far.

The entire thing. It needs to be completely rewritten in rust, complete with unit tests and Miri in CI, and converted to a high performance microkernel. Everything evolves into a crab /s

I laughed waay too hard at this.

A PM said something similar earlier this week during a performance meeting: "I heard rust was fast. Maybe we should rewrite the software in that?"

They must have been a former Duke Nukem Forever project manager

Linux does this all the time.

ALSA -> Pulse -> Pipewire

Xorg -> Wayland

GNOME 2 -> GNOME 3

Every window manager, compositor, and DE

GIMP 2 -> GIMP 3

SysV init -> SystemD

OpenSSL -> BoringSSL

Twenty different kinds of package manager

Many shifts in popular software

BoringSSL is not a drop-in replacement for openssl though:

BoringSSL is a fork of OpenSSL that is designed to meet Google's needs.

Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don't recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.

Aren't different kinds of package managers required due to the different stability requirements of a distro?

We haven't rewritten the firewall code lately, right? checks Oh, it looks like we have. Now it's nftables.

I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.

Damn, you're old. iptables came out in 1998. That's what I learned on (and I still don't fully understand it).

UFW → nftables/iptables. Never worry about chains again

I was just thinking that iptables lasted a good 20 years. Over twice that of ipchains. Was it good enough or did it just have too much inertia?

nftables is probably a welcome improvement in any case.

It's actually a classic programmer move to start over again. I've read the book "Clean Code" and it talks about that a bit.

Apparently it would not be the first time that the fresh start turns into the same mess as the old codebase it's supposed to replace. While starting over can be tempting, refactoring is in my opinion better.

If you refactor a lot, you start thinking the same way about the new code you write. So any new code you write will probably be better, and you'll be cleaning up the old code too. If you know you'll have to clean up the mess anyway, better to do it right the first time...

However it is not hard to imagine that some programming languages simply get too old and the application has to be rewritten in a new language to ensure continuity. So I think that happens sometimes.

Yeah, this was something I recognized about myself in the first few years out of school. My brain always wanted to say "all of this is a mess, let's just delete it all and start from scratch" as though that was some kind of bold/smart move.

But I now understand that it's the mark of a talented engineer to see where we are as point A, where we want to be as point B, and be able to navigate from A to B before some deadline (and maybe you have points/deadlines C, D, E, etc.). The person who has that vision is who you want in charge.

Chesterton's Fence is the relevant analogy: "you should never destroy a fence until you understand why it's there in the first place."

I'd counter that with monolithic legacy apps without any testing, trying to refactor can be a real pain.

I much prefer starting from scratch while trying to avoid past mistakes, and still maintaining the old app until the new app is ready. Then management starts managing, and the new app becomes the old app. Rinse and repeat.

I made a thing.

The difference between the idiot and the expert, is the expert knows why the fences are there, and can do the rewrite without having to relearn lessons. But if you're supporting a package you didn't originally write, a rewrite is much harder.

Which is something I always try to explain to juniors: writing code is cool, but for your sake learn how to READ code.

Not just understanding what it does, but what was it all meant to do. Even reading your own code is a skill that needs some focus.

Side note: I hate it to my core when people copy code mindlessly. Sometimes it's not even a bug, or a performance issue, but something utterly stupid and much harder to read. But because they didn't understand it, and didn't even try, they just copy-pasted it and went on. Ugh.

Side note: I hate it to my core when people copy code mindlessly

Get ready for the world of AI code assist 😬

“you should never destroy a fence until you understand why it’s there in the first place.”

I like that; really makes me think about my time in building-games.


GUI toolkits like Qt and Gtk. I can't tell you how to do it better, but something is definitely wrong with the standard class hierarchy framework model these things adhere to. Someday someone will figure out a better way to write GUIs (or maybe that already exists and I'm unaware) and that new approach will take over eventually, and all the GUI toolkits will have to be scrapped or rewritten completely.

Idk man, I've used a lot of UI toolkits, and I don't really see anything wrong with GTK (though they do basically rewrite it from scratch every few years it seems...)

The only thing that comes to mind is the React-ish world of UI systems, where model-view-controller patterns are more obvious to use. I.e. a concept of state where the UI automatically re-renders based on the data backing it

But generally, GTK is a joy, and imo the world of HTML has long been trying to catch up to it. It's only kinda recently that we got flexbox, and that was always how GTK layouts were. The tooling, design guidelines, and visual editors have been great for a long time
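
For anyone who hasn't touched it, the classic signal/callback model being discussed looks roughly like this (GTK 3 here; a minimal sketch, built with pkg-config --cflags --libs gtk+-3.0):

#include <gtk/gtk.h>

/* GTK is event-driven: the application object emits signals ("activate" here)
   and we react in callbacks. */
static void activate(GtkApplication *app, gpointer user_data) {
    GtkWidget *window = gtk_application_window_new(app);
    gtk_window_set_title(GTK_WINDOW(window), "Hello");
    gtk_window_set_default_size(GTK_WINDOW(window), 200, 100);
    gtk_widget_show_all(window);
}

int main(int argc, char **argv) {
    GtkApplication *app = gtk_application_new("org.example.hello", G_APPLICATION_FLAGS_NONE);
    g_signal_connect(app, "activate", G_CALLBACK(activate), NULL);
    int status = g_application_run(G_APPLICATION(app), argc, argv);
    g_object_unref(app);
    return status;
}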

I've really fallen in love with the Iced framework lately. It just clicks.

A modified version of it is what System76 is using for the new COSMIC DE

Desktop apps nowadays are mostly written in HTML with Electron anyway.

Which - in my considered opinion - makes them so much worse.

Is it because writing native UI on all current systems I'm aware of is still worse than in the times of NeXTStep with Interface Builder, Objective C, and their class libraries?

And/or is it because it allows (perceived) lower-cost "web developers" to be tasked with "native" client UI?

Probably mainly a matter of saving costs, you get a web interface and a standalone app from one codebase.

Are you aware of macOS? Because it is still built with the same UI tools that you mention.

Newer toolkits all seem to be going immediate mode. Which I kind of hate as an idea personally.

er, do you have an example. This is not a trend I was aware of

lol that someone already said Wayland.

Your welcome. :)

His welcome?

That'll teach me to type when I'm mad. "You're". There a go.

Actually, it won't teach me. Wayland's mere existence will bother me till the day I die, I'm sure. Especially once it's working well enough for me to have to adopt it. The resentment will grow.

ALSA > PulseAudio > PipeWire

About 20 xdg-open alternatives (which is, btw, just a wrapper around gnome-open, exo-open, etc.)

My session scripts, after a deep dive. Seriously, startxfce4 has workarounds from the '80s, and software rot has already affected the formatting.

Turnstile instead of elogind (which is bound to systemd releases)

mingetty, because who uses a modem nowadays?

PulseAudio doesn't replace ALSA. PulseAudio replaces esd and aRts.

Linux could use a rewrite of all things related to audio, from the kernel to X/Wayland-based audio apps.

About 20 xdg-open alternatives (which is, btw, just a wrapper around gnome-open, exo-open, etc.)

I use handlr-regex; is it bad? It was the only thing I found that I could use to open certain links in certain web applications (like Android does); using exo-open, all links just opened in the web browser instead.

Be careful what you wish for. I’ve been part of some rewrites that turned out worse than the original in every way. Not even code quality was improved.

In corporations, we call that job security.

Just rewriting the same thing in different ways for little gain except to say we did it

Funnily enough the current one is actually the one where we’ve made the biggest delta and it’s been worthwhile in every way. When I joined the oldest part of the platform was 90s .net and MSSQL. This summer we’re turning the last bits off.

Some form of stable, modernized bluetooth stack would be nice. Every other bluetooth update breaks at least one of my devices.

I realize that's not exactly what you asked for, but PipeWire has been incredibly stable for me. The difference between the absolute nightmare of using BT devices with ALSA and the super smooth experience in PipeWire is night and day.

There are many instances like that: systemd vs System V init, X vs Wayland, ed vs vim, TeX vs LaTeX vs LyX vs ConTeXt, OpenOffice vs LibreOffice.

Usually someone identifies a problem or a new way of doing things… then a lot of people adapt and some people don’t. Sometimes the new improvement is worse, sometimes it inspires a revival of the old system for the better…

It’s almost never catastrophic for anyone involved.

Some of those are not rewrites but extensions/forks

I’d say only open/libreoffice fits that.

Edit: maybe TeX/LaTeX/LyX too, but ConTeXt is not.

LaTeX and ConTeXt are both macro packages for TeX. LyX is a graphical editor which outputs LaTeX.

Yes… I'd classify ConTeXt as a reboot of LaTeX.

I would say the whole set of C-based assumptions underlying most modern software, specifically errors being just an integer constant that is translated into text, so it carries no details about the attempted operation (who tried to do what to which object, and why did that fail).

You mean 0 indicating success and any other value indicating some arbitrary meaning? I don't see any problem with that.

Passing around extra error handling info for the worst case isn't free, and the worst case doesn't happen 99.999% of the time. No reason to spend extra cycles and memory hurting performance just to make debugging easier. That's what debug/instrumented builds are for.
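
For concreteness, the whole convention being defended is just this (a minimal sketch using chdir(2)):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* 0 means success; -1 means "look at errno" -- one int, no extra baggage. */
    if (chdir("/nonexistent") != 0) {
        printf("chdir failed: %s (errno=%d)\n", strerror(errno), errno);
        return 1;
    }
    return 0;
}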

Passing around extra error handling info for the worst case isn’t free, and the worst case doesn’t happen 99.999% of the time.

The case "I want to know why this error happened" is basically 100% of the time when an error actually happens.

And the case of "Permission denied" or similar useless nonsense without any details, costing me hours of debugging time that wouldn't be necessary if it just told me who was denied permission to do what to which object, happens quite regularly.

"0.001% of the time, I wanna know every time 👉😎👉"

Yeah, I get that. But are we talking about during development (which is why we're choosing between C and something else)? In that case, you should be running instrumented builds, or with debug functionality enabled. I agree that most programs just fail and don't tell you how to go about enabling debug info or anything, and that could be improved.

For the "Permission Denied" example, I also assume we're making system calls and having them fail? In that case it seems straight forward: the user you're running as can't access the resource you were actively trying to access. But if we're talking about some random log file just saying "Error: permission denied" and leaving you nothing to go on, that's on the program dumping the error to produce more useful information.

In general, you often don't want to leak more info than just Worked or Didn't Work for security reasons. Or a mix of security/performance reasons (possible DOS attacks).

During development is just about the only time when that doesn't matter because you have direct access to the source code to figure out which function failed exactly. As a sysadmin I don't have the luxury of reproducing every issue with a debug build with some debugger running and/or print statements added to figure out where exactly that value originally came from. I really need to know why it failed the first time around.

Yeah, so it sounds like your complaint is actually with application not propagating relevant error handling information to where it's most convenient for you to read it. Linux is not at fault in your example, because as you said, it returns all the information needed to fix the issue to the one who developed the code, and then they just dropped the ball.

Maybe there's a flag you can set to dump those kinds of errors to a log? But even then, some apps use the fail case as part of normal operation (try to open a file, if we can't, do this other thing). You wouldn't actually want to know about every single failure, just the ones that the application considers fatal.

As long as you're running on a Turing-complete machine, it's on the app itself to sufficiently document what qualifies as an error and why it happened.

The whole point of my complaint is that shitty C conventions produce shitty error messages. If I could rely on the programmer to work around those stupid conventions every time by actually checking the error and then enriching it with all relevant information I would have no complaints.

As a sysadmin you should know about strace.

I know about strace, strace still requires me to reproduce the issue and then to look at backtraces if nobody bothered to include any detail in the error.

Somehow (lack of) backtrace and details in error is "C based assumption"

Ugh, I do not miss C...

Errors and return values are, and should be, different things. Almost every other language figured this out and handles it better than C.

It's more of an ABI thing though, C just doesn't have error handling.

And if you do exception handling wrong in most other languages, you hamstring your performance.

The unofficial C motto "Make it fast, who gives a shit about correctness"

Errors and return values are, and should be, different things.

That's why errno and return value are different things.

Assembly doesn't have concept of objects.

It does very much have the concept of objects as in subject, verb, object of operations implemented in assembly.

As in who (user foo) tried to do what (open/read/write/delete/...) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,...).

implemented in assembly.

Indeed. Assembly is(can be) used to implement them.

As in who (user foo) tried to do what (open/read/write/delete/...) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,...).

Kernel implements it in software(except memory mappings, it is implemented in MMU). There are no sockets, files and namespaces in ISA.

You were the one who brought up assembly.

And stop acting like you don't know what I am talking about. Syscalls implement operations that are called by someone who has certain permissions and operate on various kinds of objects. Nobody who wants to debug why that call returned "Permission denied" or "File does not exist" without any detail cares that there is hardware several layers of abstraction deeper down that doesn't know anything about those concepts. Nothing in the hardware forces people to make APIs with bad error reporting.

And why "Permission denied" is bad reporting?

Because if a program dies and just prints strerror(errno), it gives me "Permission denied" without any detail about which operation was denied permission to do what. So basically I don't have enough information to fix the issue, or in many cases even to reproduce it.
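
i.e. the difference is something like this (a sketch; the hypothetical open_logged is what I wish more tools did):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int open_logged(const char *path) {
    int fd = open(path, O_WRONLY);
    if (fd < 0) {
        /* Typical: fprintf(stderr, "%s\n", strerror(errno));  ->  "Permission denied" */
        /* Useful: who tried to do what to which object, and why it failed. */
        fprintf(stderr, "uid %d: open(\"%s\", O_WRONLY) failed: %s\n",
                (int)getuid(), path, strerror(errno));
    }
    return fd;
}

int main(void) {
    open_logged("/etc/shadow");   /* usually fails for a normal user */
    return 0;
}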

It may just not print anything at all. This is a logging issue, not a "C-based assumption". I wouldn't be surprised if you called "403 Forbidden" a "C-based assumption" too.

But since we are talking about a local program, a competent sysadmin can strace the program. It will print arguments and error codes.


There's already a lot of people rewriting stuff in Rust and Zig.

What are the advantages of Zig? I've seen lots of people talking about it, but I'm not sure I understand what it supposedly does better.

The goal of the zig language is to allow people to write optimal software in a simple and explicit language.

Its advantage over C is that they improved some features to make things easier to read and write. For example, arrays have a length and don't decay to pointers, defer, no preprocessor macros, no makefile, first-class testing support, first-class error handling, type inference, and a large standard library. I have found Zig far easier to learn than C (despite the fact that Zig is still evolving and there are fewer learning resources than for C).
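
(For anyone unfamiliar with the C side of that first point, this is the decay Zig avoids; a small sketch:)

#include <stdio.h>

/* In C the parameter is really just an int*, so the length is lost. */
static void takes_array(int a[10]) {
    printf("inside:  sizeof(a) = %zu\n", sizeof(a));    /* size of a pointer */
}

int main(void) {
    int a[10] = {0};
    printf("outside: sizeof(a) = %zu\n", sizeof(a));    /* 10 * sizeof(int) */
    takes_array(a);                                     /* 'a' decays to &a[0] */
    return 0;
}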

Its advantage over Rust is that it's simpler. I've never played around with Rust, but people have said that the language is more complex than Zig. Here's an article the Zig people wrote about this: https://ziglang.org/learn/why_zig_rust_d_cpp/

Tiny learning curve, easy to refactor existing projects

Cough, Wayland, cough (X is just old and Wayland is better)

Alt text: Thomas Jefferson thought that every law and every constitution should be torn down and rewritten from scratch every nineteen years--which means X is overdue.

Your alt text doesn't describe what is mentioned in the image though?

Not too relevant for desktop users but NFS.

No way people are actually setting it up with Kerberos Auth

100% this

We need a networked file system with real authentication and network encryption that's trivial to set up, performant, and that preserves the Unix-ness of the filesystem (nothing weird like SMB), so you can just use it as you would a local filesystem.

The OpenSSH of network filesystems basically.

I don't know if this even makes sense, but damn, if only Bluetooth/audio could get to a point of "it just works".

What's your latest grievance?

Mine is the prioritization of devices. If someone turns on the flatshare BT box while I'm listening to Death Metal over my headphones, suddenly everyone except me is listening to Death Metal.

Not to mention BlueZ aggressively connects to devices. It would be nice if my laptop in the other room didn't interrupt my phone's connection to my earbuds.

Then again, we also have wired for a reason. Hate it all you want, but it works and is predictable.

Just being.. crappy?

Not connecting automatically. Bad quality. Some glitchy artifacts. It gets horrible. The only workaround I've found is stupid, but running apt reinstall --purge bluez gnome-bluetooth makes it work fine. So annoying, but I have to do this almost every day.

Reinstalling should change nothing. If it's getting corrupted, check your drive and RAM.

I don't know why this works, but if I'm having issues, I do this and it fixes all of them across the board. Even just restarting the service is not as effective as this; that sometimes works, sometimes doesn't.

I'm confident it's not a drive or RAM issue. It's a Bluetooth/audio issue. But I also can't explain why it is so consistent.

Have you checked the logs?

Yep. Nothing sus. I also don't have the time to do a deep dive. I need to work. It might be this chip. It might be my bluetooth headset (but I have issues with my mouse and keyboard too). I don't have time to figure it out, so I just keep this on a copy paste ready terminal and if I have issue, I run the command and I'm good to go.

It's been a while (a few years actually) since I even tried, but Bluetooth headsets just won't play nicely. You either get audio quality from the bottom of the barrel, or somewhat decent quality without a microphone. And the right protocol/whatever isn't selected automatically, the headset randomly disconnects, and nothing really works like it does with my cellphone/Windows machines.

YMMV, but that's been my experience with my headsets. I've understood that there's some proprietary stuff going on with audio codecs, but it's just so frustrating.

It does for me. What issue are you having?

My most recent issue with Bluez is that it's been very inconsistent about letting me disable auto-switching to HSP/HFP (headset mode) when joining any sort of call.

It's working now, but it feels like every few months I need to try a different solution.


Bluetooth in general is just a mess and it's sad that there's no cross-platform sdk written in C for using it.


Join the hive mind. Rust is life.

My only two concerns are: one, Rust is controlled by a single entity, and two, it is young enough that we don't know about all of its flaws.

Third concern: dependencies.

I installed a fairly small Rust program recently (post-XZ drama), and was a bit concerned when it pulled in literally hundreds of crates as dependencies. And I wasn't planning on evaluating all of them to see if they were secure/trustworthy; who knows if one of them had a backdoor like XZ? Rust can claim to be as secure as Fort Knox, but it means nothing if you have hundreds of randoms constantly going in and out of the building, and we don't know who's doing the auditing and holding them accountable.

In reality this happens all the time. When you develop a codebase it's based on your understanding of the problem. Over time you gain new insights into the environment in which that problem exists and you reach a point where you are bending over backwards to implement a fix when you decide to start again.

It's tricky because if you start too early with the rewrite, you don't have a full understanding, start too late and you don't have enough arms and legs to satisfy the customers who are wanting bugs fixed in the current system while you are building the next one.

.. or you hire a new person who knows everything and wants to rewrite it all in BASIC, or some other random language ..

Are there any things in Linux that need to be started over from scratch?

Yes, Linux itself! (i.e. the kernel). It would've been awesome if Linux were a microkernel; there are so many advantages to it, like security, modularity and resilience.

I wish L4 had taken off.

It still might. Redox is a microkernel based around L4 architecture, but not formally verified.

Not really software, but personally I think the FHS could do with replacing. It feels like it's got a lot of historical baggage tacked on that it could really do with shedding.

Fault handling system?

Filesystem Hierarchy Standard

/bin, /dev, /home and all that stuff

Would be a crazy expensive migration though

Definitely. As nice as it would be, I don't think it will significantly change any time soon, for several reasons. Not least of which is because several programs would likely just flatly refuse to implement such a change, judging by some of them refusing to even consider patches to implement the XDG Base Directory Specification.

So much of that is PDP-11 baggage or derived from it.

Or more generally Very Small Disk baggage.

What's wrong with it?

$PATH shouldn't even be a thing, as today disk space is cheap so there is no need to scatter binaries all over the place.

Historically, /usr was created so that you could mount a new disk here and have more binaries installed on your system when the disk with /bin was full.

And there are just so many other stuff like that which doesn't make sense anymore (/var/tmp comes to mind, /opt, /home which was supposed to be /usr but name was already taken, etc ...).

How would virtual environment software, like conda, work without $PATH?

Today's software would probably break, but my point is that $PATH is a relic from ancient times that solved a problem we don't have anymore.

$PATH shouldn’t even be a thing, as today disk space is cheap so there is no need to scatter binaries all over the place.

$PATH is very useful for wrapper scripts; without it there wouldn't be an easy way to fix the mess that Steam makes in my homedir by placing a bunch of useless dotfiles in it. The trick is simply to have a script with the same name as the steam binary in a location that comes first in $PATH, so it will always be called before Steam can start and murder my home again.

About /var/tmp, I just have it symlinked to /tmp. Technically /var/tmp still has a reason to exist, as that location is used for temporary files that you don't want to lose on power loss, but I actually went over the list of possible issues and IIRC it was mostly some cache files of vim.

EDIT: Also today several distros symlink /bin and /sbin to /usr/bin.

You missed my point. The reason $PATH exists in the first place is because binaries were too large to fit on a single disk, so they were scattered around multiple partitions (/bin, /sbin, /usr/bin, etc...). Now, all your binaries can easily fit on a single partition (weirdly enough, /usr/bin was chosen as the "best candidate" for it), but we still have all the other locations, symlinked there. It just makes no sense.

As for the override mechanism you mention, there are much better tools nowadays to do that (overlayfs for example).

This is what plan9 does for example. There is no need for $PATH because all binaries are in /bin anyways. And to override a binary, you simply "mount" it over the existing one in place.

but we still have all the other locations, symlinked there. It just makes no sense.

Because a lot of shit would break if that wasn't the case, starting with every shell script that has the typical #!/bin/sh or #!/bin/bash shebang.

This is what plan9 does for example. There is no need for $PATH because all binaries are in /bin anyways. And to override a binary, you simply “mount” it over the existing one in place.

Does that need elevated privileges? Because with PATH what you do is export this environment variable with the order you want, like this:

export PATH="$HOME/.local/bin:$PATH" (And this location is part of the xdg base dir spec btw).

This means that my home bin directory will always be first in PATH, and for the steam example it means that I don't have to worry about having to add/change the script every time the system updates, etc.

Also what do you mean by mounting a binary over? I cannot replace the steam binary in this example. What I'm doing is using a wrapper script that launches steam on a different location instead (and also passes some flags that makes steam launch silently).

By mounting the binary over, I mean something like a bind mount. But in your case of a wrapper script, it doesn't apply indeed. Though in this case I would simply name the script steam-launcher and call it a day 🙂

Having multiple executables with the same name and relying on $PATH and absolute paths feels hackish to me, but that's only a matter of preference at this point.

Though in this case I would simply name the script steam-launcher and call it a day

The problem with that is that if another application tries to launch steam, it will bypass the script. And renaming the steam binary and give the original name to the script means that it has to be done every time steam is updated. Not to mention that if the script has a different name from the binary and I were to launch steam from the terminal to troubleshoot something it would also cause the issue again.

Having multiple executables with the same name and relying on $PATH and absolute paths feels hackish to me

https://github.com/ValveSoftware/steam-for-linux/issues/1890

The last comment in that issue is mine; compare my solution to the other solutions that people came up with and you will see it is the least hacky one. Before, I was even trashing the Steam files if it had accidentally been launched in the wrong location lol.

Also this is how snaps and appimages integrate into the system. You add their location to PATH and it is done. You don't even need sudo to do these changes.

there are better options. I mean something like a bind mount

Do I need elevated privileges to do that?

I’m not saying we should get rid of $PATH right now.

I don't think we should ever get rid of it. The reasons it was created may not apply today, but that doesn't mean it is no longer useful, as in the several examples I just gave you.

Do you think the same of LD_LIBRARY_PATH? It is very useful for locally compiled applications, like i3 for example, which I compile and install into my system with a patch that is not merged into the released i3 package. Because installing it into the system /bin and /libs causes my package manager to complain that certain files already exist when updating/installing other applications.

Also do you feel the same about the XDG Base dir spec? like for example XDG_DATA_HOME, XDG_CONFIG_HOME, etc are environment variables like PATH which let you move their location around.

Right now overlays require elevated privileges, but ideally they shouldn't. Rewriting the Linux kernel to implement per-user namespaces like Plan 9 does would allow unprivileged actions from any user (just like if any user were sitting in a container, overlaid on the base system).

I know we're not there, and that's not the direction development is going, but this thread is about dreams, right ? 😉

About the XDG specs, they serve a totally different purpose so they're out of the discussion IMO. I'm not advocating against env variables. Just $PATH which is a workaround as I see it, but your mileage may vary. As for your "issue" with steam, of course this is the best way to solve it. Because of today's OS limitation. My point is that with a better designed namespacing implementation, there would be more elegant solutions to solve it (and would get rid of the need to use LD_LIBRARY_PATH too, or literally any *_PATH env variable)

I'm not saying we should get rid of $PATH right now. My point is that it was created to solve a problem we don't have anymore (not enough disk capacity), but we still keep it out of habit.

As a reminder, the discussion is about what should be rewritten from scratch in linux. And IMO, we should get rid of $PATH as there are better options.

Wayland could already do with a replacement.

Wayland is incomplete and unfinished, not broken, obsolete, or hopelessly badly designed. PulseAudio was bad design. Wayland is very well designed; it's just that most things haven't been ported to it yet, and there is some design-by-committee hell, but even that is kind of a necessary tradeoff so that Wayland actually lasts a long time.

What people see: lol Firefox can't even restore its windows to the right monitors

What the Wayland devs see: so how can we make it so Firefox will also restore its windows correctly on a possible future VR headset environment where the windows maintain their XYZ and rotation placement correctly so the YouTube window you left above the stove goes back above the stove.

The Wayland migration is painful because they took the opportunity to redo everything from scratch, without the baggage of what traditional X11 apps could do. So there is less likely to be a need for a Wayland successor when new display tech arrives, and no single display server so big that its quirks become features developers have relied on for 20 years and are essentially part of the standard.

There's nothing so far that can't be done in Wayland for technical implementation reasons. It's all because some of the protocols aren't ready yet, or not implemented yet.

Agreed, Wayland has a monumental task to do: replacing a 30+ year old windowing system.

X11 is 40 years old. I'd say it's been rather successful in the "won't need to be replaced for some time" category. Some credit where due.

There's nothing so far that can't be done in Wayland for technical implementation reasons. It's all because some of the protocols aren't ready yet, or not implemented yet.

I mean .. It doesn't matter why it can't be done. Just that it can't be done.

40 years old is also what makes it so hard to replace or even reimplement. The bugs are all decades-old features, and everything is written specifically for Xorg, all of which needs to be emulated correctly. It sure did serve us well; it's impressive how long we've managed to make it work with technology well beyond the imagination of the engineers in the '80s.

There's this for the protocols: https://github.com/probonopd/wayland-x11-compat-protocols

It can be done, it's just nobody wants to do it. It's not really worth the effort, when you can work on making it work properly in Wayland instead. That way you don't need XWayland in the first place, but also XWayland can then implement it using the same public API everyone else does so it works on every compositor.

There’s nothing so far that can’t be done in Wayland for technical implementation reasons.

Then make it fully X11 backwards compatible. Make Wayland X12. C'mon, they already admitted NVIDIA was right and are switching to explicit sync, and they're working to finally support the cards they've been busting a hate boner over simply because they're bigots about the driver's licensing. Time to admit breaking the world was a mistake, too.

It's slowly happening. KDE can now do global Xwayland shortcuts, they also implemented XWaylandVideoBridge and compositor restart crash recovery for apps. We're getting proper HDR, we have proper per-monitor refresh rates and VRR, I can even hotplug GPUs. Some of that stuff works better in XWayland because we can just run multiple instances with different settings. For the particularly stubborn cases, there's rootful XWayland. X12 would have to break things too, and I doubt an Xorg rewrite would be all that much further than Wayland is. Canonical had a go at it too with Mir which was much less ambitious.

NVIDIA was right on that one indeed, but Wayland also predates Vulkan and was designed for GLES, pretty much at the tail end of big drivers and the beginning of explicit and low level APIs like Vulkan. They could very well have been right with EGLStream too, but graphics on Linux back then was, erm, bad. But in the end they're all still better than the kludge that is 3D in Xorg.

It's getting a lot of momentum and a lot of things are getting fixed lately. It went from unusable to "I can't believe it's not Xorg!" just this year for me. It's very nice when it works well. We'll get there.

At this point they could make it the best thing in the world. Won't ever fix the resentment they earned against us NVidia users, might fix some of the resentment from x11 folks... but that it needs a separate XWayland will always be a pain point. That's a kluge.

I can't up-vote this enough. The "architectural purists" have made the migration a nightmare. Always blaming everyone else for simply not seeing their genius. I'm honestly surprised it's gotten as far as it has.

Can't even update Firefox in place. Have to download a new copy, run it from the downloads folder, make a desktop shortcut myself, which doesn't have the Firefox icon.

Can't remember if that was mint or Ubuntu I was fiddling with, but it's not exactly user friendly.

Do not download Firefox off the internet. Use your package manager or Flatpak.

This has nothing to do with Wayland, it's just AppImages kinda sucking. Use Flatpak or the one in your distro's repos, not the AppImage. AppImages are the equivalent of portable apps on Windows, like the single exe ones you'd put on a flash drive to carry around.

Also the AppImage developer is very against Wayland and refuses to support it, which is why Wayland support is a shitshow on AppImages.

If you pick the Flatpak it'll get updated in the background, have a proper launcher and everything.

Seriously, I'm not a heavy software developer that partakes in projects of that scale or complexity, but just seeing it from the outside makes me hurt. All these protocols left, right and center; surely just an actual program would be cleaner? Like, they could just rewrite X from scratch, implementing and supporting all modern technology and using a monolithic model.

Then small projects could still survive since making a compositor would almost be trivial, no need to rewrite Wayland from scratch cause we got "Waykit" (fictional name I just thought of for this X rewrite), just import that into your project and use the API.

That would work if the only problem they wanted to solve was an outdated tech stack for X. But there are other problems that Wayland addresses too, like how to scale multiple monitors nicely, and whether it's a good idea to give every other app the keystrokes you type into the one in focus (and probably a lot more).

Wayland and X are very, very different. The X protocol was designed for computer terminals that connected to a mainframe. It was never designed for advanced graphics, and the result is that we have built up an entire system that balances on a shoebox.

Wayland is a protocol that allows your desktop to talk to the display without a heavy server. The result is better battery life, simplified inputs, lower latency, better performance and so on

It is complex to build a Wayland compositor. When none existed, you had to build your own. So it took quite a while for even big projects like GNOME and KDE to work through it.

At this stage, there are already options to build a compositor using a library where most of the hard stuff is done for you.

https://github.com/swaywm/wlroots

https://github.com/CuarzoSoftware/Louvre

There will be more. It will not be long before creating Wayland compositors is easy, even for small projects.

As more and more compositors appear, it will also become more common just to fork an existing compositor and innovate on top.

One of the longer term benefits of the Wayland approach is that the truly ambitious projects have the freedom to take on more of the stack and innovate more completely. There will almost certainly be more innovation under Wayland.

All of this ecosystem stuff takes time. We are getting there. Wayland will be the daily desktop for pretty much all Linux users ( by percentage ) by the end of this year. In terms of new and exciting stuff, things should be getting pretty interesting in the next two years.

It's what happens when you put theory over practicality.

What we wanted: Wayland.

What we needed: X12, X13...

The X standard is a really big mess

That's kind of what I was trying to imply.

We needed a new X with some of the archaic crap removed. I.e. no one needs X primitives anymore, everything is its own raster now (or whatever it's called).

Evolving X would have given us incremental improvements over time... Eventually resulting in something like Wayland.

What was stopping X just undergoing some gutting? I get it's old and covered in dust and cobwebs but look, those can be cleaned off.

"Scoop out the tumors, and put some science stuff in ya", the company that produced that quote went on to develop the most advanced AGI in the world and macro-scale portable on-demand indestructible teleportation.

Because we no longer have mainframes in computer labs. Each person now has their own machine.

And yet I play modern games on modern hardware with X just fine. It's been extended a little bit since the 80s.

Yes, it works, but everything is glued together with duct tape.

So what? Clean that up then.

What part of "40-year-old code that is so messed up it's not cleanable any more" do you not understand?

Of course it is. That's propaganda. It's hard, but possible. Probably not as hard as fighting Nvidia for 15 years either.

Simply put, no one with the necessary skills has come forward and demonstrated the willingness to do the work. No programmer I've ever met enjoys wrestling with other people's crufty old code. It isn't fun, it isn't creative, and it's often an exercise in, "What the unholy fsck was whoever wrote this thinking, and where did I put the 'Bang head here' mousepad?" So getting volunteers to mop out the bilges only happens when someone really wants to keep a particular piece of software working. It's actually more difficult than getting people to contribute to a new project.

So getting rid of X's accumulated legacy cruft isn't impossible, but I suspect someone would need to set up the "Clean up X" foundation and offer money for it to actually happen. (I'm no happier about that than you, by the way.)


Nobody wanted Wayland except the mad scientists and anti-NVIDIA bigots that made it.

Imagine calling developers who have a cold relationship with Nvidia due to Nvidia doing the bare minimum for Linux development "bigots" lol

I think you must be a fanboy. "Bigotry" towards a multi trillion dollar company lmao. What an absurd thought.

I'm no fanboy of any video card. I just have a ton of laptops with NVIDIA in them, and the bigots making Wayland never gave a darn about our plight... and then they started pushing distros to switch before they did anything to fix it. Their callous attitude toward the largest desktop Linux userbase is insulting, and pushing the distros before they fixed the problem should be criminal. Every one of them should be put away for trying to ruin Linux by abandoning its largest desktop user base. We dislike them, dislike them so much.

Now, will it keep us from using that crap when it finally works? No. We don't have much choice. They've seen to that. x11 will go the way of the dodo. But can we dislike them forever for dragging us through the mud until they were finally forced to fix the darn thing? Yeah. Wish them nothing but the worst.

Nobody is being "bigoted" to Nvidia lmao, get some perspective.

And if you're this butthurt about Wayland, don't use it. I've been using it for years without issue, because I didn't choose a hardware manufacturer that's actively hostile to Linux. Nvidia is too bigoted for me, unfortunately.


Yup, Wayland is so old it already has old concepts. But it is also changing a lot

Needs to be replaced already. They're having to change to explicit sync, which they should have done from the start. So throw it out, start over, make X12.


Libxz

One might exist already: lzlib.

I admit I haven't done a great deal of research, so maybe there are problems, but I've found that lzip tends to do better at compression than xz/lzma and, to paraphrase its manual, it's designed to be a drop-in replacement for gzip and bzip2. It's been around since at least 2009 according to the copyright messages.

That said, xz is going to receive a lot of scrutiny from now on, so maybe it doesn't need replacing. Likewise, anything else that allows random binary blobs into the source repository is going to have the same sort of scrutiny. Is that data really random? Can it be generated by non-obfuscated plain text source code instead? etc. etc.

Personally I quite like zstd, I find it has a pretty decent balance of speed to ratio at each of its levels.

Basically, install Linux on your daily driver, and hide your keyboard for a month. You'll discover just what needs quality of life revising

I'm tempted to say the systemd ecosystem. Sure, it has its advantages and it's the standard way of doing things now, but I still don't like it. journalctl is a sad, poor replacement for standard log files, and systemd has absorbed a ton of stuff that used to be its own separate little thing (resolved, journald, crontab...), making it a pretty monolithic thing, and at least for me it fixed a problem that wasn't there.

Snapcraft (and Flatpak to some extent) also attempts to fix a non-existent problem, and at least for me they have caused more issues than benefits.

Wayland, PipeWire, systemd, Btrfs/ZFS, just to name a few.

Wayland is THE replacement for broken, hack-driven, insecure and unmaintainable Xorg.

PipeWire is THE replacement for the messy and problematic audio stack on Linux (PulseAudio, ALSA, etc.).

SystemD is THE replacement for SysVinit (and is an entire suite of software).

Like many, I am not a fan of SystemD and hope something better comes along.

The only thing I personally dislike about systemd is the "waiting for service to stop 5mins/1h30mins" stuff during shutdowns and reboots. I know I can limit that to 10s or something, but how about just making systemd force-stop these services like, say, runit does?

When I'm using my bemenu script to shut down and feeling like a hackerman, don't take that away from me by being an annoyance, systemd!!!

Edit: Yes, I'm considering switching to Void, how could you tell?

Yes, I know. I was answering the question of if there were instances of this happening.

Systemd. Nuke it from fucking orbit.

Everyone hates on it. Here I am, a simple Silverblue user, and it seems fine to me. What is the issue, actually?

Not everyone does. Just a handful of loud idiots who mostly don't work with init systems. It is objectively better. There are some things you could criticise, but any blanket statement like that is just category a.

Still a lot of zealotry going on judging by the votes... I say rewrite systemd in Rust!