nyan

@nyan@sh.itjust.works
0 Post – 53 Comments
Joined 3 months ago

The Gentoo news post is not about having /bin and /usr/bin as separate directories, which continues to work well to this day (I should know, since that's the setup I have). That configuration is still supported.

The cited post is about having /bin and /usr on separate partitions without using an initramfs, which is no longer guaranteed to work and had already been awfully iffy for a while before January. Basically, Gentoo is no longer jumping through hoops to make sure that certain files land outside /usr, because it was an awful lot of work to support a very rare configuration.

Looks like mid-to-high-level difficulty if you really want to build from source, due to multiple complex interdependent configuration flags that have to match your hardware, and the need to check a kernel option or two. (Based on the Gentoo ebuild for mesa 24.1.2).

Because distros from the Debian family are more popular, any random help article aimed at beginners is likely to assume one of those distros. (If you know how to map from apt to rpm, you're probably not a beginner.) Plus, I don't trust Red Hat, who have a strong influence on Fedora.

(Note that I don't generally recommend my own distro—Gentoo—to newcomers either, unless they have specific needs best served by it.)

Red Hat's interests often don't seem to be aligned with those of the average user. The result is that they push for the adoption of software and conventions that make things better for businesses running RHEL, but worse for almost everyone else. This goes back a long way, and makes me question the long-term suitability of any distro Red Hat is involved in for any user who is not paying them for support. It's the pattern that bothers me, not any single event (and yes, part of that pattern does arise from the fact that they're a for-profit corporation).

It's the sort of thing that many people won't really care about, and if the alternative was Microsoft or even Canonical (which is prone to weird fits of NIH and bad monetization ideas), then fine, I would go with Red Hat. Still, I would recommend a community distro above anything that a corporation has its fingers in.

sudo is already an optional component (yes, really—I don't have it installed). Don't want its attack surface? You can stick with su and its attack surface instead. Either is going to be smaller than systemd's.

systemd's feature creep is only surpassed by that of emacs.

Dude. I actually have sources for most of my installed packages lying around, because Gentoo. Do you know how much space that source code takes up?

Just under 70GB. And pretty much everything but maybe the 10GB of direct git pulls is compressed, one way or another.

That means that even if your distro is big and has 100 people working on development, they would each have to read 1GB or more of decompressed source just to cover the subset of packages installed on my system.

How fast do you read?
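
For fun, the arithmetic above can be sketched out. The 70GB and 100 developers come from the comment; the roughly 3x decompression factor for source tarballs is my assumption:

```shell
# Back-of-the-envelope version of the numbers above. The ~3x expansion
# factor for compressed sources is an assumption; 70 GB of compressed
# source and 100 developers come from the comment.
compressed_gb=70
expansion=3
devs=100
per_dev_mb=$(( compressed_gb * expansion * 1024 / devs ))
echo "${per_dev_mb} MB of decompressed source per developer"
```

That works out to roughly 2GB apiece, which squares with the "1GB or more" figure.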

Gnome and other desktops need to start working on integrating FOSS

In addition to everything everyone else has already said, why does this have anything to do with desktop environments at all? Remember, most open-source software comes from one or two individual programmers scratching a personal itch—not all of it is part of your DE, nor should it be. If someone writes an open-source LLM-driven program that does something useful to a significant segment of the Linux community, it will get packaged by at least some distros, accrete various front-ends in different toolkits, and so on.

However, I don't think that day is coming soon. Most of the things "Apple Intelligence" seems to be intended to fuel are either useless or downright off-putting to me, and I doubt I'm the only one—for instance, I don't talk to my computer unless I'm cussing it out, and I'd rather it not understand that. My guess is that the first desktop-directed offering we see in Linux is going to be an image generator frontend, which I don't need but can see use cases for even if usage of the generated images is restricted (see below).

Anyway, if this is your particular itch, you can scratch it—by paying someone to write the code for you (or starting a crowdfunding campaign for same), if you don't know how to do it yourself. If this isn't worth money or time to you, why should it be to anyone else? Linux isn't in competition with the proprietary OSs in the way you seem to think.

As for why LLMs are so heavily disliked in the open-source community? There are three reasons:

  1. They give inaccurate responses, which can be hilarious, dangerous, or tedious depending on the question asked. A lot of nontechnical people, including management at companies trying to incorporate "AI" into their products, don't realize the answers can be dangerously inaccurate.
  2. Disputes over the legality and morality of using scraped data in training sets.
  3. Disputes over who owns the copyright of LLM-generated code (and other materials, but especially code).

Item 1 can theoretically be solved by bigger and better AI models, but 2 and 3 can't be. They have to be decided by the courts, and at an international level, too. We might even be talking treaty negotiations. I'd be surprised if that takes less than ten years. In the meantime, for instance, it's very, very dangerous for any open-source project to accept a code patch written with the aid of an LLM—depending on the conclusion the courts come to, it might have to be torn out down the line, along with everything built on top of it. The inability to use LLM output for open source or commercial purposes without taking a big legal risk kneecaps the value of the applications. Unlike Apple or Microsoft, the Linux community can't bribe enough judges to make the problems disappear.

If I recall correctly, ext3 is ext2 with journalling on top, so they can't really get rid of ext2 without also ditching ext3.

Well, I can still boot my system without an initramfs (although that isn't just due to the kernel config)—does that count?

Other than that, custom kernels free up a small amount of disk space that would otherwise be taken up by modules for driving things like CANbus, and taught me a whole lot about the existence of hardware and protocols that I will never use.

Delete all the code. Then you'll have no bugs.

To echo others here, you really need to kill the driver. There are a couple of different kernel modules that might be involved, depending on exactly how your touch panel is connected to the rest of the system. Software that has no specific touch support will likely treat your renegade hardware as a mouse, rather than ignoring it.

You may be able to unbind the driver from the device, see this discussion on stackexchange.
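
As a sketch of the unbind approach: the bus, driver, and device names below are placeholders, not real values for any particular laptop. You can find the actual ones by following the /sys/class/input/eventN/device symlinks for your touch panel.

```shell
# Hypothetical sysfs unbind. BUS, DRIVER, and DEVICE are placeholders;
# your touch panel's real names live under /sys/class/input/.
BUS=i2c
DRIVER=i2c_hid_acpi
DEVICE="i2c-ELAN0001:00"
UNBIND="/sys/bus/${BUS}/drivers/${DRIVER}/unbind"
if [ -w "$UNBIND" ]; then
    printf '%s' "$DEVICE" > "$UNBIND"   # as root: the panel stops generating input events
else
    echo "would write ${DEVICE} to ${UNBIND}"
fi
```

The change lasts until the next reboot, or until you write the same name to the driver's matching bind file.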

should still already choose their sexuality

Sexuality is not a choice, any more than your skin colour is a choice.

You don't have a cat, do you?

It stopped being secret a couple of years ago.

Which is why Electron reminds me of a little kid who's just done some extremely difficult but utterly pointless thing.

Websites belong in a browser. If it doesn't work in any random standards-compliant browser, then you should be delivering it as a true native application, not some horrific Fiji-mermaid-esque hybrid.

Wayland has better support for some newer in-demand features, like multiple monitors, very high resolutions, and scaling. It's also carrying less technical debt around, and has more people actively working on it. However, it still has issues with nvidia video cards, and there are still a few pieces of uncommon software that won't work with it.

The only alternative is X. Its main advantage over Wayland is network transparency (essentially it can be its own remote client/server system), which is important for some use cases. And it has no particular issues with nvidia. However, it's essentially in maintenance mode—bugs are patched, but no new features are being added—and the code is old and crufty.

If you want the network transparency, have an nvidia card (for now), or want to use one of the rare pieces of software that doesn't work with Wayland/XWayland, use X. Otherwise, use whatever your distro provides, which is Wayland for most of the large newbie-friendly distros.
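
If you're not sure which of the two your current session is running, there's a quick check (this relies on the session manager setting the variable, so it may be unset on a bare console or unusual setups):

```shell
# Print the current session's display-server type: usually "wayland" or
# "x11" on a desktop, "unknown" if the variable isn't set (e.g. a console).
echo "${XDG_SESSION_TYPE:-unknown}"
```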

Rolling-release Linux distros such as Arch, Gentoo, and OpenSUSE Tumbleweed constantly release the latest updates, but they’re not used in businesses.

So some businesses decided that monolithic releases were more important than being able to get the latest upstream vanilla kernel version, and somehow that's the fault of "all Linux kernel vendors" (including rolling-release distros, since there was no attempt to qualify "all") and not the businesses' decisions about tradeoffs?

It may help to know a bit of history: KDE3 themes could include a bespoke widget style, and Qt3 widget styles were always implemented as executables (you can look at modified versions of the C++ code in the TDE git repository, if you're really bored). So keeping code out of the themes hasn't been important to KDE for at least the past 20 years. If I'm not mistaken, far more things are stylable in current versions of KDE. That doesn't mean that every theme will style all of them, though—you can have codeless styles like the one you found, that make use of the built-ins rather than trying to change All The Things.

What I've always wondered about that one is: why bother forbidding Google but not 'man tar'? 🤨

One thing no one seems to have touched on yet: distros have philosophies—guiding principles that affect what packages they have and how they're presented.

For instance, Debian is strongly open-source-centric. Closed-source software is not normally found in their main repository (even when it would be useful to put it there, like some drivers).

Gentoo, on the other hand, is all about user choice. You're expected to choose for yourself whether you want to use systemd or OpenRC, X or Wayland, which DE (if any) you want to use, and which features you do or don't want compiled into your software. However, Gentoo is quite happy to include closed-source software in the main package repository, because using it is also a choice that some people prefer to make.
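
Those choices mostly get expressed through USE flags; a purely illustrative /etc/portage/make.conf fragment might look like this (the exact flags are examples, not a recommendation):

```
# /etc/portage/make.conf (illustrative only)
USE="X -wayland -systemd elogind"
```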

Red Hat, Arch, and Slackware (to name the remaining major foundational distros) also have their own philosophies. Some descendant distros retain their parents' principles, while others discard them and develop a philosophy of their own (Ubuntu doesn't have Debian's Open Source Uber Alles tendencies, for instance).

Ships with Gentoo by default, since you actually need a nongraphical editor there and nano is easier to learn than vi or emacs.

Does a WinNT clone count as "truly different", though? Maybe Haiku would have been a better choice for that.

Sometimes raising the barrier to entry is a good thing.

Many Electron applications I've run across don't even try to load system settings. For me, that causes accessibility issues related to photosensitivity. For some reason, feeling like I've been stabbed in the eyeball when I try to open a program does not endear me to it or its framework.

No application at all is actually better than something built on Electron, as far as I'm concerned, because then there's a chance that someone, somewhere, might fill in the gap with software I can actually use.

Electron needs to either actually provide the basics of native functionality, or go away.

In the specific case of xz-utils, many lazy people would never have been at risk because the issue is limited to xz-utils 5.6.x (a quite recent version). Not updating provided (unusually) a mitigation in this case.

You sometimes can build software that will work with more than one version of a C library, but less and less software is being written that binds only to C libraries. The key topic you want to look up is probably "ABI stability".
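
For a concrete look at what a binary binds to, ldd lists the shared-library sonames it needs. Here /bin/sh is just a convenient example, and the exact list varies from system to system:

```shell
# List the shared-library sonames a binary depends on.
if command -v ldd >/dev/null 2>&1; then
    ldd /bin/sh || true   # prints "not a dynamic executable" for static binaries
else
    echo "ldd not found on this system"
fi
```

Each soname (e.g. libc.so.6) encodes an ABI version; as long as that stays stable, the binary keeps working across library updates.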

Gentoo + OpenRC + TDE (therefore X) on both a first-gen Threadripper desktop with 96GB RAM and a laptop from 2008 with an Athlon64x2 processor and 2GB RAM. Updating gcc on the laptop can take a while, but it still serves well enough. Plus a couple of headless Pis that are also running Gentoo. Not overly unusual, but I may well have the only Threadripper of that gen running that specific distro and DE combination anywhere in the world, since each individual item is kind of low probability.

Reading between the lines on the gentoo-dev mailing list, I gather that the old system just was not working very well, with friction between the Foundation and the technical side of the distro.

The only thing I miss about fusion 360 is an easy way to add fillets to parts, that can be tricky in openscad. I use chamfers for the most part though, so I don’t miss it much.

There's an OpenSCAD add-on lib called BOSL that offers primitives with built-in fillet options (plus a wide array of other stuff, like premodeled metric bolts). Admittedly it spends a lot of time reinventing the wheel, but I've found it useful from time to time.

That would gum up the belt on the sander, which surely is not responsible for the thumbnails.

Even better: "#1 in the count y" with a damaged area right before the y. Missing letter or just bad kerning? No one will be able to tell until after they taste the pizza! 😜

I have a laptop of that era (2008 HP Pavilion, Athlon64x2, 2GB RAM, 100GB HDD). It runs the Trinity Desktop Environment, which works just as well now as it did when that laptop was a flagship machine. (Updating a Gentoo system running on a machine that old is a bit time-consuming, mind you, but that isn't the DE's fault.)

I've tried several of the other lighter-weight DEs—XFCE, LXDE, Lumina, Gnome2 before it became MATE—but TDE does what I need it to do, and (just as importantly) the development team prefers to work on features and compatibility rather than tearing out things that still work or forcing new paradigms that don't really make sense for my use case onto me. It's there, it's solid, and I've already learned its quirks, so I can save my brain cells for learning useful features in other programs rather than having to figure out where the control for some bit of the GUI ran off to this time. Why would I use anything else? The thing I want most from my DE is for it to stay out of the way and not keep me from using other software.

(Plus, Konqueror may no longer be useful as a web browser, but it's still a better file manager than, say, Thunar, which I found to be a pain in the arse when I tried XFCE.)

Well, you do have to feed, er, update it at least every six months if you don't want to be left with an unholy mess to clean up.

I agree that Gentoo will probably work, as it still has functional i486 support. Be aware that you may be spending a lot of time compiling if you go that route and don't have a second, faster machine to use for distcc or the like.

As for the nvidia card, the proprietary driver won't work for something of that age. Check the supported cards in Nouveau (and maybe even the really old drivers for prehistoric cards). In a pinch, the vesa driver should work. Good luck.

ext4 is still solid for most use cases (I also use it). It's not innovative, and possibly not as performant as the newer file systems, but if you're okay with that there's nothing wrong with using it. I intend to look into xfs and btrfs the next time I spin up a new drive or a new machine, but there's no hurry (and I may not switch even then).

There's an unfortunate tendency for people who like to have the newest and greatest software to assume that the old code their new-shiny is supposed to replace is broken. That's seldom actually the case: if the old software has been performing correctly all this time, it's usually still good for its original use case and within the scope of its original limitations and environment. It only becomes truly broken when the appropriate environment can't be easily reproduced or one of the limitations becomes a significant security hole.

That doesn't mean that shiny new software with new features is bad, or that there isn't some old software that has never quite performed properly, just that if it ain't broke, it's okay to set a conservative upgrade schedule.

Which means that if you have a flatpak with an uncommon library and the dev stops issuing updated flatpaks because they get hit by a bus, you could be SOL with respect to that library. Distro libs are less likely to have this happen because very few distros have a bus factor of 1—there's usually someone who can take over.

I've got a 6TB SATA HDD (also formatted ext4) and while files on it don't always open instantaneously, the pause is only a fraction of a second at most (barely enough to notice). So I'll join the chorus suggesting you check for hardware issues—bad drive, bad or loose cables, or a bad controller on the mobo.

Granted, in a true multiuser environment with an admin who's carefully tailoring /etc/sudoers to make sure everyone has the least possible privileges that will allow them to still do what they need, sudo is more secure. There's no doubt of that.

On a machine that has only one human user who's also the admin, and retains the default sudo-with-user-passwords configuration, su vs sudo is pretty much a wash, security-wise. su requires a second password to get root access, but sudo times out and requires the password to be re-entered while a shell created by su can stay open indefinitely. Which is more easily broken will depend on other details of your situation.
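
That timeout is tunable, for what it's worth; an illustrative /etc/sudoers fragment (always edited via visudo, and the values here are examples rather than advice):

```
# /etc/sudoers, edit with visudo
Defaults timestamp_timeout=0   # re-prompt for a password on every sudo invocation
Defaults passwd_tries=3        # allow three attempts before giving up
```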

(If you're running an incorrectly configured ssh server that allows direct root login with only password authentication, having a root password could contribute to problems, but the correct fix there is to reconfigure the ssh server not to do something so stupid. I hope there's no distro that still ships that way out of the box.)

The method of last resort would be to place the files you want shared on a virtual drive that you pass to the Windows guest, then mount that virtual drive as a loopback device under Linux. I have only done this for very old versions of Windows that can't talk to normally configured current versions of Samba, so I don't know how it will behave with 11—simultaneous access doesn't really work with my Win 98SE guest, but the method is adequate for passing files back and forth.
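
A minimal sketch of that setup, with made-up file names and sizes; the formatting and mounting steps need root, so they're shown as comments:

```shell
# Create a small raw image to attach to the VM as an extra disk.
IMG=shared.img
dd if=/dev/zero of="$IMG" bs=1M count=64   # 64 MiB scratch image
# Then, as root:
#   mkfs.vfat shared.img                   # FAT, so older Windows guests can read it
#   mount -o loop shared.img /mnt/shared
# Attach shared.img to the guest as a raw disk, and umount it on the host
# before booting the guest; simultaneous access is what corrupts things.
ls -l "$IMG"
```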

(The person who suggested wsdd2 is likely right on the money, though—a Samba share won't be browsable without it from modern Windows, although you may still be able to connect to it blind if you know the address. Problem is, in my experience with Samba, the address was usually not quite what I expected...)

Simply put, no one with the necessary skills has come forward and demonstrated the willingness to do the work. No programmer I've ever met enjoys wrestling with other people's crufty old code. It isn't fun, it isn't creative, and it's often an exercise in, "What the unholy fsck was whoever wrote this thinking, and where did I put the 'Bang head here' mousepad?" So getting volunteers to mop out the bilges only happens when someone really wants to keep a particular piece of software working. It's actually more difficult than getting people to contribute to a new project.

So getting rid of X's accumulated legacy cruft isn't impossible, but I suspect someone would need to set up the "Clean up X" foundation and offer money for it to actually happen. (I'm no happier about that than you, by the way.)

They're unlikely to do worse than my laptop from 2008, and it's perfectly usable under Linux (bit of a lag when starting up large programs, that's all). As has already been said, go for a lighter desktop environment (XFCE, LXQt, MATE, TDE) unless these machines were high-spec'd for their era. For the oldest machines, you might want to consider installing Puppy Linux rather than one of the more mainstream distributions, since Puppy specializes in old x86-family hardware.