duncesplayed

@duncesplayed@lemmy.one
3 Posts – 238 Comments
Joined 1 year ago

Just an FYI that at this rate it's only going to take another 115 years before Linux has 100% market share.

Unfortunate title, but it's a good video and some good thoughts from both Linus and AC.

Interestingly, this video is just 2 years after Linus and Alan Cox had a bit of a blowup that caused AC to resign from the TTY subsystem. And even more interestingly, the blowup was specifically about the very topic they're discussing: not breaking userspace and keeping a consistent user experience. Linus felt AC had broken userspace unnecessarily and was too proud/stubborn to revert the change and save the user experience. AC felt Linus was trivializing how hard "just fix it" was going to be, and threw up his hands and resigned.

I was curious if they were still on good terms after that, and it's nice to see that they were. For newcomers to Linux, Alan Cox used to be (in the 1990s) the undisputed Riker to Linus' Picard, the #2 in command, ready to take over all of Linux at a moment's notice. As we got into the 2000s, that changed, and this video (2011) was from the middle of a chaotic time for him. In 2009 he quit Red Hat, joined Intel two years later, quit that shortly afterward, and just a few years ago stopped kernel development permanently.

Holy shit. If I understand correctly, the trains were programmed to use their GPS sensors to detect if they were ever physically moved to an independent repair shop. If they detected that they were at an independent repair shop, they were programmed to lock themselves and give strange and nonsensical error codes. Typing in an unlock code at the engineer's console would allow the trains to start working normally again.

If there were a corporation-sized mirror, I don't know how NEWAG could look at itself in it.

You just don't appreciate how prestigious it is to get a degree from Example U.

I'm going to reframe the question as "Are computers good for someone tech illiterate?"

I think the answer is "yes, if you have someone that can help you".

The problem with proprietary systems like Windows or OS X is that that "someone" is a large corporation. And, in fairness, they generally do a good job of looking after tech illiterate people. They ensure that their users don't have to worry about how to do updates, or figure out what browser they should be using, or what have you.

But (and it's a big but) they don't actually care about you. Their interest in making sure you have a good experience ends at a dollar sign. If they think what's best for you is to show you ads and spy on you, that's what they'll do. And you're in a tricky position with them because you kind of have to trust them.

So with Linux you don't have a corporation looking after you. You do have a community (like this one) to some degree, but there's a limit to how much we can help you. We're not there on your computer with you (thankfully, for your privacy's sake), so to a large degree, you are kind of on your own.

But Linux actually works very, very well for tech illiterate people if you have a trusted friend/partner/child/sibling/whoever who can help you out now and then. The general experience of browsing around, editing documents, editing photos, etc., works very much the same way as it does on Windows or OS X. You will probably be able to do all that without help.

But you might not know which software is best for editing photos. Or you might need help with a specific task (like getting a printer set up), and having someone to fall back on will give you a much better experience.

This is the major reason why maintainers matter. Any method of software distribution that removes the maintainer is absolutely guaranteed to have malware. (Or if you don't consider 99% of the software on the Google Play Store and the App Store to be "malware", it's at the very least hostile to and exploitative of users.) We need package maintainers.

If I can try to summarize the main findings:

  1. Computer-generated (e.g., Stable Diffusion) child porn is not criminalized in Japan, and so many Japanese Mastodon servers don't remove it
  2. Porn involving real children is removed, but not immediately, as it depends on instance admins to catch it, and they have other things to do. Also, when an account is banned, the Mastodon server software does not send out a "delete" for all of their posted material (which would signal other instances to delete it)

Problem #2 can hopefully be improved with better tooling. I don't know what you do about problem #1, though.

You mean Linux isn't going to have 200% market share one day? Shit, I'm starting to think my calculations may not have been totally serious.

Ironically, neither GNU nor Linux has a clipboard (well, GNU Emacs probably has like 37 of them for some reason). "Primary selection" (the other clipboard that people don't tell you about) started off on X11, which of course had to be implemented by XFree86, which became Xorg, and was then copied (ha ha) by non-X-related software like gpm and by toolkits like GTK when running on Wayland.
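
If you want to play with the two of them yourself, xclip (assuming you have it installed) can target either selection:

echo "middle-click me" | xclip -selection primary     # what middle-click pastes
echo "ctrl-v me" | xclip -selection clipboard         # what Ctrl+V pastes
xclip -selection clipboard -o                         # print the clipboard's current contents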

Anne Frank advertising baby clothes before discussing the horrors of the Holocaust

Wow, that is amazingly inhumane.

My first thought is they're necessarily making characters who aren't people. A person who has lived through the Holocaust just cannot cheerfully peddle baby clothes. I don't mean that it's physically not possible because she's dead: I mean in terms of the human psyche, a person just flat-out psychologically could not do that. A young boy who succumbed to torture and murder psychologically cannot just calmly narrate it.

So obviously, yeah, it's quite a ghoulish and evil thing to take what used to be a person, a figure who has been studied and mourned because of their personhood, because we can relate to them as a person, and just completely strip them of their personhood and turn them into an inhuman object.

But then that leads me to the question of: who's watching these things, and why? The article says they got quite a lot of views. Is it just for shock value? I don't quite understand.

You're just not cloud-native enough to understand how revolutionary it is to run GNOME on Fedora.

The "tooling" argument is kind of backwards when we're in the kernel. The package manager is not allowed to be used. Even the standard library is not allowed to be used. Writing code free of the standard library is kind of new in the Rust world and getting compiler support for it has been one of the major efforts to get Rust into the kernel. Needless to say tools around no-stdlib isn't as robust as in the user world.

Has reddit not already been scraped? With all of that information exposed bare on the public Internet for decades, and apparently so valuable, I find it hard to believe that everybody's just been sitting there twiddling their thumbs, saying "boy I sure hope they decide to sell us that data one day so that we don't have to force an intern to scrape it for us".

Yes, with some big "if"s. NextCloud can work very well for a large organization if that large organization has a "real" IT department. I use "real" to describe how IT departments used to work 20+ years ago, where someone from IT was expected to be on call 24/7, they built and configured their own software, did daily checks and maintenance, etc. Those sorts of IT departments are rare these days. But if they have the right personnel, it can definitely be done. NextCloud can be set up with hot failovers and fancy stuff like that if you know what you're doing.

When you power on a computer, before any software (any operating system) has a chance to run, there's "firmware" (kind of similar to software, except stored directly in the motherboard) that has to get things going (this stage is called "Platform Initialization"). Generally, the Platform Initialization firmware has two jobs: (1) to detect (and maybe initialize) some hardware; and (2) to find and boot the operating system.

We have a standard interface for #2 now, which is called UEFI. But #1 has always been sort of a mysterious black box. It necessarily has to be different for every chipset/every motherboard, and manufacturers never really saw much reason to open source it. The major community-driven open source project for doing #1 is called "coreboot". Because it requires a new implementation for every chipset/motherboard, and these are generally not documented (and may require some reverse-engineering of the hardware), coreboot has very, very limited support.

So what AMD is open sourcing here is a collection of 3 C libraries which they will be using in all of their firmware, going forward. These libraries are not chipset/motherboard-specific (you still need custom code for each motherboard) and do not implement UEFI (you would still need to implement UEFI/bootloader on top of it), but they're helper functions that do a lot of what's needed to implement firmware. I just took a cursory look through the source code, but I saw a lot of code in there for detecting RAM DIMMs (how much RAM, what kind of RAM, etc.), which is useful code.

The fact that AMD is going to use this in their own firmware, and also make it available for coreboot under an MIT licence, means that coreboot may* have a much easier time in the future supporting AMD motherboards.

* we will see

No, that would be "too egotistical" (in Linus' own words). But he can have his friend who runs an FTP server completely ignore his wishes to have it named "Freax" and name the directory "linux" instead.

The principled "old" way of adding fancy features to your filesystem was through block-level technologies, like LVM and LUKS. Both of those are filesystem-agnostic, meaning you can use them with any filesystem. They just act as block devices, and you can put any filesystem on top of them.

You want to be able to dynamically grow and shrink partitions without moving them around? LVM has you covered! You want to do RAID? mdadm has you covered! You want to do encryption? LUKS has you covered! You want snapshotting? Uh, well...technically LVM can do that...it's kind of awkward to manage, though.

Anyway, the point is, all of them can be mixed and matched in any configuration you want. You want a RAID6 where one device is encrypted split up into an ext4 and two XFS partitions where one of the XFS partitions is in RAID10 with another drive for some stupid reason? Do it up, man. Nothing stopping you.
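
To make that concrete, here's a minimal sketch of stacking LUKS on top of LVM (device names and sizes are made up; don't run this on a disk you care about):

pvcreate /dev/sdb                        # turn the disk into an LVM physical volume
vgcreate vg0 /dev/sdb                    # group it into a volume group
lvcreate -L 20G -n secrets vg0           # carve out a logical volume
cryptsetup luksFormat /dev/vg0/secrets   # layer LUKS encryption on top of the LV
cryptsetup open /dev/vg0/secrets secrets # expose the decrypted block device
mkfs.ext4 /dev/mapper/secrets            # and any filesystem goes on top of that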

For some reason (I'm actually not sure of the reason), this stagnated. Red Hat's Stratis project has tried to continue pushing in this direction, kind of, but in general, I guess developers just didn't find this kind of work that sexy. I mentioned LVM can do snapshotting "kind of awkward"ly. Nobody's made it as sexy and easy to do as the cool new COWs have.

So, ZFS was an absolute bombshell when it landed in the mid 2000s. It did everything LVM did, but way way way better. It did everything mdadm did, but way way way better. It did everything XFS did, but way way way better. Okay, it didn't do the LUKS stuff (yet), but that was promised to be coming. It was Copy-On-Write and B-trees everywhere. It did everything that (almost) every block-level tool and filesystem ever made had done, but better. It was just...the best. And it shit all over that block-layer stuff.

But...well...it needed a lot of RAM, and it was licensed in a way such that Linux couldn't get it right away, and when Linux did get ZFS support, it wasn't the native, in-the-kernel kind of thing that people were used to.

But it was so good that it inspired other people to copy it. They looked at ZFS and said "hey why don't we throw away all this block-level layered stuff? Why don't we just do every possible thing in one filesystem?".

And so BtrFS was born. (I don't know why it's pronounced "butter" either).

And now we have bcachefs, too.

What's the difference between them all? Honestly mostly licensing, developer energy, and maturity. ZFS has been around for ages and is the most mature. bcachefs is brand spanking new. BtrFS is in the middle. Technically speaking, all of them either do each other's features or have each other's features on their TODO list. LUKS in particular is still very commonly used because encryption is still missing in most (all?) of them, but will be done eventually.

reddit does not pay its content creators anything. Unlike every other big tech social media platform, they also do not pay their moderators anything. They require all moderators to do all the work for them, for free. On top of that, they blast ads everywhere. And they sell sponsored posts and upvote blocks. And they somehow tricked users into believing that giving reddit money was an appropriate way to reward a different user for making a good post.

And despite all of that, they are unprofitable? Something doesn't smell right.

If go is "round chess", I feel like chess should be "pointy chess".

This is what I don't get. Rewriting COBOL code into Java code is dead easy. You could teach a junior dev COBOL (assuming this hasn't been banned under the Geneva Convention yet) and have them spitting out Java code in weeks for a lot cheaper.

The problem isn't converting COBOL code to Java code. The problem is converting COBOL code to Java code so that it cannot ever possibly have even the most minute difference or bug under any possible circumstances ever. Even the tiniest tiniest little "oh well that's just a silly little thing" bug could cost billions of dollars in the financial world. That's why you need to pay COBOL experts millions of dollars to manage your COBOL code.

I don't understand what person looked at this problem and said "You know what never does anything wrong or makes any mistake ever? Generative AI"

In a certain light, you could argue that Linus doesn't really have any control at all. He doesn't write any code for Linux (hasn't in many years), doesn't do any real planning or commanding or managing. "All" he does is coordinate merges and maintain his own personal git branch. (And he's not alone in that: a lot of people maintain their own Linux branches). He has literally no formal authority at all in Linux development.

It just so happens that, by a very large margin, his own personal git branch is the most popular and trusted in the world. People trust his judgment for what goes in and doesn't go in.

It's not like Linux development stops because Linus goes offline (or goes on vacation or whatever). People keep writing code and discussing and testing and whatnot. It's just that without Linus's discerning eye casting judgment on their work, it doesn't enter the mainstream.

Nothing will really get slowed down. Whether something officially gets labelled by Linus as "6.8" or "6.whatever" doesn't really matter in the big picture of Linux development.

When Elon Musk wants to see your top 10 most salient lines of code.

And not all GNU is Linux! Beyond the world famous GNU Hurd, there's also Debian GNU/kFreeBSD, and Nexenta (GNU/Illumos, which is the OpenSolaris kernel).

I think the most esoteric of them, though, is GNU Darwin (GNU/XNU). Darwin is the open source parts of OS X, including its kernel, XNU. There used to be an OpenDarwin project to try to turn Darwin into an actual independent operating system, but they failed, and were superseded by PureDarwin, which took a harder line against anything OS X getting into the system. GNU Darwin took it one step further and removed just about all of Darwin (except XNU) and replaced it with GNU instead.

Find can actually do the sed itself if you don't want to use a subshell and a shell loop.

find . -type f -iname '*.json' -exec sed -i 's/"verified":"10"/"verified":"11"/' '{}' ';'
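
And if you're changing a lot of files, ending the -exec with + instead of ';' hands sed a batch of file names per invocation instead of forking once per file:

find . -type f -iname '*.json' -exec sed -i 's/"verified":"10"/"verified":"11"/' '{}' +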

I was saying Boo-urns.

To the best of my knowledge, this "drives from the same batch fail at around the same time" folk wisdom has never been demonstrated in statistical studies. But, I mean, mixing drive models is certainly not going to do any harm.

I love the arrogant, confidently incorrect bit at the end of the blog.

  • The comments in the code are wrong
  • The official documentation is wrong
  • The manpage is wrong
  • Every blog article ever written is wrong
  • Linus Torvalds is wrong
  • Everyone who knows what they're talking about is wrong
  • No, I don't know how to read kernel code. Why do you ask? You're wrong
  • Shut up. You're wrong

chess.com has cute profile pictures for its bots. Bullying "Martin" is just scientifically more fun than bullying "Stockfish Level 1".

Nobody is going to move a dotfile as a breaking change in any established software

We have oodles of counterexamples to this. GIMP did it, Blender did it, DOSBox did it, LibreOffice did it, Skype did it, Wireshark did it, ad nauseam. It's not really as big a deal as you make it out to be (or a big deal at all). You have a transitional period where you look for config files in both locations, and mark the old location as obsolete.
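
The transitional lookup is only a few lines, too. A sketch of the usual pattern in shell (app name and paths are made up):

# Prefer the XDG location; fall back to the legacy dotfile if that's all there is
config="${XDG_CONFIG_HOME:-$HOME/.config}/myapp/config"
if [ ! -e "$config" ] && [ -e "$HOME/.myapprc" ]; then
    echo "myapp: ~/.myapprc is deprecated, please move it to $config" >&2
    config="$HOME/.myapprc"
fi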

Isn't it Mac OS X 14? I.e., Mac OS 10.14?

Is there a lemmy community for self-hosted GPTs, like LocalLLaMa from reddit?

Out of curiosity, were you born roughly in the early 1990s? I ask because I could have written very much the same stuff as you, except shifted back 10 years. By the year 2000, in my view, the Internet was already locked down and was a completely shitty version of what I felt "the real Internet" was like. Technology in the late 1980s and early 1990s was (from my view) hopeful and optimistic, constantly getting better (computers doubling in speed and memory and getting cheaper every year), and by the early 2000s, it was just shitty AIM and MSN Messenger and Windows-only KaZaA garbage, with MySpace and shitty centralization like that. MySpace completely shit all over the early webrings.

I've come to realize that it's always been shitty. That's my conclusion after going on a nostalgia trip and watching old Computer Chronicles shows and reading old computer articles from my golden age, now through adult glasses. I just didn't understand all the politics and power manoeuvres at the time because I was a stupid kid who just saw cool things. Looking now at all the cool and exciting and great stuff that was happening in the late 1980s and early 1990s that I thought was so wonderful, I realize that it was mostly just shitty attempts by shitty power-hungry companies trying to lock down something cooler that had happened earlier.

The difference in the early days, I think, isn't that companies didn't want to control us and make our lives as terrible as possible. They just couldn't, because computers weren't powerful enough yet.

It's in Proverbs 11:20

The C++ developers are an abomination to the Lord,
But the Rustaceans in their Rust-based OSes are His delight.

Before getting home Internet access, my "online" world was BBSes. Local BBSes, of course, because we couldn't dial long distance without repercussions. My favourite demogroup was Future Crew, and I hated that it took months (or sometimes forever) for their releases to reach our local BBSes. Even with Fidonet, a lot of BBSes would only sync with remote nodes a couple of times a month to save money, so it was slow going.

I remember a few days after we got home Internet access, I was eating breakfast and I suddenly had a thought. Wait...doesn't Future Crew's BBS run an FTP server? I think I saw them mention that in one of their nfo files. If they have an FTP server, I could just...connect to it. Like, directly, myself, from my house.

The implications of this were so strong that I started shaking. I couldn't finish my breakfast.

I ran downstairs and booted up the computer and typed in ftp.mpoli.fi and...there it was. Future Crew's home BBS was just available for anyone in the world to connect to. I navigated around a little bit and found a song I hadn't seen before on any of the local BBSes. I started the download, and it worked, at a blazing 3 kB/s. I remember I just started crying at the implications of what a worldwide network meant.

Educator here. This is called "discovery learning". (The alternative to discovery learning, "direct instruction", would be if someone had told OP about these permissions before OP got themselves into a pickle)

When discovery learning is successful, it leads to better learning outcomes. Compared to direct instruction, you learn the material more deeply and will have better recall of the material, often for the rest of your life. The downsides to discovery learning are that it's very time-consuming, very frustrating, and many students will just fail (give up) before learning is completed.

Consider yourself one of the lucky ones, OP.

Let's not rule out Æ

He's not completely wrong with the powermod=landed gentry analogy (and the implied spez=king analogy, for that matter). People have been (weakly) protesting and trying (not very successfully) to leave over the powermod situation for years, and it's true that the powermods aren't friends of ours.

But he seems to be suggesting that the protests are just the actions of the powermods, as if other users (and smaller mods) aren't also leaving. I think he's going to be disappointed when he discovers that the peasantry are also upset. They just don't have his ear, because he's so removed from them, so all he hears are the powermods.

Can't have a privilege escalation when there are no privileges, since every process runs in the same address space in ring 0.

This is what I'm expecting. A year from now someone will mention "reddit" to me and I'll be like "that's still around?" and I'll check it out and it's just turned into TikTok challenges.