LeFantome

@LeFantome@programming.dev
0 Post – 987 Comments
Joined 1 year ago

Well, since they missed 24.10 as well, it is going to be 25.04 at this point.

I hate to fanboy but EndeavourOS is awesome.

I read this on my 2013 MacBook Air running EndeavourOS. It runs amazingly well, including video meetings.

I just put one down as I walked away from the couch a few minutes ago. :)

I bought it to carry in my backpack in Europe. Super light. Super handy. And inexpensive enough that I did not worry too much about it being lost, broken, or stolen ( which it never was ).

My advice as well!

It is more important what it can be upgraded to. RAM will be cheaper tomorrow ( historically ).

The problem is the non-upgradable trend in laptops. Ironically I have MacBooks from 2012 with 16 GB in them but much newer ones that are stuck at 8.

I am running Wayland on my 2013 MacBook Air. How old is your hardware?

Other way around

What country?

Nice timing. I just switched a desktop to Alpha 2 earlier today.

What is new in Alpha 3? New functionality? Or just more polish?

Microsoft must make 40% of their revenue off of Azure at this point. I would not be surprised if more than 50% of that is on Linux. Windows is probably down to 10% ( around the same as gaming ).

https://www.kamilfranek.com/microsoft-revenue-breakdown/

Sure, there are people in the Windows division who want to kill Linux and some dev folks will still prefer Windows. At this point though, a huge chunk of Microsoft could not care less about Windows and may actually prefer Linux. Linux is certainly a better place for K8S and OCI stuff. All the GPT and Cognitive Services stuff is likely more Linux than not.

Do people not know that Microsoft has their own Linux distro? I mean, an installation guide is not exactly their biggest move in Linux.

Wayland is the future. It has already surpassed X11 in many ways. My favourite comment on Phoronix was “When is X11 getting HDR? I mean, it was released 40 years ago now.”

That said, the fact that this pull request came from Valve should carry some weight. Perhaps Wayland really is not ready for SDL.

I do not see why we need to break things unnecessarily as we transition. This is on the app side. Sticking with X11 for SDL ( for now ) does not harm the Wayland transition in any way. These applications will still work fine via Xwayland.

Sure, a major release like 3.0 seems like a good place to make the switch. In the end though, it is either ready or it is not. If the best path for SDL is to keep the default at X11 then so be it ( for now ).
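For what it is worth, users and packagers can already override the default per app without waiting on SDL upstream; a minimal sketch ( the game binary name is a placeholder, not from this thread ):

```shell
# SDL2 picks its video backend at startup; the SDL_VIDEODRIVER
# environment variable overrides whatever the built-in default is.
export SDL_VIDEODRIVER=wayland   # or "x11" to stick with the current default
echo "SDL apps launched from this shell will try: $SDL_VIDEODRIVER"
# ./mygame                       # placeholder binary for illustration
```

So even with X11 as the shipped default, distros or users who want Wayland-native SDL can flip it themselves.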

If we are marking the birth of Linux and trying to call it GNU / Linux, we should remember our history.

Linux was not created with the intention of being part of the GNU project. In this very announcement, it says “not big and professional like GNU”. Taking away the adjectives, the important bit is “not GNU”. Parts of GNU turned out to be “big and professional”. Look at who contributes to GCC and Glibc for example. I would argue that the GNU kernel ( HURD ) is essentially a hobby project though ( not very “professional” ). The rest of GNU never really got that “big” either. My Linux distro offers me something like 80,000 packages and only a few hundred of them are associated with the GNU project.

What I wanted to point out here though is the license. Today, the Linux kernel is distributed via the GPL. This is the Free Software Foundation’s ( FSF ) General Public License—arguably the most important copyleft software license. Linux did not start out GPL though.

In fact, the early goals of the FSF and Linus were not totally aligned.

The FSF started the GNU project to create a POSIX system that provides Richard Stallman’s four freedoms, and the GPL was conceived to enforce this. The “free” in FSF stands for freedom. In the early days, GNU was not free as in money; Richard Stallman did not care about that. He made money for the FSF by charging for distribution of GNU on tapes.

While Linus Torvalds has always been a proponent of Open Source, he has not always been a great advocate of “free software” in the FSF sense. The reason that Linus wrote Linux is that MINIX ( and UNIX of course ) cost money. When he says “free” in this announcement, he means money. When he started shipping Linux, he did not use the GPL. Perhaps the most important provision of the original Linux license was that you could NOT charge money for it. So we can see that Linus and RMS ( Richard Stallman ) had different goals.

In the early days, a “working” Linux system was certainly Linux + GNU ( see my reply elsewhere ). As there was no other “free” ( legally unencumbered ) UNIX-alike, Linux became popular quickly. People started handing out Linux CDs at conferences and in universities ( this was pre-WWW, remember ). The Linux license meant that you could not charge for these though and, back then, distributing CDs was not cheap. So being an enthusiastic Linux promoter was a financial commitment ( the opposite of “free” ).

People complained to Linus about this. Imposing financial hardship was the opposite of what he was trying to do. So, to resolve the situation, Linus switched the Linux kernel license to GPL.

The Linux kernel uses a modified GPL though. It is one that makes it more “open” ( as in Open Source ) but less “free” ( as in RMS / FSF ).

Switching to the GPL was certainly a great move for Linux. It exploded in popularity. When the web became a thing in the mid-90’s, Linux grew like wildfire and it dragged parts of the GNU project into the limelight with it.

As a footnote, when Linus sent this announcement that he was working on Linux, BSD was already a thing. BSD was popular in academia and a version for the 386 ( the hardware Linus had ) had just been created. As BSD was more mature and more advanced, arguably it should have been BSD and not Linux that took over the world. BSD was free both in terms of money and freedom. It used the BSD license of course, which is either more or less free than the GPL depending on which freedoms you value. Sadly, AT&T sued Berkeley ( the B in BSD ) to stop the “free” distribution of BSD. Linux emerged as an alternative to BSD right at the moment that BSD was seen as legally risky. Soon, Linux was reaching audiences that had never heard of BSD. By the time the BSD lawsuit was settled, Linux was well on its way and had the momentum. BSD is still with us ( most purely as FreeBSD ) but it never caught up in terms of community size and / or commercial involvement.

If not for that AT&T lawsuit, there may have never been a Linux as we know it now and GNU would probably be much less popular as well.

Ironically, at the time that Linus wrote this announcement, BSD required GCC as well. Modern FreeBSD uses Clang / LLVM instead but this did not come around until many, many years later. The GNU project deserves its place in history and not just on Linux.

Jellyfin has been rock solid for me, especially since the move to .NET 8. Looking forward to this release.

Manjaro because it is a bait and switch trap. Seems really polished and user friendly. You will find out eventually it is a system destroying time-bomb and a poorly managed project.

Ubuntu because snaps.

The rest are all pros and cons that are different strokes for different folks.

An amazing accomplishment and a real feather in the cap for Rust.

When are we going to see this incorporated into the mainline kernel?

I am a pretty big fan of Open Source and have used Linux myself since the early 90’s. Most governments are not going to save money switching to Open Source. At least not within say the term of a politician or an election cycle. Probably the opposite.

This kind of significant shift costs money. Training costs. Consultants. Perhaps hardware. It would not be at all surprising if there are custom software solutions in place that need to be replaced. The dependencies and complexities may be significant.

There are quite likely to be savings over the longer term. The payback may take longer than you think though.

I DO believe governments should adopt Open Source. Not just for cost though. One reason is control and a reduction of influence ( corruption ). Another is so that public investment results in a public good. Custom solutions could more often be community contributions.

The greatest savings over time may actually be a reduction in forced upgrades on vendor driven timelines. Open Source solutions that are working do not always need investment. The investment could be in keeping it compatible longer. At the same time, it is also more economic to keep Open Source up to date. Again, it is more about control.

Where there may be significant cost savings is a reduction in the high costs of “everything as a service” product models.

Much more important than Open Source ( for government ) are open formats. First, if the government uses proprietary software, they expect the public to use it as well and that should not be a requirement. Closed formats can lead to restrictions on what can be built on top of these formats and these restrictions need to be eliminated as well. Finally, open formats are much, much more likely to be usable in the future. There is no guarantee that anything held in any closed format can be retrieved in the future, even if the companies that produced them still exist. Can even Microsoft read MultiPlan documents these days? How far back can PageMaker files be read? Some government somewhere is sitting on multimedia CD projects that can no longer be decoded.

What about in-house systems that were written in proprietary languages or on top of proprietary databases? What about audio or video in a proprietary format? Even if the original software is available, it may not run on a modern OS. Perhaps the OS needed is no longer available. Maybe you have the OS too but licenses cannot be purchased.

Content and information in the public record has to remain available to the public.

The most important step is demanding LibreOffice ( or other open ) formats, AV1, Opus, and AVIF. For any custom software, it needs to be possible to build it with open compilers and tools. Web pages need to follow open standards. Archival and compression formats need to be open.
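As a concrete example, moving existing video archives to those open formats is largely a one-liner; a sketch, assuming an ffmpeg build with SVT-AV1 and Opus support ( the filenames here are placeholders ):

```shell
# Re-encode a video to AV1 ( libsvtav1 ) + Opus audio in a WebM container:
# all royalty-free, openly specified formats.
if command -v ffmpeg >/dev/null 2>&1 && [ -f archive.mp4 ]; then
    ffmpeg -i archive.mp4 -c:v libsvtav1 -c:a libopus archive.webm
    result="converted"
else
    echo "ffmpeg or input file missing; command shown for illustration only"
    result="skipped"
fi
```

The point being that nothing stored this way depends on any one vendor still existing to read it back.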

After all that, Open Source software ( including the OS ) would be nice. It bothers me less though. At that point, it is more about ROI and Total Cost of Ownership. Sometimes, proprietary software will still make sense.

Most proprietary suppliers actually do stuff for the fees they charge. Are governments going to be able to support their Open Source solutions? Do they have the expertise? Can they manage the risks? Consultants and integrators may be more available, better skilled, and less expensive on proprietary systems. Even the hiring process can be more difficult as local colleges and other employers are producing employees with expertise in proprietary solutions but maybe not the Open Source alternatives. There is a cost for governments to take a different path from private enterprise. How do you quantify those costs?

Anyway, the path to Open Source may not be as obvious, easy, or inexpensive as you think. It is a good longer term goal though and we should be making progress towards it.

Except that they are not expecting to merge this into RHEL. They are sending it to CentOS Stream.

The reason this is a first is because AlmaLinux recently decided to stop being a bug for bug clone of RHEL. Good for them. As this shows, they are now actually free to add value.

Rocky Linux cannot ship this patch unless RHEL does. Rocky cannot contribute anything by definition.

The article says this will be “based on Ubuntu” but it will probably actually just be Ubuntu with custom defaults, pre-installed software, and maybe repositories.

This just makes sense in my view. The cost relative to the number of machines they must deploy will be minuscule. If they do not mess with the core system too much, they can outsource almost all the admin and expertise to Canonical in terms of security and packaging. People are saying this will blow up. Why? It does not sound like they are really creating a full distro from scratch. Is Ubuntu not viable?

As for why create a custom version instead of just using actual Ubuntu: again, the cost of customizing a distro can be dramatically less than making even simple configuration changes on every system after the fact. They can standardize what the desktop will look like and set key defaults. They can choose what applications are installed by default. They can remove applications from the repository that they do not want to be installed. They can ensure that localization is done well, etc.

The creator here buys hardware designed to run Linux ( from Tuxedo ) and uses the distro designed to run on that hardware. So no big surprise he thought all hardware problems were solved.

Probably a similar issue with office apps. He is not trying to share Outlook calendars with coworkers or deal with complex spreadsheets sent from colleagues. He is probably not sending many PowerPoints to his boss.

Good on him for reaching out to his community to ensure his opinions are backed by evidence.

I actually do not like Discord and wish they did not use it. That said “absolutely no reason not to use Matrix” is clearly an objectively untrue statement.

Andreas has always been very pragmatic. He will choose the tools he likes best.

Linux does this all the time.

ALSA -> Pulse -> Pipewire

Xorg -> Wayland

GNOME 2 -> GNOME 3

Every window manager, compositor, and DE

GIMP 2 -> GIMP 3

SysV init -> SystemD

OpenSSL -> BoringSSL

Twenty different kinds of package manager

Many shifts in popular software

My concern with this take is that it positions the switch as all downsides. You do not get any of the Linux benefits, just the compromised experience on Windows. You may decide it is not worth it even before switching.

If you are a Linux user and like commercial games, you probably would prefer them to work on Linux.

“Market share” on Linux aligns the vested interest of game makers and Linux game players. If the company thinks it can make money, it will do more to allow games to run, or at least do less to stop them.

I may actually give this a go.

With the addition of non-free firmware in Debian ( so better hardware compatibility ) and the rising popularity of Flatpak and Distrobox ( so access to newer software ), the advantages of Ubuntu are narrowing and the problems with Ubuntu continue to mount. Basing something like Mint directly on Debian makes sense to me.

I have been considering trying Debian with Distrobox / Arch to fill any application gaps. LMDE might fill that void instead.
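The Distrobox side of that setup is only a few commands; a hedged sketch ( the container name, image tag, and example package are my own choices, not from this thread ):

```shell
# Create an Arch Linux container on a Debian host and pull one app from it.
# Requires distrobox plus a container runtime ( podman or docker ).
if command -v distrobox >/dev/null 2>&1; then
    distrobox create --name arch-apps --image docker.io/library/archlinux:latest
    distrobox enter arch-apps -- sudo pacman -S --noconfirm neovim
    # Export the app so it shows up in the host's application menu
    # ( the --app name must match the package's desktop entry ):
    distrobox enter arch-apps -- distrobox-export --app nvim
    status="ran"
else
    echo "distrobox not installed; commands shown for illustration only"
    status="skipped"
fi
```

The exported app then launches from the host like any native one, which is what makes this a practical way to fill gaps in Debian's older repos.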

In my view, the “community” reaction was terrible. Regardless of whether you agree with them or not, the response should be honest and rational. I found the reaction emotional, political, and frankly dishonest. The response was that Red Hat was suddenly going proprietary, that they were violating the GPL, and / or that they were “taking” the work of untold legions of free software volunteers without giving back. They were accused of naked corporate greed by companies whose whole business is based on using Red Hat’s work without paying ( peak hypocrisy ).

Let’s start with what they actually did. Red Hat builds RHEL first by contributing all their code and collecting all the Open Source packages they use into a distribution called CentOS Stream. Once in a while, they fork that and begin building a new release of RHEL. That requires lots of testing, packaging, configuration, documentation, and other work required to make RHEL above and beyond the source code. Previously, they made the output of all this work publicly available. What they did was stop that. So, what does it look like now?

Red Hat now only distributes the RHEL SRPM packages to their subscribers ( which may be paying customers or getting it free ). The support agreement with Red Hat says that, if you distribute those to others, they will cancel your subscription. That is the big controversy.

What you cannot do now is “easily” build a RHEL clone that is guaranteed “bug for bug” compatible with RHEL and use it to compete with Red Hat. You will notice that those making the most noise, like Rocky Linux, want to do that.

So, are Red Hat violating the GPL? No.

First, Red Hat distributes all the code to make RHEL to the actual people they “distribute to” ( to their subscribers ) including everything required to configure and build it. This is everything required by the GPL and more.

Second, less than half of the code in RHEL is even GPL licensed. The text of the GPL itself says that the requirements of the GPL do not extend to such an “aggregate” ( the term the GPL itself uses ). So, Red Hat is going quite above and beyond the licensing by providing their subscribers code to the entire distribution. Yes, beyond.

Third, CentOS Stream remains open to everybody. You can build a Linux distribution from that that is ABI compatible with RHEL. That is what Alma Linux is doing now. Red Hat contributes mountains of free software to the world, both original packages and contributions to some of the most important packages in the free software world. Red Hat is not required to license packages they author under the GPL but they do. They are not required to make all of CentOS Stream available to the public but they do. They are certainly not freeloaders.

But what about this business of cancelling subscriptions? Isn’t that a restriction in violation of the GPL? Not in my view.

The GPL says that you are free to distribute code you receive under the GPL without fear of being accused of copyright violation. It says you can modify the code and distribute your changes. It says you can start a business on top of that code and nobody can stop you. Do RHEL subscribers enjoy all these freedoms? Yes. Yes they do.

What happens ( after the change ) when a RHEL subscriber violates the terms of their subscriber agreement? Well, they cease to be a subscriber. Does this mean they lose access to the source they got from RHEL? No. Does it mean they can be sued for distributing the code? No. I mean, you could risk trademark violation if you sell it I guess.

So, what does it mean that RHEL cancels your subscription? Well, it means they will no longer support you. I hope people see that as fair. It also means they will no longer distribute their software to you IN THE FUTURE.

That is it. That is the outrage.

If you give away the results of Red Hat’s hard work to productize CentOS Stream into RHEL, they stop sending you future releases.

Again, that is it.

You can do whatever you want with what they already sent you. You have all the rights the GPL provides, even for software licensed as MIT, BSD, Apache, or otherwise. Nothing has been taken from you except access to FUTURE Red Hat product ( other than totally for free via CentOS Stream of course ).

Anyway, as you can see, they are the devil and we should hope their business fails. Because, why would we want a commercially successful company to keep contributing as much to Free Software and Open Source as they do?

What I am most excited for in COSMIC is the promise of tiling in a full DE. I like the idea that you can switch back and forth.

I started trying it out a month or so ago. Still pretty incomplete. Promising though.

The fact that it may drive the Rust GUI ecosystem forward is exciting as well. I do not need to see everything re-written in Rust but it will be great if Rust is a realistic option for new app dev.

I have defended Red Hat a fair bit over the past year. Their level of contribution to the community is a big reason why.

It is clear though that their prominence comes with a downside in the paternal and authoritative way that their employees present themselves. Design choices and priorities are made with an emphasis on what works for and what is required for Red Hat and the software they are going to ship. The impact on the wider community is not always considered and too often actively dismissed.

Even some of the Linux centrism perceived in Open Source may really be more about Red Hat. For example, GNOME insists on Systemd. Both projects are dominated by Red Hat. There have been problems with their stewardship of other projects.

To me, this is a much bigger problem than all the license hand-waving we saw before.

I used to be a huge Manjaro fan. There were many ways it let me down, some of which were just bad governance.

The biggest problem though is the AUR. Manjaro ships packages that are older than Arch's. The AUR assumes the Arch versions. Thus, if you use the AUR with Manjaro, your system will break.

It is not a question of if Manjaro will break but when. Every ex-Manjaro user has the same story.

For me, EndeavourOS is everything that Manjaro should be.

People are completely missing the point here. “Who made Red Hat the arbiter of when Xorg should end?”

I would say nobody but perhaps a better answer is all of us that have left the work of maintaining Xorg to Red Hat. All that Red Hat is deciding is when they are going to stop contributing. So little is done by others that, if Red Hat stops, Xorg is effectively done.

Others are of course free to step up. In fact, it may not be much work. Red Hat will still be doing most of the work as they will still be supporting Xwayland ( mostly the same code as Xorg ), libdrm, libinput, KMS, and other stuff that both Xorg and Wayland share. They just won’t be bundling it up, testing it, and releasing it as Xorg anymore.

We will see if anybody steps up.

Well, this is better news than him being completely gone from driver dev which has been the situation for months now. He formally resigned.

Of course, this may have already been in the works and the reason he left to begin with. Either way, good to see him back.

Things seem about to be in a pretty good spot NVIDIA-wise. I do not use any of their recent gear so I do not care directly. That said, it will be good to have NVIDIA working well with Wayland just to remove the substantial amount of noise NVIDIA issues add to that project.

I am not saying “This is the Year of the Linux Desktop”. That said, things languished below 2% for decades and now it has doubled in just over a year. With the state of Linux Gaming, I could see that happening again.

Also, if ChromeOS continues to converge, you could consider it a Linux distro at some point and it also has about 4% share.

Linux could exceed 10% share this year and be a clear second after Windows.

That leaves me wondering: what percentage do we have to hit before it really is “The Year of the Linux Desktop”? I have never had to wonder that before ( I mean, it obviously was not 3% ). Having to ask is a milestone in itself.

I cannot wait until this lands in most distros. So much of the Wayland noise will go away.

If you game on Linux, you are almost certainly using it.

I love your comment. Without thinking, I instinctively agreed with you. Then I was like, “wait, why wouldn’t you just run that software on Windows?”. So now I am wondering if you are predicting such a mass migration off Windows that all these Windows apps become abandonware.

Probably though, part of what you are saying is that it may be easier to maintain compatibility with older software via WINE than on Windows itself.

64 years from now, when ReactOS comes out of beta, it will be great for that as well.

What an odd article. First, the author goes to great lengths to assert that “Linux IS UNIX” with pretty circumstantial evidence at best. Then, perhaps to hide the fact that his point has not been proved, he goes through the history of UNIX, I guess to reinforce that Linux is just a small piece of the UNIX universe? Then, he chastises people working on Linux for not staying true to the UNIX philosophy and original design principles.

Questions like “are you sure this is a UNIX tool?” do not land with the weight he hopes, as the answer is almost certainly “No. This is not a ‘UNIX’ tool. It is not trying to be. Linux is not UNIX.”

The article seems to be mostly a complaint that Linux is not staying true enough to UNIX. The author does not really establish why that is a problem though.

There is an implication, I guess, that the point of POSIX and then the UNIX certification was to bring compatibility to the universe of diverging and incompatible Unices. While I agree that fragmentation works against commercial success, this is not a very strong point. Not only was the UNIX universe ( with its coherent design philosophy and open specifications ) completely dominated by Windows in the market, but it was also completely displaced by Linux ( without the UNIX certification ).

Big companies found in Linux a platform that they could collaborate on. In practice, Linux is less fragmented and more ubiquitous than UNIX ever was before Linux. Critically, Linux has been able to evolve beyond the UNIX certification.

Linux does follow standards. There is POSIX of course. There is the LSB. There is freedesktop.org. There are others. There is also only one kernel.

Linux remains too fragmented on the desktop to displace Windows. To address that, a common set of Linux standards is emerging, including Wayland, PipeWire, and Flatpak.

Wayland is an evolution of the Linux desktop. It is a standard. There is a specification. There is a lot of collaboration around its evolution.

As for “other” systems, I would argue that compatibility with Linux will be more useful to them than compatibility with “UNIX”. I would expect other systems to adopt Wayland in time. It is already supported on systems like Haiku. FreeBSD is working on it as well.

Imagine how much less would get done overall and how many fewer people would participate if we did not let people work on what they wanted to work on.

The only choice left would be to contribute or not and more people would choose not to contribute (probably the choice you have made).

I realize that the major point of GIMP 3 is the port to GTK3. That said, I feel like colour spaces are what people have been waiting for and probably the most significant deficiency that keeps GIMP from being treated as a professional tool.

If they are really this close, why not set the GIMP 3 release date for when colour management is ready?

Non-destructive editing will be huge as well. GIMP 3 is really going to be a crazy leap forward. It is going to be amazing to finally get access to all this work that has been walled off for decades.

The bug situation sounds terrible. Honestly though, they should just get 3 out and then make bug fixing the number one job until it gets into better shape.

Not only is it a small team but right now there are basically two different projects ( 2 and 3 ). With only one code base, perhaps the pace of progress can improve.

Hopefully the move to GTK4 is easier.

Depends on your point of view.

Their motivation was “we have a vision for our UX and GNOME won’t let us do it — so let’s write our own.”

It was only after deciding to write their own that they decided to write it in Rust.

They like Rust, but that is not what motivated them to make COSMIC.

NVIDIA is likely to be stable on Wayland next month. If you wait for other people to ship you code, it will arrive with the fall releases ( eg. Ubuntu 24.10 ).

Xfce is targeting 4.20 for full Wayland support. If you use Xfce 4.20 on kernel 6.9, you may break the Internet.
