digdilem

@digdilem@lemmy.ml
1 Post – 103 Comments
Joined 12 months ago

No - it's the kernel image - the actual operating system, rather than a service that runs on top of it.

If you just want to restart your ssh service after updating the packages, then "systemctl restart sshd" is all that's needed, although you should probably reboot whenever the package manager suggests it, as a general good habit.
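For instance (assuming Debian/Ubuntu conventions here - the "reboot required" flag file is specific to those distros), the restart plus a post-update check looks like:

```shell
# Restart just the SSH service so it picks up the updated binary;
# existing sessions stay connected.
systemctl restart sshd

# Debian/Ubuntu drop a flag file when an update (kernel, libc, etc.)
# wants a full reboot - worth checking after package upgrades.
if [ -f /var/run/reboot-required ]; then
    echo "Reboot recommended"
fi
```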

Well, that's this afternoon planned then.

Because choosing a distro to begin with isn't easy. Ask ten people and you'll get eleven suggestions.

Remember when Word and Excel Autosave did what you expected it to?

I stand corrected that Redhat are no longer publicly traded - I was misled by stock charts showing prices by month and not including the year.

But that muddies your point even further, doesn't it? We can't see RHEL's value, nor even Redhat's. (And you did mix them up!)

Rocky is only comparable to Debian in terms of the licensing model, but IANAL. Both are owned by a non-profit organisation that can't be bought.

Would Rocky survive? Nobody knows - but that's why I said I think Rocky and Alma will pool resources with Fedora in the interests of all. R&A could just rebuild downstream of Fedora and invent their own release cycle, so they may do that.

When there is a war, there are war crimes - it's not surprising, it's not new and it's not special. Every single time, regardless of nationality, race, creed, invader or defender. Every single time. If you give a lot of people guns, teach them to de-humanise the enemy and then put them through unimaginable stresses, it's inevitable that some will do bad things. Those who orchestrate such actions and trigger events like this know, accept and want these atrocities to achieve their own ends.

I respect Paul Biggar for having an opinion and writing a well researched and unimpeachable personal blog about it. Why should any of us who hold feelings have to suppress them?

It's sad that he's become yet another victim of this unwinnable war, it's even sadder that he won't be the last.

Try it in enterprise where you have automated systems that deploy alert sensors and they instantly go off because each mount is 100% full.

Here's one that annoyed me this week. Juniper - the enterprise router people - require you to have an account to do their training. That's a web account that won't let you use more than 20 chars in your password, and won't let you paste a password.

Not 2fa, I'll grant you, but it's from the same bucket of dumb insecure shit that you're talking about.

I think there's a core difference between loot boxes, which are out-and-out gambling, and gameplay. Both can be addictive, but they have very different consequences.

Gameplay addiction steals your time and maybe your social life, but that's it.

Gambling addiction also steals your money. And when that's gone, drives you to extremes trying to find more.

The way I help, as a Sysadmin, is primarily by using foss software in my job and feeding back with bug reports, issues and so on. I've raised several hundred issues on Github this way, and try to do them concisely, accurately and with as much relevant information as I can.

Anyone else find themselves singing this headline to the tune of The House of the Rising Sun?

Getting fed up strimming our 4 acre, very steep field.

I looked at remote control mowers. At the time they were all well over £6k, so I thought I'd try building one. Well, I've done it and it works well, but it's taken three years and cost over a grand so far in parts.

Ever read some of the Microsoft forums? Just as many people seeking help there - the only difference is we don't have an over-eager paid employee replying with scripted answers which don't help.

Linux is as simple or as complicated as you want it to be. Most of the mainstream distros "just work" on most hardware. I've installed Mint, Rocky, Ubuntu and Debian on laptops and desktops for relatives, including those who aren't remotely technically gifted. It was as easy as, or easier than, Windows to install, set up and get running. The users are happy - they can use cheaper hardware (and don't need to upgrade a perfectly good laptop for Windows 11) and are entirely free of software costs and subscriptions. Everything works and things don't break - just like Windows and Macs. Most people just want their computer to turn on and let them run stuff. All three do that equally well.

I've also installed Linux on hardware clusters costing hundreds of thousands of pounds, and that definitely wasn't a simple or quick process - but that's the nature of the task. Actually, installing the base OS was probably the easiest part. Windows just isn't an option for that.

You ask a fair question - you're not unique in your viewpoint, and that's probably hampered take-up more than anything else. What makes you a bit better than most is that you actually ask the question and appear to be open to the answers.

Because it was also the best show of 2023?

I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.

In the event of the complete loss of a sole maintainer, it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and change potentially thousands of packages that rely upon the project as a dependency.

Maybe, before a library or any software gets accepted into a distro, the distro should do more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?

The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I've never tried to get a distro to accept my software.

Nothing I've seen would completely avoid risk. Blackmail of an existing developer is not impossible to imagine. Even in this case, perhaps the new developer in xz started with pure intentions and got personally compromised later? (I don't seriously think that is the case here, though - this feels very much state-sponsored and very well planned.)

It's good we're asking these questions. None of them are new, but the importance is ever increasing.

That's great optics.

Not sure how workable it is to define "confidential information" without having already viewed the content. But the whole thing isn't very clever on a technical level anyway. Technically competent people will always find a way around such censorship.

This is a common thing one needs to do. Not all Linux GUI tools are perfect, and some calculate sizes differently (1000 vs 1024 bytes per unit soon mounts up to big differences). Also, if you're running as a user, you're not going to be seeing all the files.

Here's how I do it as a sysadmin:

As root, run:

du /* -shc |sort -h

"disk usage for all files in root, displaying a summary instead of listing all sub-files, and human-readable numbers, with a total. Then sort the results so that the largest are at the bottom"

Takes a while (many minutes, up to hours or days if you've slow disks, many files or remote filesystems) to run on most systems and there's no output until it finishes because it's piping to sort. You can speed it up by omitting the "|sort -h" bit, and you'll get summaries when each top level dir is checked, but you won't have a nice sorted output.

You'll probably get some permission errors when it goes through /proc or /dev.

You can be more targeted by picking some of the common places, like /var - here's mine from a Debian system; takes a couple of seconds. I'll often start with /var, as it's a common place for systems to start filling up, along with /home.

root@scrofula:~# du /var/* -shc |sort -h
0       /var/lock
0       /var/run
4.0K    /var/local
4.0K    /var/mail
4.0K    /var/opt
168K    /var/tmp
4.1M    /var/spool
5.5M    /var/backups
781M    /var/log
787M    /var/cache
8.3G    /var/www
36G     /var/lib
46G     total

Here we can see /var/lib has a lot of stuff in it, so we can look into that with du /var/lib/* -shc | sort -h - it turns out mine has some big databases in /var/lib/mysql and a bunch of docker stuff in /var/lib/docker, not surprising.

Sometimes you just won't be able to tally what you're seeing with what you're using. Often that's because a file has been deleted or truncated while a process still holds it open, which prevents the OS from reclaiming the space. That generally sorts itself out with various timeouts, but you can try to find the culprit with lsof, or, if the machine isn't doing much, a quick reboot.
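The lsof route can be as simple as this (assuming lsof is installed; +L1 is its flag for listing open files with a link count of zero, i.e. deleted but still held open):

```shell
# List open files that have been deleted (link count 0). The SIZE/OFF
# column shows space that will only be freed once the holding process
# closes the file or exits.
lsof +L1
```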

And it was a good design - its universal (aha) adoption proves that.

Those of us old enough to remember the pain of using 9- and 25-pin serial leads and having to manually set baud rates and protocols, along with LPT and external SCSI and manufacturer-specific sockets, probably agree this was a problem that needed solving, and USB did solve it.

I like the energy, but this doesn't qualify as "lesser known".

I feel really bad for everyone involved - customers and staff. The human cost in this is huge.

Yes, there's a lot of criticism of backup strategies here, but I bet most of us who deal with this professionally know of systems that would also be vulnerable to malicious attack - and those are only the shortcomings we know about. Audits and pentesting are great, but not infallible, and one tiny mistake can expose everything. If we were all as good as we think we are, ransomware wouldn't be a thing.

htop on our VMs and clusters, because it's in all the repos, it's fast, it's configurable by a deployable config file, it's very clearly laid out and it does everything I need. I definitely would not call it bloated in any way.

My config includes network and I/O traffic stats, and breaks down CPU load by type - this in particular makes iowait very easy to spot when finding out why something's racking up big sysloads. Plus, it looks very impressive on a machine with 80 cores...

My brain can't parse top's output very well for anything other than looking for the highest cpu process.

But - ymmv. Everyone has a preference and we have lots of choice, it doesn't make one thing better or worse than another.

We are writing to inform you that we have discovered two Home Assistant integration plug-ins developed by you ( https://github.com/Andre0512/hon and https://github.com/Andre0512/pyhOn ) that are in violation of our terms of service

Did the guy explicitly agree to their Terms of service? If not, how can he be in breach of them?

cease and desist all illegal activities

What illegal activities exactly?

Feels like unenforceable scare tactics, but IANAL.

Because it triggers the tribal instinct, innit.

"I use A, so A must be better than B. Otherwise I'm wrong, and I don't like that."

The reality, of course, is that there is no "Best distro" for all use cases, and personal choice is absolutely a qualifier in defining those use cases. If your personal requirement is for a neon pink desktop and rather aged theming aimed at little girls, then you've absolutely chosen "The best distro" for you and don't let anyone tell you differently.

Reading that made me sad, angry and scared. Great article, but terrifying.

Depends on what lists you add to pihole (or adguard).

The default lists for both are primarily advert- or tracking-related, and very safe to keep. The only time I whitelist is when I'm following some kind of shopping deal that uses a tracker. Most Linux-related things are free from that.

Some nice evil ideas there!

Only one of the ~250 Linux machines I maintain has a GUI.

Others have answered your question - but it may be worth pointing out the obvious: backups. Annoyances such as you describe are much less of a stress if you know you're protected - not just against accidental erasure, but malicious damage and technical failure.

Some people think it's a lot of bother to do backups, but it's very easily automated with any of the very good free tools around (backup-manager, someone's mentioned timeshift, and about a million others). A little time spent planning decent backups now will pay you back one day in spades; it's a genuine investment of time. And once set up, with some basic monitoring to ensure they're working and the odd manual check once in a blue moon, you'll never be in this position again. Linux comes out ahead here in that the performance impact from automated backups can be prioritised so as not to impact your main usage, if the machine isn't on all the time.
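As a sketch of how little is needed (the paths here are made-up examples, and real tools like backup-manager or timeshift do far more - rotation, pruning, monitoring):

```shell
#!/bin/sh
# Minimal backup sketch: tar a source directory into a dated archive.
# Both paths in the example call below are illustrative, not a
# recommendation.
backup() {
    src=$1
    dest=$2
    mkdir -p "$dest"
    # -C keeps the archive paths relative to the source's parent dir.
    tar -czf "$dest/backup-$(date +%F).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# Example (hypothetical paths): backup /home/me /mnt/backup
```

Dropped into cron with something like `0 3 * * * ionice -c3 nice -n19 /usr/local/bin/backup.sh`, the ionice/nice pair is what keeps it from treading on interactive use.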

Mate, why wait?

Run to Linux, don't just run from Windows.

I would not encourage anyone to join the EL universe as I don't consider it as stable as others.

TL;DR: Redhat's being absorbed into IBM and they don't care about RHEL. RHEL (in my view) is dying a slow death. Without RHEL, there is no Fedora or CentOS Stream. There'd also be no Rocky or Alma, as things currently stand.

(Although if that happened, I'd not be surprised if the users of Fedora merged with Rocky and Alma in some form of new and fully independent distro - we've already seen how well such disasters can be worked around)

Longer reasoning: Redhat, in my view, have made some unpredictable and frankly terrible decisions over the past few years with RHEL, which have caused a great deal of concern in the business sector about its stability as a product. (Ending CentOS 8 six years early, paywalling the source code, and more recent anti-rebuilder steps. They also treated the community team working for CentOS appallingly throughout, leading to many resignations.) Furthermore, these decisions were communicated without warning or consultation and have sometimes come across as petty and spiteful, rather than as professional business decisions.

IBM bought Redhat shortly before this happened, mostly for its cloud services. It seems from the outside that RHEL is being squeezed. There have been two major rounds of layoffs. In all, this paints a picture of a company that is in decline and we've seen a reduction in contributions to the excellent work done by Redhat in the foss world. IBM have a long history of buying and absorbing companies - I don't see why Redhat would be any different and RHEL doesn't make enough money.

Our company is moving away from EL and I know of several others who are doing so. We're all choosing Debian.

software developers are criticizing Microsoft and GitHub for taking down some of the affected code repositories

Surely it's sensible of Github to take down malicious code? It's not just honest, hardworking people trying to make sense of this that have eyes; there are also others looking for inspiration in what appears to be a sophisticated and very dangerous supply chain attack.

Nope. Two months of not using reddit. Not doom scrolling, not feeling that heart rate lift when I see I've had a bunch of new replies and wondering whether I said something wonderful, or something dumb and a hundred people are now calling me an arsehole.

I do get your point about some reddit communities being genuinely nice places with great content, and if it was just that I'd still be there. But my mental health is better for having left. Also, reading about Reddit's attitude to its users and supporters during the Apollo affair made me realise just how toxically they view us. Fuck them, they can go to hell without me.

Maybe? It feels like the kind of stupid that you really need a human to half-ass it to achieve this thoroughly though.

Fail2ban is something I've used for years - in fact it was working on these very sites before I decided to dockerise them - but I find it a lot less simple in this application, for a couple of reasons:

The logs are in the docker containers. Yes, I could get them squirting to a central logging server, but that's a chunk of overhead for a home system. (I've done that before, so it is possible, just extra time.)

And getting the real IP through from Cloudflare. Yes, CF passes headers with it in, and haproxy can forward that as well with a bit of tweaking. But not every docker container for serving webpages (notably the phpbb one) will correctly log the source IP even when passed through from haproxy as the forwarded IP, instead showing the IP of the proxy. I've other containers that do display it, and it can obviously be done, but I'm not clear yet why it's inconsistent. Without that, there's no blocking.

And... you can use the Cloudflare API to block IPs, but there's a fixed limit on the free accounts. When I set this up before with native webservers and blocked malicious URL-scanning bots via the API, I reached that limit within a couple of days. I don't think there's automatic expiry, so I'd need to find or build a tool that manages the blocklist remotely. (Or use haproxy to block, and accept the overhead.)

It's probably where I should go next.

And yes - you're right about scripting. Automation is absolutely how I like to do things. But so many problems only become clear retrospectively.

Doh - another example of my muddled thinking.

Fail2ban will work directly on haproxy's log, no need to read the web logs from containers at all. Much simpler and better.
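A sketch of what that might look like (fail2ban ships a haproxy-http-auth filter; the log path and ban settings below are assumptions to adapt to your setup):

```ini
# /etc/fail2ban/jail.d/haproxy.local - hypothetical example
[haproxy-http-auth]
enabled  = true
port     = http,https
logpath  = /var/log/haproxy.log
maxretry = 5
bantime  = 1h
```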

Nah, an email address is the hardest of services to change. Gmail has been my main address for about 15 years. Every single online account I have uses it, and that's in the high hundreds. Maybe if you'd used your own domain with Gmail when you started you could hop around some, but not so many people do that.

You know one of the easiest and safest ways of switching base os? Replace the ssd (or m.2).

They're ridiculously cheap now and, after copying the installer files to a usb stick, unplug your old ssd and plug in the new one. Then you can go back fully if it doesn't work out.

the sales person at GitLab ghosted me on 3 consecutive calls that we set up to discuss our needs).

I'm guessing they looked at your company and decided you weren't worth enough to them.

We found Gitlab's pricing to be, frankly, ridiculous for the number of seats we have. Shame, the product is nice, just the sales team and pricing structure blows goats.

It's actually 250 euros for the top tier (US$267).

I mean, seriously, what the actual fucking fuck?