https://bagder.github.io/emails/ has the email collection.
Realistically, immutability wouldn't have made a difference. Definition updates like this are generally not considered part of the provisioned OS (since they change somewhere around hourly) and would go into /var or the like, which is mutable persistent state on nearly every otherwise immutable OS. Snapshots like Timeshift are more likely to help.
Eh, no. "I'm going to make things annoying for you until you give up" is literally something already happening; Titanfall and the like suffered from it hugely. "I'm going to steal your stuff and sell it" is a tale as old as time; warez CDs used to be commonplace. It's generally avoided by giving people a way to buy your thing and giving people that bought the thing a way to access it. The situation where a third party profits off your game is more likely to happen if you don't release server binaries! For example, the WoW private/emulator server scene had a huge problem with people hoarding scripts, backend systems and bugfixes, which is one of the reasons hosted servers could get away with fairly extreme P2W.
And he seems to completely misunderstand what happens to IP when a studio shuts down. Whether it's bankruptcy or a planned closure, the IP will get sold off just like a laptop owned by the company would, and the new owner of the rights can enforce them if they think it's useful. Orphan works/"abandonware" can happen, just like they can to non-GaaS games and movies, but that's a horrible failing on the part of the company.
It's absolutely not the case that nobody was thinking about computer power use. The Energy Star program had been around for roughly 15 years at that point and even had an EU-US agreement, and that was sitting alongside the EU's own energy program. Getting an 80Plus-certified power supply was already common advice to anyone custom-building a PC, which was by far the primary group of users doing Bitcoin mining before it had any kind of mainstream attention. And the original Bitcoin PDF includes the phrase "In our case, it is CPU time and electricity that is expended.", despite not going in-depth (it doesn't go in-depth on anything).
The late 00s weren't the late 90s, when the most common OS in use didn't support CPU idle without third-party tooling hacking it in.
Eh. I've been on the receiving end of one of those inboxes and the spam is absolutely, utterly unbearable. Coming up with a better system than a publicly listed email address is on Google at this point, because there is no reasonable way to provide support when you need a spam filter tuned up to such a level that all legitimate mail also ends up in spam.
Company offering new-age antivirus solutions, which is to say that instead of being mostly signature-based, it tries to look at application behavior instead. If some user opens not_a_virus_please_open.docx from their spam folder, Word might get exploited and end up running malware that tries to encrypt the entire drive. It's supposed to sniff out that 1. Word normally opens and saves like one document at a time and 2. some unknown program is being overly active. And so it should stop that and ring some very loud alarm bells at the IT department.
Basically they doubled down on the heuristics-based detection and by that, they claim to be able to recognize and stop all kinds of new malware that they haven't seen yet. My experience is that they're always the outlier on the top-end of false positives in business AV tests (eg AV-Comparatives Q2 2024) and their advantage has mostly disappeared since every AV has implemented that kind of behavior-based detection nowadays.
That already happens constantly and I'd consider this the consequence of it, rather than the cause. You can only issue so many vetoes before people no longer want to deal with you and would rather move on.
The recent week of Wayland news (including the proposal from a few hours ago to restate NACK policies) is starting to feel like the final attempt to right things before a hard fork of Wayland. I've been following wayland-protocols/devel/etc from the outside for a year or two and the vibes have been trending that way for a while.
My suggestion is to use system management tools like Foreman. It has a "content views" mechanism that can do more or less what you want. There's a bunch of other tools like that along the lines of Uyuni. Of course, those tools have a lot of features, so it might be overkill for your case, but a lot of those features will probably end up useful anyway if you have that many hosts.
With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at any given moment and serve it to a particular server, apt update && apt upgrade is going to end up installing those identical versions, provided that the actual package files in /pool are still available. You can set up caching proxies for that.
I remember my DIY hodgepodge a decade ago ultimately just being a daily cronjob that pulls in the current distro (let's say bookworm) and its associated -updates and -security repos from an upstream rsync-capable mirror, then after checking a killswitch and making sure things aren't currently on fire, it does rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1. Machines are configured to pull and update from tier1 (first 20%)/tier2 (second 20%)/tier3 (rest) appropriately on a regular basis. The files in /pool were served by apt-cacher-ng, but I don't know if that's still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear without notice).
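A minimal sketch of that kind of staged sync job, with made-up paths, tier names and upstream mirror (the real thing should also verify the freshly synced tree before promoting anything):

```bash
#!/bin/sh
# staged-mirror-sync: promote yesterday's snapshots down the tiers,
# then pull a fresh copy of the dists into tier1. All paths are examples.
set -eu

BASE=/srv/mirror
# Killswitch: touch this file to pause the whole rollout pipeline.
[ -e "$BASE/KILLSWITCH" ] && { echo "killswitch present, skipping"; exit 0; }

# Promote oldest first so each tier keeps a consistent snapshot.
rsync -a --delete "$BASE/tier2/dists/" "$BASE/tier3/dists/"
rsync -a --delete "$BASE/tier1/dists/" "$BASE/tier2/dists/"

# Refresh tier1 from an rsync-capable upstream mirror.
for repo in bookworm bookworm-updates bookworm-security; do
    rsync -a --delete \
        "rsync://mirror.example.org/debian/dists/$repo/" \
        "$BASE/tier1/dists/$repo/"
done
```

Point the 20%/20%/60% host groups at tier1/tier2/tier3 respectively and the daily promotion gives every change a day of soak time per tier.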
Gonna add a dissenting "maybe but not really". YT is really aggressive on this kinda stuff lately and the situation is changing month by month. YT has multiple ways of flagging your IP as potentially problematic and as soon as you get flagged you're going to end up having to run quite an annoying mess of scripts that may or may not last in the long term. There's some instructions in a stickied issue on the Invidious repo.
I don't have Obsidian around, but this has been happening elsewhere lately too, almost certainly because of this underlying Electron issue: https://github.com/electron/electron/issues/43819
Unfortunately there's not much you can do about it. Electron decided to depend on functionality not yet in a released version, and that very interesting choice flows down to everything that updates their Electron on the regular.
Elixir, or Gleam/pure Erlang/some other Erlang VM language. I think Erlang is extremely cool and I've enjoyed the little time I spent with Elixir. I also have absolutely no use case to make proper use of it.
In my experience, most hangs with a message about amdgpu loading on screen are caused by an amdgpu issue of some kind. I'd check to see if amdgpu ends up being loaded correctly via lsmod | grep amdgpu, plus a general journalctl -b 0 | grep amdgpu to see if there are any obvious failures there. Chances are that even if it's not amdgpu, the real failure is in the journal somewhere.
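Roughly that triage in one place, assuming a systemd distro (the journalctl -b -1 variant only works if persistent journaling is enabled):

```bash
# Is the module loaded at all?
lsmod | grep amdgpu

# Driver messages from the current boot.
journalctl -b 0 | grep -i amdgpu

# If the machine hung and you had to reboot, check errors from the
# previous boot instead (requires persistent journaling).
journalctl -b -1 -p err
```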
Could be a wrong setting of hardware.enableRedistributableFirmware (should be true) or the new-ish hardware.amdgpu.initrd.enable (that one can go either way; true or false might each be more or less reliable on your system).
The main reason many sub-communities are stuck on Telegram (and Discord) is the public group chat/broadcast channel related features. Signal still has a 1000 member group size limit, which is more than enough for a "group DM" but mostly useless for groups with publicly posted invite links. Those same groups would also much rather have functional scrollback/search on join instead of encryption.
Most paid certs aren't worth much anyway. Payment and delivery info for DV certs isn't validated by anyone; it's literally the same concept as Let's Encrypt. OV and EV are the only ones that theoretically have any value, but nobody has been using those ever since browsers got rid of the URL bar labeling; even Amazon is on DV nowadays.
qalculate. It's a calculator. A good one, though. You can put in 2 * x = 5.5 or 100 inches to meters and get an answer, it loads fast, it keeps history, the arrow keys work, and it has all the fancy scientific buttons you'd ever want too.
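It also ships a terminal frontend, qalc, that takes the same kinds of expressions (the output formatting below is approximate):

```bash
$ qalc '2 * x = 5.5'
x = 2.75

$ qalc '100 inches to meters'
100 in = 2.54 m
```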
It depends on if you can feasibly implement compatibility layers for large parts of the "required" but very work-intensive drivers. FreeBSD has the same driver struggles and ended up with LinuxKPI to support AMD/Intel GPUs. I know there's a whole bunch of toy kernels that implemented compatibility layers for parts of Linux in some fashion too.
It's a ton of work overall but there's room to lift enough already existing stuff from Linux to get the ball rolling.
Needed to write a syntax highlighter for VB.Net but I couldn't find any weirdly written edge cases online, so I had to make some myself.
Did a physical-to-virtual-to-physical conversion to upgrade and unbreak a webserver that had been messed up by simultaneously installing packages from Debian and Ubuntu.
It's not, and I'm not sure how that article arrived at that conclusion. Their E2EE crypto is problematic homebrew crypto, but that's very, very different from being closed. The whole desktop client, including the implementation of that crypto, is fully open source and lives right on GitHub. Plenty of people have independently reviewed it and come back with a very iffy impression of the whole thing.
Really the only difference is that Telegram doesn't publish their backend, but the one Signal publishes is missing a couple of bits related to their "spam filter", which happens to take in the source & destination of messages and do anything it wants with them. That doesn't matter for either platform's E2EE properties in any case, since distrusting the server is the whole point of E2EE.
Easiest way would be to use borg create --read-special --chunker-params fixed,4194304 '/home/user/sdcardbackup::{now}' /dev/sdX (which I copied from the examples in the documentation). I'm not sure if Vorta has a way to activate --read-special, but I suspect not; you can most likely still use it to make the repo and manage archives inside of it, though. Backing up from a command/stdin might also be relevant as an alternative, since that lets you back up more or less anything.
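A sketch of that stdin route, with the same made-up device and repo paths; borg treats a path of - as "read the archive contents from stdin":

```bash
# Stream the raw device through dd into borg. The archive ends up
# containing a single file named by --stdin-name.
sudo dd if=/dev/sdX bs=4M status=progress \
  | borg create --stdin-name sdcard.img \
      '/home/user/sdcardbackup::{now}' -
```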
Or 800€...
This whole thing is shaping up to be the PS3's "five hundred and ninety nine US dollars" moment, version 2.
vim has better default keybindings/commands that allow for less movement of your hands. Nowadays, in reasonably current versions of nano, that's mostly it. The main difference is nano is somewhat usable but extremely inefficient unless you learn it, while vim forces you to learn it to get anything done at all, which also pushes people to spend a bit of time learning it in general.
If you're sure of the numbers you're using, vim's ability to repeat commands is also helpful. In practice I find that it's really hard to make use of them beyond low numbers, where nano can still achieve things in a similar number of keypresses. E.g. something to delete 3 words like 3dw can be done with a similar sequence like Alt-A ^→ ^→ ^→ ^K in nano. Make it 20 words and nano is going to be a lot slower, but that's quite an uncommon action.
But the practice is that nano users don't spend time learning any of that and just hold delete until the words are gone, which takes forever. Everyone that can do basics in vim quickly learns that you can dw words away and make it 3dw to delete 3 of them. The default, easiest to use & access tool for any given situation gets blamed not just for its flaws, but also for the users that don't want to spend time learning any tool.
mkinitfs doesn't support running custom shell hooks. mkinitfs is very, very, very bare-bones custom code and the whole features concept exists only to pull extra files and kernel modules into the initramfs, not for extra logic.
You'd either have to customize the init script itself (not impossible, it's 1000 lines) and pass -i / set init= in the .conf, or install Dracut/Booster instead (which should "just work" if you apk add them, but I've had no need to do so).
Looking at the slides in the original Japanese source, this tooling also has a whole lot of analysis options and can pull/push game data/positioning both to and from a real Switch, or something along those lines. Integrating that many custom features into an off-the-shelf tool would probably take just as long.
You'd be looking for /usr/share/mkinitfs/initramfs-init. I've never customized that myself, but it looks like there's already some support for a keyfile if you look for KOPT_cryptroot and check that block of code. That looks like it's mostly set up for a keyfile embedded into the initramfs, but I guess it should be possible to replace that code with something that grabs the keyfile off a USB drive.
I suppose you'd make a copy of it, put it somewhere in /etc or wherever, and change the mkinitfs.conf to point to it. init="/etc/whatever/myinitramfs-init" should do the trick since the config file just gets sourced in. That said, you're definitely heading into unknown territory here. It might be easier to just use Dracut or the like instead.
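For reference, a sketch of what that conf might end up looking like; the features line here is only an illustrative guess, keep whatever yours already lists:

```bash
# /etc/mkinitfs/mkinitfs.conf -- sourced as shell by mkinitfs
features="base keymap kms scsi cryptsetup ext4"
init="/etc/whatever/myinitramfs-init"
```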
Pretty much every form of these scams is some kind of advance fee fraud. Two more possible avenues:
It's all preying on people who think they got an easy paycheck for work they've already done, within a populace of artists that could really use said paycheck to pay for food and are thus willing to overlook weirdness or principles. They also tend to pick on newer and younger artists that haven't quite figured out how to run a business yet, hoping that they haven't heard of scams specifically targeting their sector.
There's been an exFAT driver in the kernel for a couple of years now (merged after Microsoft's patent pact added exFAT); it works fine. The same driver gets used on Android for SD card support.
It's a problem in the Secure Boot chain: every system is affected by any vulnerability in any past, present or future bootloader that the system currently trusts. Even if it's an OS you aren't using, an attacker could "just" install that vulnerable bootloader.
That said, MS had also been patching their own CVE-2023-24932 / CVE-2024-38058, and disabled the fix for that in this update due to widespread issues with it. I don't think anyone knows what they're doing anymore.
That's because they had a lot of people "buying the dip". CS is in a very similar position to SolarWinds during their 2020 security slipup. The extent of managerial issues there should've been unforgivable but unfortunately they got away with it and are doing just fine nowadays.
Moderation is handled by each instance's version of that community separately.
Reddit/Lemmy/etc communities differ from something like Tumblr/Cohost by also having per-community rules, and nobody has the time to moderate hundreds of communities according to their per-community rules.
It's relatively easy to keep an instance free of spam/overly blatant hate/etc, since that is a fairly common set of rules. But it's much harder to keep a "world news" style community from being overrun with US-centric posts, or a discussion community on a specific subject from being filled to the brim with memes or posts that are only very vaguely adjacent. Without centralized per-community moderators, it would fall on general instance moderation to make decisions about whether a post about an Undertale hack fits in the Undertale community. That's probably going to go wrong more often than not.
You can have a website that is only moderated according to global rules with tags being a free-for-all, but you fundamentally end up building something along the lines of Tumblr or Cohost, which attracts a different audience, including those that know how to rules-lawyer their way through such an environment; tagging 20 mediocre photos a day with #photography instead of just a good one, for example. With the end of Cohost approaching, I wouldn't be surprised if some tried to build that kind of thing, but it'd likely end up having a very different vibe.
It technically still exists in the game properties -> installed files tab, but it doesn't really work. The backup files you get require you to be online to meaningfully restore and will trigger a patch to the latest game version.
Practically speaking, it's better to just make a copy of the game install directory manually; that gives you a better chance of things working (even though most games require some kind of external tooling for that).
All of the cool development-related Nix things like pinning a project to known-good library versions (for regression tests or otherwise) don't really need you to run NixOS. If you like NixOS then it's a perfectly usable distro for development work, but all of the powers come from Nix itself, and that can be installed anywhere you feel comfortable with.
The only real pro of running full NixOS is that everything you work on will, by its nature, test a relatively uncommon *nix setup. Things like developer-only scripts with hardcoded #!/bin/bash shebangs are more likely to break on NixOS than they would on a conventional Linux distro with Nix installed. That's something potentially worth fixing, as it might also hurt the developer experience on *BSD/Mac systems.
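The usual fix for that particular case (a general convention, nothing NixOS-specific) is to resolve the interpreter through env:

```bash
#!/usr/bin/env bash
# /usr/bin/env exists on NixOS, the BSDs and macOS alike, and it finds
# bash via $PATH instead of assuming it lives at /bin/bash.
set -euo pipefail
echo "portable enough for developer scripts"
```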
My dotfiles aren't distro-specific because they're symlinks into a git repo (or tarball) + a homegrown shell script to make them, and that's about the end of it.
My NixOS configuration is split between must-have CLI tools/nice-to-have CLI tools/hardware-related CLI tools/GUI tools and functions as a suitable reference for non-Nix distros, even having a few comments on what the package names are elsewhere, but installation is ultimately still manual.
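A homegrown script along those lines can be tiny. This sketch assumes the repo mirrors $HOME's layout under a dotfiles/ subdirectory; all the names are made up:

```bash
#!/bin/sh
# Symlink every file under the repo's dotfiles/ tree into $HOME,
# creating parent directories as needed.
set -eu
REPO="$HOME/src/dotfiles"

cd "$REPO/dotfiles"
find . -type f | while read -r f; do
    target="$HOME/${f#./}"
    mkdir -p "$(dirname "$target")"
    ln -sfn "$REPO/dotfiles/${f#./}" "$target"
done
```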
Basically yes. Rancher Desktop sets up K3s in a VM and gives you kubectl, docker and a few other binaries preconfigured to talk to that VM. K3s is just a lightweight all-in-one Kubernetes distro that's relatively easy to set up (of course, you still have to learn Kubernetes, so it's not really easy, it just skips the cluster setup).
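Once it's running, a quick sanity check looks like any other cluster; rancher-desktop is the context name it registers as far as I know, so verify against your install:

```bash
# Confirm the CLI is pointed at the Rancher Desktop VM.
kubectl config use-context rancher-desktop
kubectl get nodes        # should list the single K3s node inside the VM

# The docker CLI is wired to the same VM's container runtime.
docker run --rm hello-world
```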
Digging into the GitLab & related discussions, the main takeaway I got is that FFmpeg's API supposedly meshes better with what Wine needs to provide to Windows code, simplifying things overall. GST is pretty heavy on asynchronous/background processing, which is normally something I'd consider good for media, but if the API you're expected to implement is synchronous then I guess it only adds complexity.
Personally, I do believe that rootless Docker/Podman have a strong enough security boundary for personal/individual self-hosting where you have decent trust in the software you're running. Linux privilege escalation and container escape exploits fetch decent amounts of money on the exploit market, and nobody's gonna waste them on some people running software ending in *arr when Zerodium will pay five figures for a local privilege escalation or container escape. If you're running a business or you might be targeted for whatever reason (journalist or whatever) then that doesn't apply.
If you want more security, there are container runtimes that do cooler security stuff under the hood, like Firecracker/Kata Containers implementing a managed VM, or Google's gVisor which very strongly intercepts kernel syscalls and essentially reimplements Linux in userspace. Those are used by AWS and Google Cloud respectively. You can integrate those into Docker, though not all networking/etc options are supported.
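If you want to try the gVisor route with Docker, the opt-in is per-container once the runsc runtime is registered; a sketch, assuming runsc is already installed on the host:

```bash
# Register gVisor with dockerd (this edits /etc/docker/daemon.json),
# then restart the daemon.
sudo runsc install
sudo systemctl restart docker

# Run a container under gVisor instead of the default runc.
docker run --rm --runtime=runsc alpine uname -a
# The kernel version reported comes from gVisor's userspace kernel,
# not from the host.
```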
For that card, you probably have to set the radeon.si_support=0 amdgpu.si_support=1 kernel options to allow amdgpu to work. I don't have a TrueNAS system lying around, so I don't know what the idiomatic way to change them is.
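On a generic GRUB-based Linux install (not TrueNAS advice; check how SCALE manages its bootloader before touching anything) that would look roughly like:

```bash
# Edit /etc/default/grub so the default command line includes the options:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.si_support=0 amdgpu.si_support=1"
sudo update-grub    # Debian-style; grub2-mkconfig -o /boot/grub2/grub.cfg elsewhere
sudo reboot

# Afterwards, confirm the options took effect:
cat /proc/cmdline
```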
Using amdgpu on that card has been considered experimental ever since it was added like 6 years ago, and nobody has invested any real efforts to stabilize it. It's entirely possible that amdgpu on that card is simply never gonna work. But yeah I think the radeon driver isn't really fully functional anymore either, so I guess it's worth a shot...
Sorry, I've had a (self-imposed) busy week, but I have to admit, that also has me rather stumped. As far as I can tell, your second entry should work. If the device is visible in /dev/mapper under a name, it should be able to mount under that name.
The only thing I can think of is that some important module like the ext4 module might be missing somehow? You can get pretty confusing errors when that happens. Dracut is supposed to parse /etc/fstab for everything needed to boot, and maybe that's not recognizing your root for some reason. dmesg might have some useful info at the end after you try to mount it. If that's what's happening, you could try to add add_drivers+=" ext4 " in your dracut.conf and regenerate it (the spaces are important!). But if that's not it, then I'm probably out of ideas now.
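Concretely, that would be something like the following; the drop-in filename is arbitrary:

```bash
# /etc/dracut.conf.d/ext4.conf -- note the spaces inside the quotes
add_drivers+=" ext4 "
```

Then regenerate the initramfs for the running kernel with sudo dracut --force and reboot.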
You can't pretend an open port is closed, because an open port is really just a service that's listening. You can't pretend-close it and still have that service work. The only thing you can do is firewalling off the entire service, but presumably, any competent distro will firewall off all services by default and any service listening publicly is doing so for a good reason.
I guess it comes down to whether they feel like it's worth obfuscating port scan data. If you deploy that across all of your network then you make things just a little bit more annoying for attackers. It's a tiny bit of obfuscation that doesn't really matter, but I guess plenty of security teams need every win they can get, as management is always demanding that you do more even after you've done everything that's actually useful.
Requiring agreement to some unspecified ever-changing terms of service in order to use the product you just bought, especially when use of such products is required in the modern world. Google and Apple in particular are more or less able to trivially deny any non-technical person access to smartphones and many things associated with them like access to mobile banking. Microsoft is heading that way with Windows requiring MS accounts, too, though they're not completely there yet.