chameleon

@chameleon@fedia.io
0 Posts – 37 Comments
Joined 3 months ago

i'm lizard

Moderation is handled by each instance's version of that community separately.

Reddit/Lemmy/etc communities differ from something like Tumblr/Cohost by also having per-community rules, and nobody has the time to moderate hundreds of communities according to their per-community rules.

It's relatively easy to keep an instance free of spam/overly blatant hate/etc, since that's a fairly common set of rules. But it's much harder to keep a "world news" style community from being overrun with US-centric posts, or a discussion community on a specific subject from being filled to the brim with memes or posts that are only very vaguely adjacent. Without centralized per-community moderators, it would fall on general instance moderation to decide whether a post about an Undertale hack fits in the Undertale community. That's probably going to go wrong more often than not.

You can have a website that is only moderated according to global rules, with tags being a free-for-all, but you fundamentally end up building something along the lines of Tumblr or Cohost, which attracts a different audience, including those who know how to rules-lawyer their way through such an environment: tagging 20 mediocre photos a day with #photography instead of just one good one, for example. With the end of Cohost approaching, I wouldn't be surprised if some people tried to build that kind of thing, but it'd likely end up with a very different vibe.

I don't know if the Atari Lynx counts as non-major. Anything from Atari should probably count as major; the thing supposedly sold 2 million units. But I can't remember the last time I saw anyone mention it, and 2 million is still less than 2% of the Game Boy's 110m+.

I got the original model as a hand-me-down towards the end of the 90s and I wasn't super fond of it. The thing looks and feels like a brick and ate batteries for breakfast; the internet says 5-hour battery life, but I remember getting more like 2. The "left-hand mode" is a cool concept, but putting two pairs of A/B buttons on the device feels like something they could've done better. It had color, a couple of the arcade ports were really great games, and there was Chip's Challenge, but younger me got exhausted just using the damn thing.

Requiring agreement to some unspecified, ever-changing terms of service in order to use the product you just bought, especially when use of such products is all but required in the modern world. Google and Apple in particular can more or less trivially deny any non-technical person access to smartphones and many things tied to them, like mobile banking. Microsoft is heading that way too with Windows requiring MS accounts, though they're not completely there yet.

https://bagder.github.io/emails/ has the email collection.

Realistically, immutability wouldn't have made a difference. Definition updates like this are generally not considered part of the provisioned OS (since they change somewhere around hourly) and would go into /var or the like, which is mutable persistent state on nearly every otherwise immutable OS. Snapshots like Timeshift are more likely to help.

Eh, no. "I'm going to make things annoying for you until you give up" is literally something already happening; Titanfall and the like suffered from it hugely. "I'm going to steal your stuff and sell it" is a tale as old as time; warez CDs used to be commonplace. It's generally avoided by giving people a way to buy your thing and giving people that bought the thing a way to access it. The situation where a third party profits off your game is more likely to happen if you don't release server binaries! For example, the WoW private/emulator server scene had a huge problem with people hoarding scripts, backend systems and bugfixes, which is one of the reasons hosted servers could get away with fairly extreme P2W.

And he seems to completely misunderstand what happens to IP when a studio shuts down. Whether it's bankruptcy or a planned closure, the IP will get sold off just like a laptop owned by the company would, and the new owner of the rights can enforce them if they think it's useful. Orphan works/"abandonware" can happen, just like they can with non-GaaS games and movies, but that's a horrible failing on the part of the company.

It's absolutely not the case that nobody was thinking about computer power use. The Energy Star program had been around for about 15 years at that point and even had an EU-US agreement, and that was sitting alongside the EU's own energy program. Getting an 80Plus-certified power supply was already common advice for anyone custom-building a PC, which was by far the primary group of users doing Bitcoin mining before it had any kind of mainstream attention. And the original Bitcoin PDF includes the phrase "In our case, it is CPU time and electricity that is expended.", even if it doesn't go in-depth (it doesn't go in-depth on anything).

The late 00s weren't the late 90s, when the most common OS in use didn't support CPU idle without third-party tooling hacking it in.

Eh. I've been on the receiving end of one of those inboxes and the spam is absolutely, utterly unbearable. Coming up with a better system than a publicly listed email address is on Google at this point, because there is no reasonable way to provide support when you need a spam filter tuned up to such a level that all legitimate mail also ends up in spam.

Company offering new-age antivirus solutions, which is to say that instead of being mostly signature-based, it tries to look at application behavior. If some user opens not_a_virus_please_open.docx from their spam folder, Word might be exploited and end up running some malware that tries to encrypt the entire drive. It's supposed to sniff out that 1. Word normally opens and saves like one document at a time and 2. some unknown program is being overly active. And so it should stop that and ring some very loud alarm bells at the IT department.

Basically, they doubled down on heuristics-based detection, and on that basis they claim to be able to recognize and stop all kinds of new malware they haven't seen yet. My experience is that they're always the outlier on the top end of false positives in business AV tests (e.g. AV-Comparatives Q2 2024), and their advantage has mostly disappeared now that every AV has implemented that kind of behavior-based detection.

My suggestion is to use system management tools like Foreman. It has a "content views" mechanism that can do more or less what you want. There's a bunch of other tools like that along the lines of Uyuni. Of course, those tools have a lot of features, so it might be overkill for your case, but a lot of those features will probably end up useful anyway if you have that many hosts.

With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at any given moment and serve it to a particular server, that's going to end up with apt update && apt upgrade installing those identical versions, provided that the actual package files in /pool are still available. You can set up caching proxies for that.
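
For illustration, the client side of that can be as small as one pinned sources file; the snapshot path and internal mirror hostname here are made up:

    # /etc/apt/sources.list.d/pinned.list (hypothetical internal snapshot mirror)
    deb http://aptmirror.internal/snapshots/2024-09-01/debian bookworm main
    deb http://aptmirror.internal/snapshots/2024-09-01/debian-security bookworm-security main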

I remember my DIY hodgepodge a decade ago ultimately just being a daily cronjob that pulls in the current distro (let's say bookworm) and their associated -updates and -security repos from an upstream rsync-capable mirror, then after checking a killswitch and making sure things aren't currently on fire, it does rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1. Machines are configured to pull and update from tier1 (first 20%)/tier2 (second 20%)/tier3 (rest) appropriately on a regular basis. The files in /pool were served by apt-cacher-ng, but I don't know if that's still the cool option nowadays (you will need some kind of local caching for those as old files may disappear without notice).
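
Spelled out as a script, that nightly promotion might look something like this; the directory layout and killswitch file are placeholders:

    #!/bin/sh
    # hypothetical staged rollout: promote the oldest tier first, so each tier trails the one above
    set -e
    cd /srv/mirror
    [ -e STOP ] && exit 0                    # killswitch: touch STOP to freeze all promotion
    rsync -rva tier2/ tier3/                 # tier3 (the rest) catches up to tier2
    rsync -rva tier1/ tier2/                 # tier2 (second 20%) catches up to tier1
    rsync -rva upstream/bookworm/ tier1/     # tier1 (first 20%) takes the fresh upstream copy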

Elixir, or Gleam/pure Erlang/some other Erlang VM language. I think Erlang is extremely cool and I've enjoyed the little time I spent with Elixir. I also have absolutely no use case to make proper use of it.

Gonna add a dissenting "maybe but not really". YT is really aggressive on this kinda stuff lately and the situation is changing month by month. YT has multiple ways of flagging your IP as potentially problematic and as soon as you get flagged you're going to end up having to run quite an annoying mess of scripts that may or may not last in the long term. There's some instructions in a stickied issue on the Invidious repo.

In my experience, most hangs with a message about amdgpu loading on screen are caused by an amdgpu issue of some kind. I'd check whether amdgpu ends up loaded correctly via lsmod | grep amdgpu, plus a general journalctl -b 0 | grep amdgpu to see if there are any obvious failures there. Chances are that even if it's not amdgpu, the real failure is in the journal somewhere.

Could be a wrong setting of hardware.enableRedistributableFirmware (should be true) or the new-ish hardware.amdgpu.initrd.enable (either value is valid, but one or the other might be more reliable on your system).

Most paid certs aren't worth much anyway. Payment and delivery info for DV certs isn't validated by anyone; it's literally the same concept as Let's Encrypt. OV and EV are the only ones that theoretically have any value, but hardly anyone uses those ever since browsers got rid of the URL bar labeling; even Amazon is on DV nowadays.

The main reason many sub-communities are stuck on Telegram (and Discord) are the public group chat/broadcast channel related features. Signal still has a 1000 member group size limit, which is more than enough for a "group DM" but mostly useless for groups with publicly posted invite links. Those same groups would also much rather have functional scrollback/search on join instead of encryption.

It depends on if you can feasibly implement compatibility layers for large parts of the "required" but very work-intensive drivers. FreeBSD has the same driver struggles and ended up with LinuxKPI to support AMD/Intel GPUs. I know there's a whole bunch of toy kernels that implemented compatibility layers for parts of Linux in some fashion too.

It's a ton of work overall but there's room to lift enough already existing stuff from Linux to get the ball rolling.

qalculate. It's a calculator. A good one, though. You can put in 2 * x = 5.5 or 100 inches to meters and get an answer; it loads fast, keeps history, the arrow keys work, and it has all the fancy scientific buttons you'd ever want too.

Did a physical-to-virtual-to-physical conversion to upgrade and unbreak a webserver that had been messed up by simultaneously installing packages from Debian and Ubuntu.

Needed to write a syntax highlighter for VB.Net but I couldn't find any weirdly written edge cases online, so I had to make some myself.

Looking at the slides in the original Japanese source, this tooling also has a whole lot of analysis options and can pull/push game data/positioning both to and from a real Switch, or something along those lines. Integrating that many custom features into an off-the-shelf tool would probably take just as long.

vim has better default keybindings/commands that allow for less movement of your hands. Nowadays, in reasonably current versions of nano, that's mostly it. The main difference is that nano is somewhat usable but extremely inefficient unless you learn it, while vim forces you to learn it to get anything done at all, which also pushes people to spend a bit of time learning it in general.

If you're sure of the numbers you're using, vim's ability to repeat commands is also helpful. In practice I find it really hard to make use of them beyond low numbers, where nano can still achieve things in a similar number of keypresses. E.g. deleting 3 words with something like 3dwi can be done similarly with a sequence like Alt-A ^→ ^→ ^→ ^K in nano. Make it 20 words and nano is going to be a lot slower, but that's quite an uncommon action.

But in practice, nano users don't spend time learning any of that and just hold delete until the words are gone, which takes forever. Everyone who can do the basics in vim quickly learns that you can dw words away and make it 3dw to delete 3 of them. The default, easiest to use and access tool for any given situation gets blamed not just for its flaws, but also for the users who don't want to spend time learning any tool.

Or 800€...

This whole thing is shaping up to be the PS3's "five hundred and ninety nine us dollars" version 2.

Easiest way would be to use borg create --read-special --chunker-params fixed,4194304 '/home/user/sdcardbackup::{now}' /dev/sdX (which I copied from the examples in the documentation). I'm not sure if Vorta has a way to activate --read-special, but I suspect not; you can most likely still use it to make the repo and manage the archives inside it, though.

Backing up from a command/stdin might also be relevant as an alternative, since that lets you back up more or less anything.
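
A sketch of what that stdin variant can look like, since borg accepts - as a path to read an archive entry from standard input; the device name and block size are placeholders:

    # stream the whole card through dd and store it as a single entry named sdcard.img
    sudo dd if=/dev/sdX bs=4M status=progress \
        | borg create --stdin-name sdcard.img '/home/user/sdcardbackup::{now}' -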

It's a problem in the Secure Boot chain, every system is affected by any vulnerability in any past, present or future bootloader that that system currently trusts. Even if it's an OS you aren't using, an attacker could "just" install that vulnerable bootloader.

That said, MS had also been patching their own CVE-2023-24932 / CVE-2024-38058, and disabled the fix for that in this update due to widespread issues with it. I don't think anyone knows what they're doing anymore.

There's been an exFAT driver in the kernel for a couple of years now (merged after Microsoft added exFAT to its patent pact), and it works fine. The same driver gets used on Android for SD card support.

That's because they had a lot of people "buying the dip". CS is in a very similar position to SolarWinds during their 2020 security slipup. The extent of managerial issues there should've been unforgivable but unfortunately they got away with it and are doing just fine nowadays.

Pretty much every form of these scams is some kind of advance fee fraud. Two more possible avenues:

  • "Upgrade to a business account". They send you an email purporting to be from the payment provider you used saying you need to upgrade to business to receive a payment that large, and the upgrade page is a fake website run by the scammer that asks for a "refundable deposit" or the like (with a little helping of credit card fraud and of course a business account will require all kinds of personal info useful for identity theft too).
  • "But I want it as an NFT" was popular for a bit, they want you to "pre-pay the minting fee but it's ok I'll add it to your payment" and then they disappear. But they want it on a website ran by them and the moment you put the crypto in they disappear. Not sure this scam is popular nowadays because NFT screams scam to just about everyone for a lot of different reasons. But "rich guy spends $5000 on dumbass NFT" was a legitimate genre of news for a little moment.

It's all preying on someone who thinks they got an easy paycheck for work they've already done, on a populace of artists that could really use said paycheck to pay for food and are thus willing to overlook weirdness or principles. They also tend to pick on newer and younger artists that haven't quite figured out how to run a business yet, hoping they haven't heard of the scams specifically targeting their sector.

For that card, you probably have to set the radeon.si_support=0 amdgpu.si_support=1 kernel options to allow amdgpu to work. I don't have a TrueNAS system lying around, so I don't know what the idiomatic way to change them is.
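
On a stock Debian-style install that would be a GRUB edit along these lines, but TrueNAS manages its boot configuration itself, so treat this purely as a sketch of the general mechanism:

    # /etc/default/grub: append the options to the existing kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.si_support=0 amdgpu.si_support=1"
    # then regenerate the config and reboot
    sudo update-grub && sudo reboot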

Using amdgpu on that card has been considered experimental ever since support was added like 6 years ago, and nobody has invested any real effort into stabilizing it. It's entirely possible that amdgpu on that card is simply never gonna work. But yeah, I think the radeon driver isn't really fully functional anymore either, so I guess it's worth a shot...

Basically yes. Rancher Desktop sets up K3s in a VM and gives you a kubectl, docker and a few other binaries preconfigured to talk to that VM. K3s is just a lightweight all-in-one Kubernetes distro that's relatively easy to set up (of course, you still have to learn Kubernetes so it's not really easy, just skips the cluster setup).
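
Once Rancher Desktop is up, a quick sanity check looks like this (both CLIs come preconfigured by it; the docker one assumes you picked the dockerd/moby runtime rather than containerd):

    kubectl get nodes    # should list the single K3s node running inside the VM
    docker ps            # talks to the Docker daemon inside that same VM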

My dotfiles aren't distro-specific because they're symlinks into a git repo (or tarball) + a homegrown shell script to make them, and that's about the end of it.
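
That script genuinely doesn't need to be more than a loop; a hypothetical minimal version, assuming the repo's top level mirrors $HOME's layout:

    #!/bin/sh
    # symlink every top-level dotfile from the repo into $HOME (hypothetical paths)
    REPO="$HOME/dotfiles"
    for f in "$REPO"/.[!.]*; do
        case "$(basename "$f")" in .git) continue ;; esac    # skip the repo's own .git
        ln -sfn "$f" "$HOME/$(basename "$f")"
    done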

My NixOS configuration is split between must-have CLI tools/nice-to-have CLI tools/hardware-related CLI tools/GUI tools and functions as a suitable reference for non-Nix distros, even having a few comments on what the package names are elsewhere, but installation is ultimately still manual.

Personally, I do believe that rootless Docker/Podman have a strong enough security boundary for personal/individual self-hosting where you have decent trust in the software you're running. Linux privilege escalation and container escape exploits fetch decent amounts of money on the exploit market, and nobody's gonna waste them on some people running software ending in *arr when Zerodium will pay five figures for a local privilege escalation or container escape. If you're running a business or you might be targeted for whatever reason (journalist or whatever) then that doesn't apply.

If you want more security, there are container runtimes that do cooler security stuff under the hood, like Firecracker/Kata Containers implementing a managed VM, or Google's gVisor which very strongly intercepts kernel syscalls and essentially reimplements Linux in userspace. Those are used by AWS and Google Cloud respectively. You can integrate those into Docker, though not all networking/etc options are supported.
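
For example, once gVisor's runsc is installed and registered as a runtime in /etc/docker/daemon.json, opting a single container into it is a one-flag change:

    # run under gVisor's userspace kernel instead of directly against the host kernel
    docker run --rm --runtime=runsc alpine uname -a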

Probably an anti-piracy thing. It's pretty common in the console hacking scene for only specific versions to be vulnerable, or only have exploits released for a specific set of versions. People can get around it by looking for games released with specific updates on the disc/cart but it's a pain.

You can't pretend an open port is closed, because an open port is really just a service that's listening. You can't pretend-close it and still have that service work. The only thing you can do is firewall off the entire service, but presumably any competent distro will firewall off all services by default, and any service listening publicly is doing so for a good reason.
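
You can see that equivalence locally: every "open port" a scanner finds corresponds to one of these listening sockets.

    # list listening TCP sockets and the processes behind them
    sudo ss -tlnp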

I guess it comes down to whether they feel like it's worth obfuscating port scan data. If you deploy that across all of your network then you make things just a little bit more annoying for attackers. It's a tiny bit of obfuscation that doesn't really matter, but I guess plenty of security teams need every win they can get, as management is always demanding that you do more even after you've done everything that's actually useful.

For the debugging thing on Linux, the major tunable is kernel.yama.ptrace_scope.
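
For reference, checking and loosening it looks like this; 0 is the classic permissive behavior, most distros default to 1, and a -w change only lasts until reboot:

    sysctl kernel.yama.ptrace_scope             # read the current value
    sudo sysctl -w kernel.yama.ptrace_scope=0   # allow same-uid ptrace again (until reboot)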

.eu has custom rules for whois. You're not allowed to use privacy/proxy services for anything other than the mandatory publicly shown email field, but for domains registered by an individual, that email field and the user's preferred language are the only things displayed. They've had those rules even prior to GDPR.

NieR Automata, for basically the same reasons. Hard mode is filled with instakills everywhere and is really just a damage multiplier, so you have to be the right kind of person for that. If you're not, Normal is probably already fairly easy because of all the auto-heals, but the pacing can be a bit slow for something where most enemies aren't dangerous. Might as well play Easy and play for the story.

Aside from all of the problems with the game itself, I think they must've had one of the most unfortunate launch moments. Hero shooters had been pretty much on the downturn and then just before they launched, Deadlock went public and suckered quite a lot of the hero shooter audience into playing a full-on MOBA/FPS hybrid. And Deadlock is very quietly breaking all kinds of silly records for what's technically an invite-only alpha (currently #8 on Steam's most played with 137k concurrent players).