rentar42

@rentar42@kbin.social
0 Posts – 189 Comments
Joined 1 year ago

If the only thing I knew about a given law is that those three complained about it I would immediately and wholeheartedly support and endorse that law. It's probably awesome and badly needed.


Billionaires don't "work". At least not in the sense that they receive an amount of money that bears any relation to the value they create. They shuffle around money to do things for them and sometimes that makes them more money. Calling that "work" lessens the meaning of the word and gives them too much credit.


The problem with your attitude is ...

No. That's your problem with my attitude.

"Free speech" absolutists don't convince me with their hypotheticals.

Believe it or not: absolute free speech is not the end goal and not as valuable as you all believe.

Forbidding some kind of speech can be okay.

Because not forbidding it creates an awful lot of very real and very current pain. Somehow the theoretical pain that a similar law could create matters more to your argument than the real and avoidable pain that this law is attempting to prevent.

but e.g. American free speech would be nonexistent

And I say that the specific American flavor of free speech is not very valuable at all.

Without any text it's really hard to guess what you want and that's why you get so many different answers.

Do you want to

Note that I suspect you actually want the third one, in which case I suggest you avoid MediaWiki. Not because it's bad, but because it's almost certainly overkill for your use-case and there are far simpler, easier-to-set-up-and-maintain systems with fewer moving parts out there.


Get out of here with your silly US-centric idea of "absolute free speech". Pretty much every civilized country in the world has boundaries to what is considered acceptable.

And even the US does (though they are fewer than elsewhere, granted).

But for some reason the US has produced this myth that absolute freedom of speech (which it doesn't have) is somehow the best possible choice (which it isn't).


He might actually believe that himself.

And I'd like to see him in jail as much as the next person, but is this really news? "Colossal narcissist brags about his achievements".

Good. $10 billion of inheritance tax seems reasonable. Could be higher (we don't need billionaires), but it's a good start.

Screen prints (printed?) "Suddenly the Dungeon collapses!! - You die..." when the master process died unexpectedly.

I promise this isn't a generic anti-crypto rant, but rather a specific anti-crypto rant:

There are many projects in this space that try to replace what they perceive as flawed legal systems with perceived "perfect" (or at least better) digital, automated systems.

And I definitely understand that urge: there are many problems with various legal systems ranging from annoying (like being slow and very disparate around the world) to massive (biases, lack of access for those who need it most).

So aiming to improve that situation is understandable. And being pessimistic about the chances of fixing those systems with the "normal approaches" (i.e. politics) is equally understandable.

Where these projects usually break down though is that they generally lack an understanding of what makes legal systems so hard to get right: no one has found a reliable way to encode a non-trivial part of the law into something that a computer can decide reliably and without wrong decisions. (There are of course other difficulties, but this is the most relevant one for the current topic.)

People with a technical background (which includes me) are often frustrated by how laws and legal documents like licenses are at the same time written in an arcane, inaccessible language and yet very much open to interpretation. We assume, based on the languages we interact with, that a sufficiently precise language should allow a strict, formal evaluation of some truth value ("was this contract followed by both parties?").

But the reality is that contracts (just like most laws) are intentionally written with some subjective language to both account for real world deviations and avoid loopholes.

It's incredibly easy for a law to apply when it wasn't meant to, or to fail to apply due to some technicality (or the opposite: to present a law as not being meant to apply to a certain situation when the authors were very aware of the implications).

And for all the wrongs that exist in legal systems, we have not yet found a better way to solve this than (hopefully neutral) arbitrators who interpret the text and the underlying intentions.

And all the crypto schemes categorically reject that: their stated goal is to have no human in the loop anywhere. That would be fine if they also solved the above problems in some other way, but none that I know of even attempts to do that. They simply pretend that perfect, decidable contracts are possible (even easy!) and never unfair.
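To make that concrete, here's a toy (entirely hypothetical) "contract" sketched in Python. It encodes "pay within 30 days" as strict code, which is exactly the kind of rigid rule these systems rely on:

```python
# A toy "smart contract" condition: strict code, no room for interpretation.
# Everything here is hypothetical and only illustrates the decidability trap.
from datetime import datetime, timedelta

DUE = datetime(2024, 1, 1) + timedelta(days=30)

def contract_fulfilled(payment_time: datetime) -> bool:
    # The rule evaluates mechanically: one second late is a breach.
    return payment_time <= DUE

# Real world: say the payment network was down for an hour on the due date.
print(contract_fulfilled(DUE + timedelta(seconds=1)))  # False - "breached"
```

A human arbitrator would weigh intent and circumstances here; the code, by design, cannot.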

Whether that error is based on ignorance or on something more sinister is up to the reader to decide.


In the immortal words of Jake the Dog:

Dude, suckin’ at something is the first step to being sorta good at something.

We are or were all noobs once. Stepping away from the keyboard is an often-undervalued step in the solution-finding process. Kudos!


I like to imagine that whenever there was a particularly slow day or someone was particularly stressed, they just opened the prepared article and tweaked and improved it a bit ... it's probably the collaborative work of many people over many years.

I know at least one FAANG company that had (has? don't know) a policy of not using any hardware that was ever used in travel to China. If you had to go there on a business trip, you got a loaner laptop (and got your account severely restricted) and when you got back they wiped and discarded the laptop.

without trusting anyone.

Well, except of course the entity that gave you the hardware. And the entity that preinstalled and/or gave you the OS image. And that that entity wasn't fooled into including malicious code in some roundabout way.

Like it or not, there's currently no real way to use any significant amount of computing power without trusting someone. And usually several hundred or thousand someones.

The best you can hope for is to focus the trust into a small number of entities that have it in their own self interest to prove worthy of that trust.

Always has been.

The "ham-fisted" assassinations have always been about just the tiniest sliver of deniability while definitely sending the message "we can reach you" and not making a secret about who "we" is.

I'm sorry that my attempt to find out what you want, so that I could provide useful help, annoyed you.


I find that to be a tricky thought experiment.

Can you run a country in a way that thoroughly peppers the general population with "all is well" propaganda and still manage to capture all the necessary information to make properly informed decisions at some high level?

You'd need some "elite" layer of people who get to see unfiltered, honest information, but how would you even collect that information if even local, low-level government actors are subject to (and meant to believe) the propaganda?

Basically what I'm asking is: if I ignored moral concerns, is there a world where keeping the majority ignorant could actually lead to more efficiency than letting knowledge of the state of things spread?


Except it does matter if it's on Twitter or on a lesser-known platform. Propaganda works when it is widely publicized and doesn't work as well when it isn't.

Twitter still has a legal responsibility to deal with this kind of stuff, and it isn't meeting it.

Not OP, but as someone using Ubuntu LTS releases on several systems, I can give my reason: having the latest & greatest release of all software available is neat, but sometimes the stability of knowing "nothing on my system changes in any significant way until I ask it to upgrade to the next LTS" is just more valuable.

My primary example is my work laptop: I use a fairly fixed set of tools and for the few places where I need up-to-date ones I can install them manually (they are often proprietary and/or not-quite established tools that aren't available in most distros anyway).

A similar situation exists on my primary homelab server: it's running Debian because all the "services" are running in docker containers anyway, so the primary job of the OS is to run those containers and otherwise stay out of my way. Upgrading various system components at essentially random times runs counter to that goal.


Don't you have it exactly the wrong way around?

Also, since the hate itself is already irrational, any additional "quirks" in that hate shouldn't be surprising anyone.

"Use vim in SSH" is not a great answer to asking for a convenient way to edit a single file, because it requires understanding multiple somewhat-complex pieces of technology that OP might not be familiar with and have a reasonably steep learning curve.

But I'd still like to explain why it pops up so much. And the short version is very simple: versatility.

Once you've learned how to SSH into your server you can do a lot more than just edit a file. You can download files with curl directly to your server, you can move around files, copy them, install new software, set up an entire new docker container, update the system, reboot the system and many more things.

So while there are definitely easier-to-use solutions to the one singular task of editing a specific file on the server, the "learn to SSH and use a shell" approach opens up a lot more options in the future.

So if in 5 weeks you need to reboot the machine, but your web-based-file-editing tool doesn't support that option, you'll have to search for a new solution. But if you had learned how to use the shell then a simple "how do I reboot linux from the shell" search will be all that you need.

Also: while many people like using vim, for a beginner in text based remote management I'd recommend something simpler like nano.
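To illustrate that versatility: once SSH access works, even scripting against the server is a small step. Here's a minimal sketch using the Python paramiko library (the hostname, username and file paths are made up, and it assumes key-based auth is already configured):

```python
# Minimal remote-management sketch with paramiko (`pip install paramiko`).
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a homelab, not for production
client.connect("my-server.local", username="admin")  # assumes SSH keys are set up

# The same connection covers many tasks, not just editing one file:
_, stdout, _ = client.exec_command("uptime")
print(stdout.read().decode())

# Fetch a config file, edit it locally, push it back - an alternative to vim/nano:
sftp = client.open_sftp()
sftp.get("/etc/myapp/config.yml", "config.yml")
sftp.put("config.yml", "/etc/myapp/config.yml")
sftp.close()
client.close()
```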


There are lots of very good approaches in the comments.

But I'd like to play the devil's advocate: how many of you have actually recovered from a disaster that way? Ideally as a test, of course.

A backup system that has never done a restore operation must be assumed to be broken. Similar logic should be applied to disaster recovery.

And no: I use a combined Ansible/Docker approach that I'm reasonably sure could quite easily recover most stuff, but I've never fully rebuilt from just that.
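If you want to turn that into an actual test, the drill can be as simple as restoring into a scratch directory and comparing checksums against the live data. A rough Python sketch; the paths and the restore command are placeholders for whatever backup tool you actually use:

```python
# Restore-drill sketch: restore a backup to a scratch dir, then verify it
# matches the live data. "my-backup-tool" is a placeholder (restic, borg, ...).
import hashlib
import pathlib
import subprocess

LIVE = pathlib.Path("/srv/data")            # placeholder paths
SCRATCH = pathlib.Path("/tmp/restore-test")

def tree_digest(root: pathlib.Path) -> str:
    """Hash every file's relative path and content into a single digest."""
    h = hashlib.sha256()
    for f in sorted(root.rglob("*")):
        if f.is_file():
            h.update(str(f.relative_to(root)).encode())
            h.update(f.read_bytes())
    return h.hexdigest()

subprocess.run(["my-backup-tool", "restore", "--target", str(SCRATCH)], check=True)

assert tree_digest(LIVE) == tree_digest(SCRATCH), "restore differs from live data!"
print("restore drill passed")
```

(On a live system the data may have changed since the last backup, so treat a mismatch as a signal to investigate, not automatic proof of failure.)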


Yes, but those minor traces are easy enough to remove, especially if you don't care about being "certified" by Google (i.e. are not planning to run the Google services).


Why do you think it'll overtake electric cars? The energy efficiency of hydrogen cars is significantly worse, as they introduce extra steps into the pipeline from energy generation to movement.

The only major advantage they have is "ICE-like" fuelling, which has a bunch of major caveats attached to it (as in: it's nowhere near as simple a system as ICE refuelling; everything from generation to transport to getting-it-into-the-car is way more complex and thus more expensive and error-prone).
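Back-of-the-envelope, the efficiency argument is just multiplying the losses of each step in the chain. The figures below are illustrative ballparks, not sourced measurements:

```python
# Illustrative chain efficiencies (ballpark figures, not sourced data).
bev = 0.95 * 0.90 * 0.90         # transmission/charging, battery round-trip, motor
h2  = 0.70 * 0.90 * 0.55 * 0.90  # electrolysis, compression/transport, fuel cell, motor

print(f"battery-electric: ~{bev:.0%} of generated energy reaches the wheels")
print(f"hydrogen fuel cell: ~{h2:.0%} of generated energy reaches the wheels")
```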


UNEXPECTED ITEM IN THE BAGGING AREA

UNEXPECTED ITEM IN THE BAGGING AREA

UNEXPECTED ITEM IN THE BAGGING AREA


The issue is that according to the spec the two DNS servers provided by DHCP are equivalent. While most clients favor the first one as the default, that's not universally the case and when and how it switches to the secondary can vary by client (and effectively appear random). So you won't be able to know for sure which client uses your DNS, especially after your DNS server was unreachable for a while for whatever reason. Personally I've "just" gotten a second Pi to run redundant copies of PiHole, but only having a single DNS server is usually fine as well.
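If you want to verify what each of your advertised servers actually answers, a quick check with the Python dnspython package works; the server IPs below are just examples for two Pi-hole instances:

```python
# Compare answers from the primary and secondary DNS server (dnspython 2.x,
# `pip install dnspython`). IPs are example values.
import dns.resolver

for server in ("192.168.1.2", "192.168.1.3"):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    answer = resolver.resolve("example.com", "A")
    print(server, [rr.to_text() for rr in answer])
```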


Just FYI: the often-cited NIST 800-88 standard ("Guidelines for Media Sanitization") no longer recommends/requires more than a single pass of a fixed pattern to clear magnetic media. See https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-88r1.pdf for the full text. In Appendix A it states:

Overwrite media by using organizationally approved software and perform verification on the overwritten data. The Clear pattern should be at least a single write pass with a fixed data value, such as all zeros. Multiple write passes or more complex values may optionally be used.

This is the standard that pretty much birthed the "multiple passes" idea, but modern HDD technology has made that essentially unnecessary (unless you are combating nation-state-sponsored attackers, in which case you should be physically destroying anything anyway, preferably using some high-heat method).
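For what it's worth, that single "Clear" pass is trivial to express in code. A sketch only: the device path is a placeholder, it needs root, it irrevocably destroys the data, and established tools (e.g. GNU shred) are the safer choice:

```python
# Single fixed-pattern overwrite pass, as per the "Clear" guideline above.
# /dev/sdX is a placeholder - running this wipes that device.
import os

DEVICE = "/dev/sdX"
CHUNK = 4 * 1024 * 1024  # write 4 MiB of zeros at a time

fd = os.open(DEVICE, os.O_WRONLY)
zeros = b"\x00" * CHUNK
try:
    while True:
        os.write(fd, zeros)
except OSError:  # ENOSPC once the end of the device is reached
    pass
finally:
    os.fsync(fd)
    os.close(fd)
# The standard also calls for verifying the overwritten data afterwards.
```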

I really like it and it clearly passed the code review without any issues. But I find the diagnostic messages a bit lacking; it can be hard to debug.


I honestly have no real opinion on this (yet), as I don't know if that would help or not.

But 90% of all policy proposals from the UK end up being terrible ideas, so I'll just assume this is stupid.

First: love that that's a thing, but I find the blog post hilarious:

We believe this choice must include the one to migrate your data to another cloud provider or on-premises. That’s why, starting today, we’re waiving data transfer out to the internet (DTO) charges when you want to move outside of AWS.

and later

We believe in customer choice, including the choice to move your data out of AWS. The waiver on data transfer out to the internet charges also follows the direction set by the European Data Act and is available to all AWS customers around the world and from any AWS Region.

But sure: it's out of their love for customer choice that they offer this now. The fact that it also fulfills the requirements of the EDA is purely coincidental; they would totally have done it anyway.

Remember, folks: regulation works. Sometimes corporations need the state(s) to force their hand to do the right thing.

So these are people that sell access to (presumably media-filled) existing Plex installations?

That does seem like a problematic thing to do and I understand why Plex wants to shut that down.

But surely their tons of online integrations and user-account requirements give them other tools at their disposal than outright blocking a major VPS provider; that seems insane.


Just a little addition: the majority of things that people associate with Linux as per your first item are actually shared by many/most Unix-like OSes and are defined via the various POSIX standards.

That's not to say that Linux doesn't have its own peculiarities, but they are fewer than many people think.

He's already been #blessed by the Catholic Church since 2020. That's a precondition for becoming a saint.

There is no pace at which he could have gone that wouldn't have created some backlash.

If he had waited a hundred more years, there would still have been backlash.

The Catholic Church is an organization that is built around stability first and foremost. It changes, of course, but very, very slowly. That is very much by design.

That design has helped it "survive" for as long as it has, but it might end up being what eventually leads it into irrelevancy.

And I guarantee that the majority of it will simply copy today's Mickey Mouse as opposed to the one in Steamboat Willie.

But that version isn't entering the public domain any time soon.

Somewhere a monkey's paw finger curls and you're moved to the timeline where the world is in a nuclear winter ...

1 more...

Like many other security mechanisms VLANs aren't really about enabling anything that can't be done without them.

Instead it's almost exclusively about FORBIDDING some kinds of interactions that are otherwise allowed by default.

So if your question is "do I need VLAN to enable any features", then the answer is no, you don't (almost certainly, I'm sure there are some weird corner cases and exceptions).

What VLANs can help you do is stop your PoE camera from talking to your KNX and your Chromecast from talking to your Switch. But why would you want that? They don't normally talk to each other anyway. Right. That "normally" is exactly the point: one major benefit of VLANs is not just stopping "normal" phone-homes but containing any security incident to as small a scope as possible.

Imagine if someone figured out a way to hack your switch (maybe even remotely, while you're out!). That would be bad. What would be worse is if that attacker then suddenly had access to your pihole (which is password protected, and the password never flies around your home network unencrypted, right?!) or your PC or your phone ...

So having separate VLANs where each one contains only devices that need to talk to each other can severely restrict the actual impact of a security issue with any of your devices.
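And verifying that the separation actually holds is straightforward: from a device on the isolated VLAN, connection attempts into the other segments should fail. A small Python sketch; the addresses and ports are made-up examples for a hypothetical layout:

```python
# Probe cross-VLAN reachability; these targets should NOT be connectable
# from the isolated segment. Addresses/ports are hypothetical examples.
import socket

targets = [
    ("192.168.10.2", 22),  # management VLAN host
    ("192.168.20.5", 80),  # pihole admin page on the trusted LAN
]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} reachable - isolation is NOT working")
    except OSError as e:
        print(f"{host}:{port} not connectable ({e}) - blocked as intended")
```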


This feels like an XY problem. To be able to provide a useful answer, we'd need to know what exactly you're trying to achieve. What goal are you pursuing with the VPN, and what goal with using the client IP?

It isn't and it's a good idea.

But somehow the US doesn't seem to be as good at having one as they might want to think:

https://en.wikipedia.org/wiki/World_Press_Freedom_Index

It's not terrible in that index, but it's below most European countries.

Edit: or maybe you prefer an index from a US institution: https://en.wikipedia.org/wiki/Freedom_of_the_Press_(report). The ranking looks pretty similar, though.


I personally prefer podman, due to its rootless mode being "more default" than in docker (rootless docker works, but it's basically an afterthought).

That being said: there are just so many tutorials, tools and other resources that assume docker by default that starting with docker is definitely the less cumbersome approach. It's not that podman is significantly harder or has many big differences, but all the tutorials are basically written with docker as the first target in mind.

In my homelab the progression was docker -> rootless docker -> podman and the last step isn't fully done yet, so I'm currently running a mix of rootless docker and podman.
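The upside of that compatibility: most docker-first instructions translate almost one-to-one. A tiny illustrative sketch (the image name is fully qualified, since podman doesn't necessarily assume the docker.io registry, depending on configuration):

```python
# The same container invocation works with either engine, which is why
# docker-centric tutorials usually apply to podman directly.
import shutil
import subprocess

engine = shutil.which("podman") or shutil.which("docker")
if engine is None:
    raise SystemExit("neither podman nor docker found")

subprocess.run(
    [engine, "run", "--rm", "docker.io/library/alpine:latest", "echo", "hello"],
    check=True,
)
```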

My goal is to set up my services so that they can mostly live with limited connectivity. Because sometimes my phone has no internet and sometimes my at-home ISP craps its pants; one or the other will happen eventually.

So it's more about being able to gracefully resume than "perfect access".

In other words: if something stops syncing or I can't access some specific service, that's mostly acceptable to me. What isn't acceptable is if the syncing gets into a state that needs intervention to fix, or if one of my services doesn't come back when connectivity is restored.

So in a sense resilience is more important than 100% accessibility.

The small number of exceptions (mostly password saves and other minor bits) I make sure to actively sync to my personal devices so that if my selfhosted stuff goes away I'm not 100% stranded.