IAm_A_Complete_Idiot

@IAm_A_Complete_Idiot@sh.itjust.works
0 Posts – 61 Comments
Joined 1 year ago

When you make a project with git, what you're doing is essentially creating a database that tracks the sequence of changes (the history) that builds up your codebase. You can send this database to someone else (in other words, they can clone it), and they can make their own changes on top. If they want to send changes back, they can send you "patches" to apply to your own database (or rather, your own history).

Note: everything here is decentralized. Everyone has the entire history, and they send the history they want others to have. Now, this can be a hassle with many developers involved. You can imagine everyone sending everyone else patches, each putting them into their own tree, and vice versa. It's a pain to coordinate. So in practice what ends up happening is we have a few (or often, one) repos that work as a source of truth. Everyone sends patches to that repo - and pulls patches down from it. That's where code forges like GitHub come in. Their job is to host this source-of-truth repo and essentially coordinate which patches are "officially" in the code.

In practice, even things like the Linux kernel have sources of truth. Linus's tree is the "true" Linux; all the maintainers have their own trees that work as the source of truth for their own versions of Linux (from which they send changes back to Linus when ready), and so on. Your company might have its own repo for an internal project and send changes from it to the maintainers as well.

In practice that means everyone has a copy of the entire repo, but we designate one repo as the real one for the project at hand. This entire, somewhat convoluted mess is just a way to decide where you get your changes from. Sending your changes to everyone doesn't scale, so in practice we just choose whom everyone coordinates with.

Git is completely decentralized (it's just a database, and everyone has their own copy), but project development isn't. Code forges like GitHub just reflect that.

Second person excited for bcachefs here; I'm planning on swapping over as soon as it supports scrubbing.

See if you can find a book on Python and work through it a bit. Sit down with him once you know some, and try making something basic with turtle or the like. Your goal is to keep his interest up and not make it a "studying" thing. For a kid, the most important part is being able to see the results of what he's making. Drawing simple shapes, cool patterns, etc. in Python is a nice way to start, and it can teach all the basics he initially needs to know.

There are also simple robot kits for kids that could be fun to play with, and from there he could eventually move on to basic electronics.

W.r.t. safe browsing, I'd try blocking egregiously bad stuff with some DNS blocker that you either buy or host using something like Pi-hole. Use it to block ads and well-known "bad" domain names. Also have a conversation about it. (I'm not sure how much this helps here considering he's 8... but better than nothing.)

Because CDNs lighten load and work as a global cache for load times? Game servers and plenty of other types of servers have exposed their IP since the dawn of time.

Linus has stepped away from kernel development before, and probably will again. Life continues on.

There are built-in functions that leak memory and are perfectly safe. You can also do it really trivially by making a reference-count cycle. https://doc.rust-lang.org/book/ch15-06-reference-cycles.html

Rust only prevents memory unsafety - and memory leaks are perfectly safe. It's use-after-frees, double frees, etc. that it prevents.
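
To make that concrete, here's a minimal sketch (standard library only) of both safe ways to leak:

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    // a node may point at another node, which makes cycles possible
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // 1. leaking via a safe built-in: the destructor never runs
    let buf = Box::new([0u8; 1024]);
    std::mem::forget(buf);

    // 2. leaking via a reference-count cycle
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    // a and b now keep each other alive: when the local bindings drop,
    // each strong count stays at 1 and neither allocation is ever freed
}
```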

Yeah, and Linux still doesn't have a good answer to AD for managing fleets of end-user machines. Linux has a lot going for it - but Windows isn't strictly inferior or anything.

Honestly, the entire AD suite with auth and everything else built in is genuinely a good product. And if what you want is supported by Microsoft, their other services are decent as well.

Idk about everyone else, but I was fine with the specs. A basic Linux machine that can hook up to the network and run simple Python scripts was plenty for a ton of use cases. They didn't need to be desktop competitors. The market wasn't for small form factor, high performance machines, and I'd argue it didn't need to be.

Rust doesn't guarantee the absence of memory leaks any more than Java/C++ do, so sadly I'm not sure it would help here. :)

How would Rust fare any better than a tracing GC? Realistically I'd expect a GC'd language to use more memory and have less deterministic memory management - but I fail to see a case where Rust would prevent memory leaks and GC languages wouldn't.

The point of federation is that your content doesn't stay only on your server. The person you're talking to can be on a different one, and their admin can see it too. Also, I wouldn't want to be able to access content from any user - it's a "no trust needed" thing.

Instances aren't banning other instances for federating with communities they dislike. Instances ban other instances for hosting content they dislike. The benefit of starting an instance is that you choose whom to federate with.

According to the benchmark in the article it's already way faster at n = 1000. I think you're overestimating the cost of multiplication relative to just cutting down n logarithmically.

log_2(1000) is roughly 10, so that's a growth factor of about 10 instead of 1000. For 2000 it would be roughly 11, and for 4000 roughly 12. Logs are crazy.
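
A quick standalone sanity check of that arithmetic (nothing from the article assumed) - doubling n adds exactly one to log2(n):

```rust
fn main() {
    // prints log2(1000) ≈ 9.97, log2(2000) ≈ 10.97, log2(4000) ≈ 11.97
    for n in [1000.0_f64, 2000.0, 4000.0] {
        println!("log2({n}) = {:.2}", n.log2());
    }
}
```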

The idea is that malware you installed would presumably run under your user account and have access. You could explicitly give it a different UID or even containerize it to counteract that, but by default a process can access everything its UID can, which isn't great. And even still, to this day that's how users execute a lot of processes.

Windows isn't much better here, though.
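
As a rough sketch of why that's a problem (the key path here is just a hypothetical example), any process running under your UID can read whatever your user can, with no prompt:

```rust
use std::fs;

fn main() {
    // SSH private keys are readable by their owner, so by default any
    // process with your UID can read them - no extra privilege required
    let home = std::env::var("HOME").unwrap_or_default();
    match fs::read_to_string(format!("{home}/.ssh/id_ed25519")) {
        Ok(_) => println!("read the key, nothing stopped me"),
        Err(e) => println!("couldn't read it (missing, or sandboxed): {e}"),
    }
}
```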

Web 3 is different things depending on who you ask: blockchain, decentralization, or whatever else. We dunno, we aren't there yet. I personally believe federated services have a chance of being Web 3 (and blockchain is not relevant).

Web 2 is basically big tech on the internet, everything becoming centralized. Everything became easy to use for the end user, all point and click.

Web 1 was the stuff prior to that, when the internet was the wild west.

The vulnerability has nothing to do with accidentally logging sensitive information; it's about crafting a special payload which, when logged, gets glibc to write into memory it isn't supposed to, because it didn't allocate memory properly. glibc goes outside the bounds of its allocation and writes into other memory regions, which an attacker can carefully craft to look how they want.

Other languages wouldn't have this issue because:

  1. they wouldn't willy-nilly allocate a pointer directly like this, but rather use a safer abstraction on top (like a C++ vector), and

  2. they'd have bounds checks whenever the compiler can't prove an access stays within valid memory. (Manually calling .at() in C++, or even better, using a language like Rust, which makes bounds checks the default and unchecked access opt-in via a special method.)
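
A minimal Rust sketch of that default - plain indexing is checked, .get() is the checked non-panicking form, and unchecked access requires an explicit unsafe opt-in:

```rust
fn main() {
    let v = vec![1, 2, 3];

    // plain indexing is bounds-checked by default:
    // `v[10]` would panic rather than touch out-of-bounds memory

    // .get() is the checked, non-panicking form
    assert_eq!(v.get(10), None);

    // unchecked access exists, but you must opt in with `unsafe`
    let first = unsafe { *v.get_unchecked(0) };
    println!("in-bounds element: {first}");
}
```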

Edit: C's bad security track record is well known - it's the primary motivator for introducing Rust into the kernel. Google and Microsoft both report that around 70% of their security vulnerabilities are memory safety issues of the kind C invites, the curl maintainer talks about how they use sanitizers and best practices and still run into the same issues, and even ubiquitous, security-critical tools like sudo and polkit suffer from them regularly.

Containers don't typically have an init; your process is the init - so no extra processes are started beyond the ones you care about.

Kanidm wants direct access to the Let's Encrypt cert. It refuses to even serve over HTTP, or put any traffic over it, since that could allow potentially bad configurations. It has a really stringent, opinionated policy around security.

Only for its child processes, e.g. calling a bash script with a modified PATH. Still problematic, though.

...I suppose they could also modify your .bashrc equivalent.
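
A minimal sketch of that child-only scope (standard library, assuming a Unix-ish system with sh available):

```rust
use std::process::Command;

fn main() {
    // the modified PATH is visible only to this child process,
    // not to the parent shell or the rest of the system
    let out = Command::new("sh")
        .arg("-c")
        .arg("echo $PATH")
        .env("PATH", "/tmp/attacker-bin:/usr/bin")
        .output()
        .expect("failed to spawn sh");
    print!("child sees PATH = {}", String::from_utf8_lossy(&out.stdout));
}
```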

Not in this one; iirc they actually reverse engineered and were working off of Apple's libraries, rather than proxies.

The point is to minimize privilege to the least possible - not to make it impossible to create higher-privileged containers. If a container doesn't need direct raw hardware access, to manage low ports on the host network, etc., then why should I give it root and let it be able to do those things? Mapping it to a user, controlling what resources it has access to, and restricting its capabilities means that in the event my container gets compromised, my entire host isn't necessarily screwed.

We're not saying "sudo shouldn't be able to run as root", but that "by default things shouldn't be run with sudo - and you need a compelling reason when you do".

I think the key there is funding from big companies. There are tons of standards and the like in which big companies take part, in terms of both code and financial support. Big projects like the Rust compiler, the Linux kernel, Blender, etc. all seem to have a lot of code and money coming in from big companies. Sadly there's only so much you can get from individuals - pretty much the only success story I know of is the Wikimedia Foundation.

How much of that is what GitHub encourages, and how much is what users prefer? Plenty of users seem to enjoy Phabricator / Gerrit for code review in practice, precisely because of their workflows.

The version control system and all the associated code aren't tied to any one service - yes.

Not having to swap over to a ticketing system just to see the context of a change is really nice (or to add context on why changes were done a certain way). One line that says what you changed, then any context such as why it was done that way and important notes about the change, works wonders. It's pretty much the exact model the Linux kernel uses, and it makes looking at changes great for anyone down the line.

Also, GitHub PRs, at least to me, feel like they encourage reviewing changes by the total diff of the entire PR, not commit by commit. I don't want a slog of commits that don't add any value - it just makes things like reverts more annoying. Tools like Gerrit and Phabricator enforce review of individual commits / changes / whatever you want to call them, not branch diffs.

There's a transaction fee, the higher you pay the more priority you have (since miners get a cut).

I don't think the SATA acronym is right...

Also, it's worth noting that cargo is a fairly good package manager, all things considered. It has a native lock file format, unlike requirements.txt. Building code that uses cargo typically just works with cargo build - no futzing around with special commands for your project's specific build tooling like in JS. I can't speak for Maven, since I've only used it a little and never got comfortable with it... but cargo just doesn't have many major paper cuts.

Admittedly, cargo isn't really special - it's just a classic package manager that learned from the hindsight of its predecessors. The improvements are minor, if any. There is genuinely innovative build tooling out there - things like buck2, nix, etc. - but those are an entirely different ball game.

I have an auto deployed server with only a root user and service accounts... I think that's valid. :)

Not OP, but personally, yes. Every code forge supporting only git further entrenches git's monopoly on the VCS space. Git isn't perfect, nor should it be treated as if it were.

The above is probably the reason so many alternative VCSs have to kludge themselves onto git's file format despite likely being better served by their own.

Interesting new VCSs, all supporting their own native format as well, for various reasons:

  • pijul
  • sapling
  • jujutsu

Sapling is developed by Meta, Jujutsu by an engineer at Google. Pijul is not tied to any company and was developed by an academic, iirc. If you're okay with not-new:

  • mercurial
  • fossil
  • darcs

VCSs are still being iterated on, and tooling being super git-centric hurts that.

It being objectively better than SVN or CVS doesn't mean it's the best we can do. Git has all sorts of non-ideal behaviors that other VCSs don't. Pijul's data structure, for instance, is inherently different from git's and can't be retrofitted on top. Making tooling only support git effectively kills off any potential competitor that could be superior to git.

One example is that pijul specifically lets you get away from the idea that moving commits between branches changes their identity, because pijul builds a tree of diffs. If two subtrees of diffs are distinct, they can always be applied without changing the identity of those diffs. This means cherry-picking a commit and then later merging it doesn't effectively merge that commit twice and produce a merge conflict.

That's one example of how a VCS can be better.

The proper way to handle issues like these is process-level permissions (i.e. capability systems) instead of user-level ones. Linux cgroups, namespaces, etc. are already moving that way, and in effect that's the direction Windows is trying to head too. (Windows has its own form of containerization called AppContainers, which UWP apps use, and its own capability system.)

The Wikimedia Foundation is; none of the other things I listed are.

This is about PKI. An HTTPS server has a TLS cert, and that TLS cert is signed by / created by a certificate authority (CA). When you connect to a service over HTTPS, a TLS handshake happens. The handshake starts with the client asking the server to set up a session, and the server hands back its certificate. The certificate contains a public key that can be used to encrypt traffic, but not decrypt it. The client makes sure the certificate is signed by a CA it trusts (such as Let's Encrypt).

Once the client has this certificate, it sends a key to the server encrypted with that public key, and the server decrypts it. They both now use this shared key to communicate.

The MITM server can't compromise the session, because if it swaps the certificate (in other words, the public key the server sent), that key won't be trusted, since it isn't signed by a CA the client trusts.

If the MITM tries to send its own shared key encrypted with the server's certificate, it doesn't really matter, since it can't read the client's messages anyway to get the shared key from the client. If it just forwards them, then you effectively have two separate HTTPS sessions with their own keys, and the server will treat them as distinct.

I think the more intuitive model (to me), instead of a lightweight virtual machine or a neatly packaged-up OS, is a process shipped with an environment. That environment includes things like files and other executables (like apt), but in and of itself it doesn't constitute an OS. It doesn't have its own kernel, filesystem drivers, or anything like that. By default it doesn't run an init system like systemd either, nor any applications other than the process you execute in the environment.

Rust does this too. In practice you just bump the lock file and rebuild. It can be a bit rebuild-heavy, but it's not too bad with a proper cache.

Problem is, this assumes everyone has to build their own captcha solver. It's definitely a bare-minimum barrier to entry, but it's really not a sustainable solution to begin with.

Kanidm doesn't require a CA; it just requires a cert for serving HTTPS (and it enforces HTTPS - it refuses to even serve over HTTP). I think that was just the OP not quite understanding the conceptual ideas at play.

Yeah. Their docs explain the reasoning, but iirc it's that Kanidm is a security-critical resource, and it aims to not even allow any kind of insecure configuration - even on the local network. All traffic to and from Kanidm should be encrypted with TLS. I think they let you use self-signed certs, though?