PaX [comrade/them, they/them]

@PaX [comrade/them, they/them]@hexbear.net
2 Posts – 32 Comments
Joined 1 year ago

Very tired nerd who doesn't know how to speak correctly

Ask me about floppa, Plan 9, or computer architecture or anything computers really (if you want)

The only zoomer qualified to operate an RBMK reactor

Researcher of rare and powerful beanis

:cat-vibing:

No, by running a relay or exit node you are opting in to routing traffic that could contain CSAM. This is a problem with all anonymous unmoderated distributed systems like Tor. With Freenet, for example, you're even opting in to storing it (pieces of it in encrypted form that can't be accessed without the content hash key).

Privacy is good but so is censorship (moderation). The censorship just needs to be implemented by an accountable group of people who share the same interests as the users. Tor is trying to solve a problem that can only be solved through social struggle with institutions of power.

Thanks for deleting your post. I hope our two communities can live and share in peace. There are certainly some hexbears over here who have been too antagonistic as well as some people here who seem to want to stir things up.

:blahaj-heart: (we have no such emoji on our instance but maybe you do lol)

Running grep without parameters is also pretty fucking useless.

The difference is grep is a simple tool that can take in text, transform it, and output it to a console. It operates in a powerful and easy-to-understand way by default (take in text and print the lines that contain the search pattern). This vmalert tool is just an interface to another, even more complicated piece of software.
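
For instance, grep's default behavior is just that; the file name below is made up for illustration:

    # print every line of a (hypothetical) log file containing "error"
    $ grep error server.log
    # or filter the output of another program
    $ dmesg | grep -i usb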

Claims to have a Unix background, doesn't RTFM.

Since when do Unix tools output 3,000-word-long usage info? Even GNU tools don't come close...

Translation: Author does not understand APIs.

The point is that these abstractions do not mesh with the rest of the system. HTTP and REST are very strange ways to accomplish IPC or networked communication on Unix, where someone would normally accomplish the same thing with signals, POSIX IPC, a simpler protocol over TCP with BSD sockets, or anything else already in the base system. It does make sense to develop things this way, though, if you're a corpo web company trying to manage ad-hoc grids of Linux systems for your own profit rather than trying to further the development of the base system.

Ok. Now give me high availability

I would hope the filesystems you use are "high availability" lol

atomic writes to sets of keys

You're right, that would be nice. Someone should put together a Plan 9 fileserver that can do that or something.

caching, access control

Plan 9 is capable of handling distributed access controls and caching (even of remote fileservers!). There's probably some Linux filesystems that can do that too.

In the end, it's not so much about specific tools that can accomplish this but that there are alternatives to the dominant way of doing things and that the humble file metaphor can still represent these concepts in a simpler and more robust way.

This reads as "I applied to the jobs and got rejected. There's nothing wrong with me, so the jobs must be broken".

This is maybe the worst way of interpreting what they said. They can come and correct me if I'm wrong but I read that as: they have a particular ideological objection to this "cloud" ecosystem and the way it does things. It's not a lack of skill as your comment implies but rather a rejection of this way of doing things.

Another thing to keep in mind is that imperialism also has the effect of driving down wages in the imperial core since the capitalist can pay their workers less if the price of basic, essential commodities can be decreased by super-exploitation in the imperial periphery. This is a major reason why real wages in the US have been stagnant for a while, for example. So this would have a counterbalancing effect on how much a first-world worker would need to pay proportionally to their income for a case of soda if the process of imperialism were ended.

Why don't they just move to El Salvador if they like Bitcoin so much :smuglord: :very-intelligent:

The vast majority of drivers are included with the Linux kernel now (in tree), so the difference usually comes down to kernel version (newer kernels have more drivers, of course) or kernel configuration set at compile time (this can be anything from including or excluding drivers, to turning driver features on and off, to more fundamental changes beyond drivers).

You can get kernel version info from uname -a. A lot of the time, probably most of the time (this is also down to configuration), you can get kernel configuration info from /proc/config.gz (use gzip -d to decompress) or something like /boot/config.

Then you can run diff on the configurations of two different distro kernels you're interested in to see how the two distributions' kernels were set up differently.
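
Roughly what I mean, as a sketch (the file names are just examples and will differ on your system):

    $ uname -a                                    # kernel version and build info
    $ gzip -dc /proc/config.gz > running.config   # dump the running kernel's config, if exposed
    $ diff /boot/config-from-other-distro running.config   # see which options differ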

This could also be caused by different setups of userspace tools or UI that interact with these drivers in different, sometimes worse ways, but this is usually much less likely in my experience (most Linux distros do things like this the same way these days tbh).

Oh, also, there are a lot of drivers that require vendor-supplied firmware or binary blobs to function, and most of the time distros don't bake these into the kernel (although it is possible). Different distros might have more or fewer of these blobs available or installed by default, or they might be packaged differently. The kernel should print an error message if it can't find blobs it needs, though.

I guess there's kinda a lot to consider lol. Sorry if all of this is obvious

What hardware are you talking about specifically?

You just said that this software was much more complex than Unix tools.

That's the problem. The reason Unix became so popular is that it has a highly integrated design and a few heavily reused abstractions. A lot of simple parts build up in predictable ways to accomplish big things. The complexity is spread out and minimized. The traditional Unix way of doing things is definitely very outdated though. A modern Unix system is like a 100-story skyscraper with the bottom 20 floors nearly abandoned.

Kubernetes and its users would probably be happier if it were used to manage a completely different operating system. In the end, Kubernetes is trying to impose a semi-distributed model of computation on a very NOT distributed operating system, at the cost of greater system complexity and worse maintainability and security.

Until you need authentication, out of the box libraries, observability instrumentation, interoperability... which can be done much more easily with a mature communication protocol like HTTP.

I agree that universal protocols capable of handling these things are definitely useful. This is why the authors of Unix, when they were developing Plan 9, moved away from communication and protocols that only function on a single system and developed the Plan 9 Filesystem Protocol (9P) as the universal system "bus" protocol, capable of working over networks and on the same physical system. I don't bring this up to be an evangelist. I just want to emphasize that there are alternative ways of doing things. 9P is much simpler and more elegant than HTTP. Also, many of the people who worked on Plan 9 ended up working for Google and having some influence over the design of things there.
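
As a small illustration of the "over networks and on the same system" point: Linux itself ships a 9P client (v9fs), so something like this should work against a 9P fileserver (the address and mountpoint here are made up):

    # mount a remote 9P export over TCP using the kernel's v9fs client (run as root)
    $ mount -t 9p -o trans=tcp,port=564,version=9p2000.L 192.168.1.10 /mnt/nine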

They're not, and I'm disappointed that you think they are. Any individual filesystem is a single point of failure. High availability lets me take down an entire system with zero service disruption because there's redundancy, load balancing, disaster recovery...

A filesystem does not exclusively mean an on-disk representation of a tree of files with a single physical point of origin. A filesystem can be just as "highly available" and distributed as any other way of representing the resources of a system, if not more so, because of its abstractness. Also, you're "disappointed" in me? Lmao

They can, and they still do... Inside the container.

And how do you manage containers? With bespoke tools and infrastructure removed from the file abstraction. Which is another way Kubernetes is removed from the Unix way of doing things. Unless I'm mistaken, that is; it's been a long time since I touched Kubernetes.

because rejecting a way of doing things based on preconception is a lack of flexibility

It's not a preconception. They engaged with your way of doing things and didn't like it.

in cloud ecosystems that translates to a lack of skill.

By what standard? The standard of you and your employer? In general, you seem to be under the impression that the conventional hegemonic corporate "cloud" way of doing things is the only correct way and that everyone else is unskilled and not flexible.

I'm not saying that this approach doesn't have merits, just that you should be more open-minded and not judge everyone else seeking a different path to the conventional model of cloud/distributed computing as naive, unskilled people making "bad-faith arguments".

I've struggled with many of the same tools. What we need is real distributed operating systems, like Plan 9, not increasingly complicated hacks and kludges to keep old-world operating systems relevant in a networked world.

We are so back

True, but a man page is a different thing from a tool's built-in usage information.

I'm glad I at least got closer to understanding your criticism than they did.

Don't let anyone tell you you're old or naive or "stuck in the past" for thinking these things! There is a real crisis in the operating systems world that your criticism reflects. It takes an army of software engineers and billions of dollars to keep this ecosystem and these systems going, and they still struggle with reliability and security. The reason it's like this is an issue of economic organization.

We can't go back to the old way of doing things but we can't keep maintaining these fundamentally flawed systems either. You may find something inspiring in this brief presentation by Rob Pike: http://doc.cat-v.org/bell_labs/utah2000/

:sicko-mega: Plan 9 posting

OpenBSD, RISC-V, and 9front mentioned?

:sicko-yes:

Haven't listened to BSDNow in a while, but maybe I'll listen to this episode

yea

I uhh wasn't literally about to go do this or anything.......

That's a great way of putting it, thanks. I'm actually only 30 years old (lol).

Yeahh, and I saw someone compare you to the "old man yelling at cloud" lol. Even though there are good reasons to yell at the cloud hehe

Sometimes I feel there's so few people who've ever used or written software at this level in the part of the industry I find myself in. It seems more common to throw money at Amazon, Microsoft, and more staff.

I've replaced big Java systems with small Go programs and rescued stalled projects trying to adopt Kubernetes. My fave was a failed attempt to adopt k8s for fault tolerance when all that was going on was an inability to code around TCP resets (concurrent programming helped here). That team wasn't "unskilled"; they were just normal people being crushed by complexity. I could help because they just weren't familiar with the kind of problem solving I was, nor what tooling is available without installing extra stuff and dependencies.

I haven't had the "privilege" of working for a wage in the industry (and I still don't know if I want to) but I think I know what you mean. I've seen this kind of tendency even in my friends who do work in it. There is less and less of a focus on a whole-system kind of understanding of this technology in favor of an increased division of labor to make workers more interchangeable. Capitalists don't want people with particular approaches capable of complex problem-solving and elegant solutions to problems; they want easily replaceable code monkeys who can churn out products. Perhaps there is a parallel here with what happened to small-scale artisan producers of commodities in early capitalism, as they were wiped out, absorbed into manufactories, and forced to do ever smaller and more repetitive tasks as part of the manufacture of something they once produced from scratch to finished product. Especially concerning is the increasing use of AI by employed programmers. Well, usually it's their companies forcing them to use AI to try to automate their work.

And like your example shows, this has real bad effects on the quality of the product and the team that develops it. From the universities to the workplace, workers in this industry are educated in the virtues of object-oriented programming, encapsulation, tooling provided by the big tech monopolies, etc. All of these are methods of trying to separate programmers from each other's work and from the systems they work on as a whole, and to make them dependent on frameworks sold or open-sourced™ by tech monopolies at the expense of creative and free problem-solving.

Glad at least you were able to unstall some of the projects you've been involved in!

Thanks for your understanding :)

Glad we could share ideas :3

You and other people in the thread gave me a lot to think about. Hope this comment made some sense lol.

There are some purpose-built ARM Linux laptops available but as an owner of an unused Pinebook Pro... can't recommend :agony-deep:

Walking the path of a PC hater is not easy

This has always felt untrue to me. The command line has always been simple parts. However we cannot argue that this applies to all Unix-like systems: The monolithic Linux kernel, Kerberos, httpd, SAMBA, X windowing, heck even OpenSSL. There's many examples of tooling built on top of Unix systems that don't follow that philosophy.

I can see why you would come to think that if all you've been exposed to is Linux and its orbiting ecosystem. I agree with you that modern Unix has failed to live up to its ideals. Even its creators began to see its limitations in the late 80s and started developing a whole new system from scratch.

Depends on what you mean. "Everything is a file"? Sure, that metaphor can be put to rest.

That was never true in the first place. Very few things under Unix are actually represented as files (though credit to Linux for pursuing this idea in kernel-space more than most). But Plan 9 shows us the metaphor is worth expanding and exploring: it's how Plan 9 manages to be a reliable, performant distributed operating system with a fraction of the code required by other systems.

Kubernetes is more complex than a single Unix system. It is less complex than manually configuring multiple systems to give the same benefits of Kubernetes in terms of automatic reconciliation, failure recovery, and declarative configuration. This is because those three are first class citizens in Kubernetes, whereas they're just afterthoughts in traditional systems. This also makes Kubernetes much more maintainable and secure. Every workload is containerized, every workload has predeclared conditions under which it should run. If it drifts out of those parameters Kubernetes automatically corrects that (when it comes to reconciliation) and/or blocks the undesirable behaviour (security). And Kubernetes keeps an audit trail for its actions, something that again in Unix land is an optional feature.

My point is Kubernetes is a hack (a useful hack!) to synchronize multiple separate, different systems in certain ways. It cannot provide anything close to a single system image, and it can't bridge the discrete model of computation that Unix assumes.

This also makes Kubernetes much more maintainable and secure. Every workload is containerized, every workload has predeclared conditions under which it should run. If it drifts out of those parameters Kubernetes automatically corrects that (when it comes to reconciliation) and/or blocks the undesirable behaviour (security). And Kubernetes keeps an audit trail for its actions, something that again in Unix land is an optional feature.

All these features require a lot of code and complexity to maintain (the latest figure I can find is almost 2 million lines of code as of 2018). Ideally, Kubernetes is capable of what you said, in the same way that ideally programs can't violate Unix filesystem DAC or other user permissions, but in practice every line of code is another opportunity for something to go wrong...

Just because something has more security features doesn't mean it's actually secure. Or that it's maintainable without a company with thousands of engineers and tons of money maintaining it for you. Keeping you in a dependent relationship.

It also has negligible adoption compared to HTTP. And unless it provides an order of magnitude advantage over HTTP, then it's going to be unlikely that developers will use it. Consider git vs mercurial. Is the latter better than git? Almost certainly. Is it 10x better? No, and that's why it finds it hard to gain traction against git.

So? I don't expect many of these ideas will be adopted in the mainstream under the monopoly-capitalist market system. It's way more profitable to keep selling support to manage sprawling and complex systems that require armies of software engineers to maintain. I think if state investment or public research in general becomes relevant again, maybe these ideas will be investigated and adopted for their technical merit.

Even an online filesystem does not guarantee high availability. If I want highly available data I still need to have replication, leader election, load balancing, failure detection, traffic routing, and geographic distribution. You don't do those in the filesystem layer, you do them in the application layer.

"Highly available" is carrying a lot of weight there lol. If we can move some of these qualities into a filesystem layer (which is a userspace application on some systems) and get these benefits for free for all data, why shouldn't we? The filesystem layer and application layer are not 2 fundamentally separate unrelated parts of a whole.

Nice ad hominem. I guess it's rules for thee, but not for me.

Lol, stop being condescending and I won't respond in kind.

So what's the problem? Didn't you just say that the Unix way of doing things is outdated?

I think the reason the Unix way of doing things is outdated is cuz it didn't go far enough!

Dismissal based on flawed anecdote is preconception.

What? lol

It's not a flawed anecdote or a preconception. They had their own personal experience with a cloud tool and didn't like it.

You can't smuglord someone into liking something.

I'd rather hire an open-minded junior than a gray-bearded Unix wizard that dismisses anything unfamiliar.

I'm not a gray-bearded Unix wizard and I'm not dismissing these tools because they're unfamiliar. I have technical criticism of them and their approach. I think the OP feels the same way.

The assumption among certain computer touchers is that you can't use Kubernetes or "cloud" tools and not come away loving them. So if someone doesn't like them they must not really understand them!

It's hard to not take that as bad faith.

They probably could've said it nicer. It's still no excuse to dismiss criticism because you didn't like the tone.

I think Kubernetes has its uses, for now. But it's still a fundamentally limited and harmful (because of its monopolistic maintainers/creators) way to do a kind of distributed computing. I don't think anyone is coming for you to take your Kubernetes though...

👁 Imagine using any commercial firmware/hardware RNGs. :snowden:

Yeahh, you have a good point lol. Bash and the GNU ecosystem have developed their own sprawling problems.

I like Alpine Linux. You could also try OpenBSD if you want a Unix that just works without as much struggle. NetBSD and FreeBSD are also around and have Linux binary compatibility.

Xorg? Wayland? You have bespoke protocols just for windowed graphics? I'm happy with my /dev/draw and /dev/wsys/* :grillman:

Unix is a zombie OS that should probably die

I simply do both

It's not working :agony-deep:

Ohh that's true, I didn't think about that. It would be difficult to route anything through it unless you were connected directly to it with nothing in-between because no other router would forward packets destined for somewhere else to my machine (except maybe in the extremely unlikely case of source routing?). It seems obvious now lol, thank you!

I'll write some firewall rules just in case

Cygwin is great too! You can have a fully POSIX-compliant environment on Windows, no virtualization or anything needed. You can even distribute programs to other Windows users linked against its POSIX compatibility layer library.
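
A rough sketch of what that looks like (the file names are made up):

    $ gcc -o hello hello.c    # inside the Cygwin shell; POSIX APIs available as usual
    $ cygcheck ./hello.exe    # lists the DLLs it needs, including the Cygwin compatibility DLL
    # ship hello.exe together with that DLL to other Windows users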

Rip out the fan and connect the processor heatsink to a heatpipe

Then carry around a cup of water to dip the heatpipe into

This is not a bit, I am a real hardware designer

I see. Our motherboards have different chipsets (I have an X570 in mine). It probably has nothing to do with my issue...

Hoping those kernel parameters fix it. I wish I could help further. PCs are just a bottomless, mostly undocumented rabbithole :(

NTFS file locking is pain

Well Linux is using rdrand in place of the fTPM one so .. from firmware to hardware.

That depends on your distribution's setting of the CONFIG_RANDOM_TRUST_CPU compile-time configuration option and the random.trust_cpu kernel boot parameter. I'm not sure what the major distributions are doing with that at the moment.
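
If you want to check your own machine (this assumes your kernel exposes its configuration at /proc/config.gz):

    $ gzip -dc /proc/config.gz | grep RANDOM_TRUST_CPU   # compile-time default
    $ grep -o 'random.trust_cpu=[^ ]*' /proc/cmdline      # boot-time override, if any was set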

Then again even if you generate random numbers using pure software, is your CPU or firmware FOSS and without bugs (cough .. Debian OpenSSL maintainers, cough ..)? If not, and you assume you can't trust the firmware and hardware - all your random numbers are belong to us.

Like you said, it is impossible to be completely safe. But using proprietary cryptographic hardware/firmware, the inner workings of which are known only to Intel, introduces a lot of risk. Especially when we know the NSA spends hundreds of millions of dollars on bribing companies to introduce backdoors into their products. At least when it's an open source cryptographic library they have to go to great lengths to create subtle bugs or broken algorithms that no one notices.

Our CPUs are certainly backdoored too, beyond RDRAND. But it's way more complicated to compromise an arbitrary cryptographic algorithm running on the CPU with a backdoor than it is to make a flawed hardware RNG. Any individual operation making up a cryptographic algorithm can be verified to have executed properly according to the specification of the instruction set. It would be very obvious, for example, if XORing two 0s produced a 1, that something is very wrong. So a backdoor like this would have to activate only in very specific circumstances or it would be noticed, limiting its use to specific targets. But a black box that produces random numbers is very, very difficult to verify.

Ultimately, the real solution is the dissolution of the American security state and the computer monopolies.

If I'm fucked, they're fucked.

Not if they're the only ones who know about the backdoors.

Edit: I started writing that before your edit about the "Ken Thompson hack". Any good backdoor would include obfuscation of its existence, of course. The issue is that it's impossible to predict every possible permutation of operations that would result in discovery of the backdoor and account for them. Maybe if you had a sentient AI dynamically rewriting its own code... anyway, backdoors in tooling like compilers are very concerning. But I'm not too concerned about a Ken Thompson-type attack there just because of how widely compilers are used, how many different environments they run in, and how scrutinized their output is.

You can check which drivers were compiled as modules or built into your kernel by reading the kernel configuration at /proc/config.gz or /boot/*config*.
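
For example (the driver name here is just an illustration):

    $ gzip -dc /proc/config.gz | grep -i iwlwifi   # =y means built in, =m means built as a module
    $ lsmod                                         # modules actually loaded right now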

There might also be out-of-tree (not included with the kernel) drivers installed as packages on your system, but this is very rare outside of like... having an NVIDIA card and running the closed-source vendor driver.

In addition to the other answers, you might consider using getline() if you're on a POSIX-compliant system. It can automatically allocate a buffer for you.

What motherboard do you have? Also what happens exactly when the lock-ups happen? Have you ever been playing audio when the lock-ups happen and does it loop or stop or keep playing?

I recently had to "fix" (work around) a similar issue in the OpenBSD kernel with a specific hardware peripheral on my PC (running a 2nd-gen Ryzen), the High Definition Audio controller. For whatever reason (and only when I was running OpenBSD), interrupts from the HDA controller (to let the CPU know to refill audio buffers) would just randomly stop making it to the CPU, and audio would loop for a few seconds and then shut off. I spent a long time trying to figure out what caused it and reading Linux driver code, but I couldn't find a cause or why only OpenBSD would trigger it. I ended up having to write kind of a hacky polling mode into the HDA driver. My only guess is some of these AMD-chipset-having motherboards have faulty interrupt controllers.

Maybe there is a similar issue with your system and timer interrupts aren't making it to your CPU or something. But I'm not really an expert on PC architecture and idek if it even works like that on PCs lol

Sorry for so many questions but do you also have any kernel logs available from when this happens?
