BB_C

@BB_C@programming.dev
0 Posts – 24 Comments
Joined 1 year ago

A reminder that the Servo project resumed active development at the start of 2023, and has been making good progress every month.

If you're looking for a serious in-progress effort to create a new open, safe, performant, independent, and fully-featured web engine, that's the one you should be keeping an eye on.

It won't be easy trying to catch up to continuously evolving and changing web standards, but that's the only effort with a chance.


Yesterday I was browsing /r/programming

:tabclose

While pure Python code should work unchanged, code written in other languages or using the CPython C API may not. The GIL was implicitly protecting a lot of thread-unsafe C, C++, Cython, Fortran, etc. code - and now it no longer does. Which may lead to all sorts of fun outcomes (crashes, intermittent incorrect behavior, etc.).

:tabclose

Meh, everyone scaring you into thinking you don't own your own mind.

Assuming your boss is not the dangerous kind (beyond legal threats), and if the goal is to make it FOSS, then do it using an alias first. Do it differently. Use components/libs/algos from other people at first, even if they are not perfect. Make those parts easily pluggable/replaceable, which would be good design anyway. The code then wouldn't be wholly yours, not even your alias self's.

You can join the project later with your real identity as an interested domain expert (maybe a bit after not working for the same boss). Start contributing. Become a maintainer. And maybe take over after a while. You can start replacing non-optimal components/libs/algos with better ones piecemeal.

Oh, and if Rust wasn't the choice of implementation, use it this time.


I for one am happy we’re getting an alternative to the Chrome/Firefox duality we’re stuck with.

Anyone serious about that would be sending their money towards Servo, which resumed active development at the start of 2023 and has been making good progress every month.

I would say nothing but "Good Luck" to other from-scratch efforts, but it's hard not to see them as memes with small cultist followings living on hope and hype.


What's needed is renewed ethos, not just fresh blood.

What's needed is people who actually like the projects, on the technical level, and use them daily. Not people who are just trying to maintain an open-source "portfolio" they can showcase in pursuit of landing big corpo job.

A "portfolio" which also needs to, in their mind, project certain culture war prioritizations and positionings that are fully in line with the ones corpos are projecting.

It will be interesting to see how much of the facade of morality will remain if these corpo projections change, or when the corpo priorities and positionings, by design, don't care, at best, about little unimportant stuff like American-uniparty-assisted genocide! We got to see murmurs of that in the last few months.

Will the facade be exposed, or will it simply change face? What if a job was on the line?

I'm reminded of a certain person with the initials S.K., who was a Rust official and a pretend Windows user, in hopes of landing a Microsoft job (he pretty much said as much). He was also a big culture-war-style moral posturer, and a post-open-source-world hypothesiser.

Was it weird for such a supposed moral "progressive" to be a big nu-Microsoft admirer? And one who used his position to push the idea that anyone who maintained a classical open-source/free-software position towards Microsoft was a fanatic? No, it wasn't. He was one of many, after all.

All these things go hand in hand. And if you think this is a derailing comment that went way off the rails, then I hope you maintain the same position about the effects of all this on the open-source and free-software world itself.

Federation is irrelevant. Matrix is federated, yet most communities and users would lose communication if matrix.org went offline.

With transport-only distributability, which I think is what Radicle offers, availability would depend on the peers. That probably means less availability than a big service host.

Distributed transport and storage would fix this, à la Tahoe-LAFS or (old) Freenet/Hyphanet. And no, IPFS is not an option, because it's generally a meme, it's pull-based, and it has availability/longevity problems with metadata alone. iroh claims to be less of a meme, but I don't know if they fixed any of the big design (or rather lack-of-design) problems.

At the end of the day, people can live with GitHub/GitLab/... going down for a few minutes every other week, or 1-2 hours every other month, as the benefits outweigh the occasional inconvenience by a big margin.

And git itself is distributed anyway. So it's not like anyone was cut from committing work locally or pushing commits to a mirror.
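As a sketch of what that means in practice (none of this is from the original comment; the paths are temp-dir stand-ins, and a local bare repo plays the role of a mirror host):

```shell
# git is distributed: any clone can commit locally and push to any
# reachable remote, forge or not. A local bare repo stands in for a mirror.
tmp=$(mktemp -d)

git init -q "$tmp/work"
git -C "$tmp/work" -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "committed locally while the forge is down"

# a bare repo standing in for a mirror host
git init -q --bare "$tmp/mirror.git"
git -C "$tmp/work" remote add mirror "$tmp/mirror.git"
git -C "$tmp/work" push -q mirror HEAD:main
```

So an outage of the primary forge costs you nothing but the hosted extras (issues, CI, reviews); the history itself is already everywhere you pushed it.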

I guess waiting on CI runs would be the most relevant inconvenience. But CI isn't a part that any existing service/implementation distributes, or could distribute without quickly being gravely abused.

don’t do it during working hours (especially commits - if you’re paranoid, use tor)

I wanted to mention not using personal emails or committing from home IP addresses, but thought that was needless to say.

Gitorious-ed too fast.
Let's see if you'll get a team around federation now.

start a process within a specific veth

That sentence doesn't make any sense.

Processes run in network namespaces (netns), and that's exactly what ip netns exec does.

A newly created netns via ip netns add has no network connectivity at all. Even (private) localhost is down and you have to run ip link set lo up to bring it up.

You use veth pairs to connect a virtual device in a network namespace with a virtual device in the default namespace (or another namespace with internet connectivity).

You route the VPN server address via the netns veth device and nothing else. Then you run wireguard/OpenVPN inside netns.

Avoid launching things via systemd, since the service manager runs in the default netns, so anything it starts lands there by default, even if it was invoked from a process running in another netns.
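The wiring described above can be sketched roughly like this (root/CAP_NET_ADMIN required; the namespace name, interface names, and addresses are all illustrative, and 203.0.113.1 is a documentation address standing in for the real VPN server IP):

```shell
# create the namespace and bring up its loopback (it starts down)
ip netns add AA
ip netns exec AA ip link set lo up

# veth pair: one end moves into AA, the other stays in the default netns
ip link add veth-host type veth peer name veth-aa
ip link set veth-aa netns AA
ip addr add 10.10.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec AA ip addr add 10.10.0.2/24 dev veth-aa
ip netns exec AA ip link set veth-aa up

# route only the (stand-in) VPN server address through the veth;
# everything else inside AA stays unroutable until the tunnel is up
ip netns exec AA ip route add 203.0.113.1/32 via 10.10.0.1
```

From there, wireguard/OpenVPN runs inside AA and becomes the namespace's only path out (you'd likely also want NAT or forwarding on the host side for the server route, which this sketch omits).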

The way I do it is:

  1. A script for all the network setup:
ns_con AA
  2. A script to run a process in a netns (basically a wrapper around ip netns exec):
ns_run AA
  3. Run a terminal app using 2.
  4. Run a tmux session on a separate socket inside the terminal app. e.g.
export DISPLAY=:0 # for X11
export XDG_RUNTIME_DIR=/run/user/1000 # to connect to already running pipewire...
# double check this is running in AA ns
tmux -f <config> -L NS_AA

I have this in my tmux config:

set-option -g status-left "[#{b:socket_path}:#I] "

So I always know which socket a tmux session is running on. You can include network info there if you're still not confident in your setup.

Now, I can detach that tmux session. Reattaching with tmux -L NS_AA attach from anywhere will give me the session still running in AA.

Weak use-case.
Wrong solution (IMHO).

If one must use a header for this, how Zapier or Clearbit do it, as mentioned in appendix A.2, is the way to go.

Bloating HTTP and its implementations for REST-specific use-cases shouldn't be generally accepted.


Alright. Explain this snippet and what you think it achieves:

tokio::task::spawn_blocking(move || -> Result { Ok(walkdir) })

You don't even need full-fledged containers for that btw.

Learn how to script with ip netns and veth.


Post the original code to !rust@programming.dev and point to where you got stuck, because that AI output is nonsensical to the point where I'm not sure what excited you about it. A self-contained example would be ideal; otherwise, include the crates you're using (or the use statements).


Keep (Neo)Vim out of this.

Bringing vimperator/pentadactyl back! That would be the dream.

Anyway, last time I tested it (~3 weeks ago), Servo was still not very usable with the few websites I tried. Hopefully it gets there, at least partway, in a few months.

Definitely don't use axum, which provides a simple interface for routes by using derived traits. Their release cycle is way shorter, which makes them more dangerous, and they're part of the same github user as tokio, which means they're shilling their own product.

This, but (semi-)unironically.

@FizzyOrange@programming.dev

BTW, the snippet I pointed to, and the whole match block, aren't incoherent. They're useless.

Yeah, sorry. My comment was maybe too curt.

My thoughts are similar to those shared by @Domi in a top comment. If an API user is expected to be wary enough to check for such a header, then they would also be wary enough to check the response of an endpoint dedicated to communicating such deprecation info, or wary enough to notice API requests being redirected to a path indicating deprecation.

I mentioned Zapier or Clearbit as examples of doing it in what I humbly consider the wrong way, but still a way that doesn't bloat the HTTP standard.

The Rust hype at least makes sense.

In technical context, yes. I'm a Rustacean myself.
In business/marketing context, ...


Proper HTTP implementations in proper languages utilize header-name enums for strict checking/matching, and for performance, e.g. by skipping unnecessary string allocations, not keeping known strings around, etc. Every standard header name has to be added as a variant to such enums, and its string representation as a constant/static.

Not sure how you thought that's equivalent to random JSON field names.


You just referenced two languages that don't have proper sum types. lol.

Also mentioning Microsoft tech while a certain world event is taking place right now. lol.


There is a YouTube video on Servo's homepage.
The first minutes of that video answer your question.

It's not you who needs it.
It's for buzzword chasers and cost cutters.

Rust (=> fast and hip)
Shared (=> outsourced)
AI generated (=> robot devs)

Get it?
