oranki

@oranki@sopuli.xyz
0 Posts – 44 Comments
Joined 1 year ago

I started using gestures, and haven't been able to transition away since.

Both have their pros and cons.

This must be related to people in their 20's not knowing how to read a traditional clock anymore.

In my limited experience, when Podman seems more complicated than Docker, it's because the Docker daemon runs as root and can by default do stuff Podman can't without explicitly giving it permission to do so.

99% of the stuff self-hosters run on regular rootful Docker can run with no issues using rootless Podman.

Rootless Docker is an option, but my understanding is most people don't bother with it. Whereas with Podman it's the default.

Docker is good, Podman is good. It's like comparing distros, different tools for roughly the same job.

Pods are a really powerful feature though.
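
Rough sketch of what a pod looks like in practice, in case it helps (the names and images are just placeholders I picked for the example):

  podman pod create --name mypod -p 8080:80
  podman run -d --pod mypod --name web docker.io/library/nginx
  podman run -d --pod mypod --name cache docker.io/library/redis

Both containers share the pod's network namespace, so they talk to each other over localhost and only the port published on the pod is exposed to the host.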

Imagine if all the people who prefer systemd would write posts like this as often as the opposition. Just use what you like, there are plenty of distros to choose from.

For somewhat enhanced log file viewing, you could use something like lnav; I think it's packaged for most distributions.

Cockpit can be useful for journald, but personally I think GUI stuff is a bit clunky for logs.

Grep, awk and sed are powerful tools, even with only basic knowledge of them. Vim in readonly mode is actually quite effective for single files too.

For aggregating multiple servers' logs, good ol' rsyslog works well, but it's not simple to set up. There are tutorials online.
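
The core of it is roughly this, though (hostnames and file paths here are made up, check your distro's defaults): a forwarding rule on each client and a TCP listener on the central server.

  # client, e.g. /etc/rsyslog.d/forward.conf
  *.* @@logserver.example.lan:514

  # server, e.g. /etc/rsyslog.d/remote.conf
  module(load="imtcp")
  input(type="imtcp" port="514")

A single @ forwards over UDP, @@ over TCP.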

I recently put the nvidia variant of ublue-os on my work laptop, which has Optimus graphics. Couldn't be happier.

It's great to see these variants popping up! I really think ostree may be the future for desktop Linux, and not even very far away.

Remember to check the polarity of the plug too. Some have + in the center pin, others have -

Most likely, a Hetzner storage box is going to be so slow you will regret it. I would just bite the bullet and upgrade the storage on Contabo.

Storage in the cloud is expensive, there's just no way around it.

This is true: with a couple gigs of RAM and SATA storage, Nextcloud is not at all bad, assuming an instance without that many simultaneous users.

It feels slow sometimes, but after an hour with M365 at work it doesn't feel slow at all.

Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

Docker is not the only, or even the best, way IMO to run containers. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I do it already for quite a few.

The mess is only a mess if you don't really understand what you're doing, same goes for traditional services.

I'd go the SSH + sudo way.

Sudo can be quite finely tuned to only allow specific commands. If you want to lock the SSH session further, look into rbash.
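
As a sketch, a sudoers drop-in like this (the user and commands are placeholders) limits the account to a couple of specific commands:

  # /etc/sudoers.d/deploy  (edit with visudo -f)
  deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service, /usr/bin/journalctl -u myapp.service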

on surface they may look like they are overlapping solutions to the untrained eye.

You'll need to elaborate on this, since AFAIK Podman is literally meant as a replacement for Docker. My untrained eye can't see what your trained eye can see under the surface.

I have a feeling you are overthinking the Matrix key system.

  • create account
  • create password you store somewhere safe
  • copy the key and store somewhere safe
  • when signing on a new device, copy-paste the key

Basically it's just another password, just one you probably can't remember.

Most of the client apps support verifying a new session by scanning a QR code or by comparing emoji. The UX of these could be better (I can never find the emoji option on Element, but it's there...). So if you have your phone signed in, just verify the sessions with that. And it's not like most people sign in on new devices all the time.

I'd give Matrix a new look if I were you.

In Finland synchronization in gearboxes is starting to become a thing nowadays. Double clutching for 20 years now (38).

Just kidding, got my first automatic two years ago, so yes.

Perhaps I misunderstand the words "overlapping" and "hot-swappable" in this case; I'm not a native English speaker. To my knowledge they're not the same thing.

In my opinion wanting to run an extra service as root to be able to e.g. serve a webapp on an unprivileged port is just strange. But I've been using Podman for quite some time. Using Docker after Podman is a real pain, I'll give you that.
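
For reference, if you really want a rootless container to bind a low port directly, a sysctl is enough, no root daemon needed (the port range is just an example):

  # allow unprivileged processes to bind ports from 80 upwards
  sudo sysctl net.ipv4.ip_unprivileged_port_start=80

Most of the time it's simpler to publish the container on something like 8080 and let a reverse proxy handle 80/443.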

Rsyslog to collect logs to a single server, then lnav for viewing them on that server is a good combo. Oldschool but very effective for self-host scale.
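
Day-to-day it's about as simple as it gets, assuming rsyslog writes the remote logs under a per-host directory (the paths are just examples):

  lnav /var/log/remote/
  # or quick and dirty without lnav
  grep -i error /var/log/remote/*/syslog.log | less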

Glad the tip was useful!

There's a base image of ublue, which is Silverblue without a DE. I'd suppose you can mostly just layer e.g. Sway or i3 on top.
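
Layering is one command, something like this (the package names are a guess, check what your distro calls them):

  rpm-ostree install sway waybar
  systemctl reboot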

The traditional package model will still have its uses, of course, I agree. But if Silverblue works for a developer like me, I'd say that for more "regular" users immutable distros seem like a very viable option.

I used to run everything with Pis, but then got an x86 USFF to improve Nextcloud performance.

With the energy price madness last year in Europe, I moved most things to cloud VPSs.

One Pi is still running Home Assistant, hooked to my heating/ventilation unit via RS485/modbus.

I had a ZFS backup server with 2 HDDs hooked up over USB to a Pi 8GB. That is just way too unreliable for anything serious, I think I now have a lot of corrupted files in the backups. Looking into getting some Synology unit for that.

For anything serious that requires file storage, I'd steer clear of USB or SD cards. After getting used to SATA performance, it's hard to go back anyway. I'd really like to use the Pis, but family photo backups turning gray due to bitflips is unacceptable.

They are a great entrypoint to self-hosting and the Linux world though!

Even though you said "isn't Nextcloud", I'd still say it's perhaps the simplest solution.

You can disable most of the other apps and set the calendar as the landing page. If you don't use the other features, the resource usage is very low, just a cron job that does basically nothing. I don't think disabling the default apps has much effect on the footprint, by the way.

Calendar, contacts and notes are why I still self-host Nextcloud. Just remember to pay/donate to DAVx5, they're one of the projects that need to keep running!

Wireguard runs over UDP, and the port is indistinguishable from closed ports for most common port-scanning bots. Changing the port will obfuscate the traffic a bit. Even if someone manages to guess the port, they'll still need to use the right key, otherwise there's simply no response, the same as from a closed port. Your ISP can still see that it's Wireguard traffic if they happen to be looking, but can't decipher the contents.

I would drop containers from the equation and just run Wireguard on the host. When issues arise, you'll have a hard time identifying the problem when container networking is in the mix.
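
A minimal server-side config with a non-default port looks roughly like this (keys and addresses are placeholders):

  # /etc/wireguard/wg0.conf
  [Interface]
  Address = 10.8.0.1/24
  ListenPort = 51821   # anything free, doesn't have to be 51820
  PrivateKey = <server private key>

  [Peer]
  PublicKey = <client public key>
  AllowedIPs = 10.8.0.2/32

Bring it up with wg-quick up wg0. Probes to that UDP port that don't carry a valid key get no answer, just like a closed port.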

There was a good blog post about the real cost of storage, but I can't find it now.

The gist was that to store 1TB of data somewhat reliably, you probably need at least:

  • mirrored main storage 2TB
  • frequent/local backup space, also at least mirrored disks 2TB + more if using a versioned backup system
  • remote / cold storage backup space about the same as the frequent backups

Which amounts to something like 6TB of disk for 1TB of actual data. In real life you'd probably use some other level of RAID, at least for larger amounts so it's perhaps not as harsh, and compression can reduce the required backup space too.

I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there's local backups on a mirrored HDD, with the ZFS snapshots that are not yet pruned that's maybe 200G of raw disk space. So 130G becomes 510G in my setup.
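
Roughly, it adds up like this in my case:

  130 G  live Nextcloud data
  180 G  off-site borg repo
  200 G  local backup on the mirrored HDD, raw, incl. unpruned ZFS snapshots
  -----
  510 G  of disk for 130 G of data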

They could explain things better, you are right. I actually think I remember having almost the exact same confusion a few years back when I started. I still have two keys stored in my pw manager, no idea what the other one is for...

The decryption has gotten much more reliable in the past year or two, I also try out new clients a lot and have had no issues in a long time. Perhaps you could give it a new go, with the info that you use the same key for all sessions.

Rent a cheap VPS and ask your friends to chip in the 1-2 units of local currency per month. Run a DNS over HTTPS server on the VPS (Adguard Home can do it, I'm not sure about PiHole), then just use browsers that can use a custom DoH resolver.

Don't open port 53 to the public, that's just asking for trouble. The bonus with this is the adblocking is in use on the go as well, and you can use the same server yourself.
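
In Firefox the custom resolver goes into the DNS over HTTPS settings (or about:config); the URL is obviously a placeholder for your own VPS:

  network.trr.mode  3    # 3 = DoH only, 2 = DoH with fallback to normal DNS
  network.trr.uri   https://dns.example.com/dns-query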

You are right, and the fact federation is perhaps overplayed or emphasized when talking about something like Lemmy doesn't help.

The regular users don't care, as long as the content is available. Which unfortunately isn't quite the case yet (with no disrespect to the developers; I think Lemmy is something I'll stick with for a good while).

Thank you!

Looking forward to Cosmic DE from Pop!_OS, they're integrating tiling functionality into it.

https://blog.system76.com/post/cosmic-de-tiling-redesign-and-libcosmic-rebasing

Cloudflare has several reverse proxies all around the world. When you enable their proxy service, CF decides which proxy is used for your traffic. To be able to control this better, they need to have control over the DNS record.

If you have an issue with changing your domain's nameservers (perfectly valid), my guess is you'll also have an issue with the fact that using CF proxy essentially means they are a man-in-the-middle for all your HTTPS traffic and decrypt everything before proxying it forward.

The reason for having to use their nameservers is probably about getting some data in the process. But DNS queries are quite harmless compared to the MITM issue for the actual traffic.

Traffic proxied via CF uses their TLS certificates. Look up how HTTPS works, and you'll understand that it means the encryption is terminated at Cloudflare.

For the record, CF DNS infrastructure is really solid. For something already public anyway, I'd use their services in a heartbeat. You get some WAF features and can add firewall rules like geoblocking, even on the free tier.

For sensitive data, I probably wouldn't use the proxy service.

A CNAME is just a DNS record that points to another DNS record, technically they could allow it for free users too.

I'd guess the point is they get info on what free users do with their DNS, to help make their paid services more appealing.
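
In zone-file terms a CNAME is literally one line (names made up):

  www.example.com.   300   IN   CNAME   example-lb.some-cdn.net.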

No offense, but you might be seriously overthinking this.

The setup isn't actually hard at all, if you understand the concepts. Keeping off blacklists is the hard part, as big providers often block entire IP ranges due to one bad actor.

Edit: I meant sometimes your server gets blacklisted for something some neighboring server did

+1 for rootless Podman. Kubernetes YAMLs to define pods, which are started/controlled by systemd. SELinux for added security.

Also +1 for not using auto updates. Using the latest tag has bitten me more times than I can count; now I only use it for testing new stuff. All the important services have at least the major version as the tag.
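
The workflow is roughly this, if anyone's curious; the YAML is the plain Kubernetes Pod format and the image tag is pinned instead of :latest (names and versions are just examples):

  # web.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: web
  spec:
    containers:
      - name: nginx
        image: docker.io/library/nginx:1.25
        ports:
          - containerPort: 80
            hostPort: 8080

  podman kube play web.yaml        # podman play kube on older versions
  podman generate systemd --new --files --name web

Newer Podman also has Quadlet for the systemd side, but generate systemd works fine too.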

Plain NGINX has served me well.

Most Debian based distros, actually.

I'd second this. Fedora is great, don't get me wrong, but it's not rolling or stable.

I think stable was referring to not crashing here.

No need to apologize.

You'd create a CNAME for myservice.mydomain.com, that points to proxynearorigin.cloudflare.com.

proxynearorigin.cloudflare.com contains the A and AAAA records for the reverse proxy servers. When you do a DNS query for myservice.mydomain.com, it will (eventually) resolve to the CF proxy IPs.

The CF proxies see from the traffic that you originally requested myservice.mydomain.com and serve your content based on that. This still requires you to tell Cloudflare where the origin server is so the reverse proxies can connect to it.

On the free service instead of the CNAME you set the origin server's IP as the A and/or AAAA record. Enabling the proxy service actually changes this so that when someone makes a DNS query to myservice.mydomain.com they get the proxy addresses straight as A and AAAA records, leaving the IP you originally configured known only to Cloudflare internally.

It's hard to explain this, and since I don't work at Cloudflare the details may be off too. The best way to get an idea is to play around with something like NGINX and run a local DNS server (Bind, Unbound, dnsmasq, PiHole...) and see for yourself how the DNS system works.
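
If you want to poke at it yourself, dig makes the difference visible (the domain is a placeholder):

  dig +short myservice.mydomain.com   # with the proxy on, this returns Cloudflare's IPs, not your origin
  dig +short mydomain.com NS          # the authoritative nameservers, i.e. Cloudflare's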

CDN isn't really related to DNS at all. In the case of the CF free tier, it's actually more like caching static content, which is technically a bit different. A CDN is a service that replicates said static content to multiple locations on high-performance servers, allowing the content to always be served from close to the client. Where DNS comes in is that Anycast is probably used, and cdn.cloudflare.com actually resolves to different IPs depending on where the DNS query is made from.

There's also the chance that I don't actually know what I'm talking about, but luckily someone will most likely correct me if that's the case. :)

Did a bit of research and found out the feature is available on Fennec F-Droid too via about:config.

Here's how to enable: https://community.mozilla.org/en/campaigns/firefox-cookie-banner-handling/

I'm in the same boat. I use Cloudflare email routing to route mail for my domain to Gmail. That covers the inbound email, CF routing provides a catch-all option and you can direct individual addresses to different inboxes.

For outbound, just use any provider that gives you SMTP for a custom domain. I used Zoho for a while and recently went back to running my own server for outbound. In the Gmail web interface, you can add other addresses to send mail as, using external SMTP servers.

All of this is of course not very good privacy-wise. Both CF and Google can read your mail... But putting that aside, the setup works really well. You can get your custom domain into Gmail for the price of a cheap email service. Zoho is around 10€/year, but you could even use something like Amazon SES; I understand that with low volumes it's practically free.

I thought about just forwarding from my own MX to Gmail, but that may cause problems if spam gets forwarded. SPF + DKIM setup is simple for traditional use, but forwarding all mail requires the original headers to be included in the forwarded mails; it seemed like CF probably knows how to handle that better than me.
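
For reference, the outbound side boils down to two DNS records; the exact values are placeholders and depend on which SMTP provider you use:

  ; SPF: which servers may send as your domain
  mydomain.com.                        TXT  "v=spf1 include:spf.example-smtp.com ~all"
  ; DKIM: public key published under a selector your provider gives you
  selector1._domainkey.mydomain.com.   TXT  "v=DKIM1; k=rsa; p=MIIBIjANBg..."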

I wish I knew about Photon before. Just spun up my own instance and loving it!

Devuan is more stable

So Devuan has even older versions of packages than Debian? Stability in the distro context means that features, APIs and UIs don't change. Please don't conflate software bugs with stability.

It may be I've entirely misunderstood how systemd works, but I think your description of it is off by a mile too.

but a different init starts a new process ID for each separate program

Of course there are PIDs with systemd too! First of all, systemd itself has a PID (1).

For systemd, which runs system wide to handle everything, if one program locks, systemd has to make adjusts for the whole system to fix the problem.

This is just wrong... Sure, if the service in question is dependent on a lot of other services, or vice versa. If your programs tend to lock, that's the application's fault and should be handled at the application level.
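
Easy to check on any systemd machine, every service is its own process (the service name is just an example):

  systemctl show -p MainPID sshd.service   # the PID of that one service
  systemd-cgls                             # full process tree grouped by unit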

I found Artix to run smoother or lighter than Arch.

This is most definitely down to what else is running on the system. Systemd doesn't really use that many resources, unless you are measuring RAM usage in megabytes. Which is of course valid on constrained systems, but on a regular desktop one browser tab will need orders of magnitude more resources than any init system.

I want Firefox running an isolated process from the one that Plasama desktop is running

This just shows you have absolutely no clue about Linux processes. I really, really doubt anyone is running Firefox under systemd, and neither are you.

There are valid reasons for choosing a different init system, but you have not provided a single one that is really true. It seems like you are only repeating things heard from someone else.

The difference is systemd is one thing to handle everything

This is true, but it refers to systemd handling a lot more than process management. Systemd has the problem that nowadays it does log management, memory management, login management, user management etc. This goes against the UNIX philosophy of one tool for one job, and THAT is why people frown on systemd.

So Chrome and Google VPN? /s