restlessyet

@restlessyet@discuss.tchncs.de
0 Posts – 14 Comments
Joined 1 year ago

I'm surprised no one has mentioned Ansible yet. It's meant for exactly this (and more).

By SSH keys I assume you mean authorized_keys, not private keys. I agree with other posters that private keys should not be synced: just generate new ones and add them to the relevant servers' authorized_keys with Ansible.
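A minimal Ansible sketch of that approach, using the standard `ansible.builtin.authorized_key` module (the inventory group, user variable, and key path here are placeholder assumptions, not from the thread):

```yaml
# Push a freshly generated public key to every host in a hypothetical
# "servers" inventory group; user and key path are example values.
- hosts: servers
  tasks:
    - name: Add new public key to authorized_keys
      ansible.builtin.authorized_key:
        user: "{{ ansible_user }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"
```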


I'm using Organizr; it embeds your apps in tabs/iframes and lets you configure them in the UI.


It depends on your usage. If you are downloading hundreds of GB per month or more, a block account does not make sense.

Personally I get almost everything from torrents, so a few block accounts last me many years for the occasional use.

You guys are not downloading enough 😅

User statistics

All-time upload: 143.678 TiB

All-time download: 112.403 TiB

All-time share ratio: 1.27

No, it's personal preference.

From your description I would guess that the affected trackers have rate or connection limits, and your qBittorrent announces are exceeding them. Try setting a higher announce interval, like 1+ hours.


750 GB upload, 5 TB download per day. However, this seems to be a different limit, maybe a per-file sharing cap or something.

It matters only if "the Docker host's external IP" your DNS resolves is a public IP. In that case packets travel to the router, which needs to map/send them back to the Docker host's LAN IP (NAT reflection). With CGNAT this would need to be enabled on the carrier side, where you set up the port forwarding. If that's not possible, split-DNS may be an alternative.

If "the Docker host's external IP" is actually its LAN IP, all of that is irrelevant. Split-DNS is how you would accomplish that.
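If you go the split-DNS route, a single override on a LAN resolver such as dnsmasq does the job (hostname and LAN IP below are made-up examples):

```
# dnsmasq: answer LAN queries for this name with the Docker host's LAN IP,
# so local clients skip the router hairpin entirely
address=/myapp.example.com/192.168.1.50
```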

I ran into the same problem some months ago when my cloud backups stopped being financially viable and I decided to recycle my old drives. For offline backups, mergerfs will not work as far as I understand. Creating tar archives of 130 TB+ also doesn't sound like a good option. Some of the tape backup solutions looked possible, but they are often complex and use special archive formats...

I ended up writing my own solution in python using json state files. It's complete enough to run the backup, but otherwise very work-in-progress with no restore at all. So I do not want to publish it.
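For illustration, the core idea looks roughly like this (a simplified sketch, not my actual script; all names and the state layout are made up): hash each file, compare against the last run's JSON state, and record which offline drive received the copy.

```python
import hashlib
import json
from pathlib import Path


def load_state(path):
    """Load the JSON state mapping file paths to checksum + target drive."""
    p = Path(path)
    return json.loads(p.read_text()) if p.exists() else {}


def save_state(state, path):
    """Write the state back out as pretty-printed JSON."""
    Path(path).write_text(json.dumps(state, indent=2))


def file_checksum(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so large files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()


def files_to_backup(source_dir, state):
    """Yield (path, digest) for files that are new or changed since last run."""
    for p in sorted(Path(source_dir).rglob("*")):
        if p.is_file():
            digest = file_checksum(p)
            entry = state.get(str(p))
            if entry is None or entry["sha256"] != digest:
                yield p, digest


def record_backup(state, path, digest, drive_label):
    """Mark a file as copied to a given offline drive."""
    state[str(path)] = {"sha256": digest, "drive": drive_label}
```

Restore then boils down to reading the state file to find which drive holds which path, which is the part I never got around to.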

If you find a suitable solution I am also very interested 😅

Thank you, just subscribed.

Do you want to supplement your usenet sources? If so, what type of content/language are you looking for? Do you have any proof like screenshots of the old times left?

Things that should be answered before anyone would consider giving away an invite, because that is also a liability.

Are you hosting behind NAT / at home? If so, you may need to enable NAT reflection on your router.


I guess your OPNsense rule from Edit3 is not working because the source is not your mailu instance: connections are initiated from the outside and mailu only answers (TCP ACK). So you have asymmetric routing.

You may get this working if you set the "reply-to" option to the wg gateway on the firewall rule that allows VPS -> wg -> mailu traffic.

However there is a much cleaner solution using the PROXY protocol, which mailu seems to support: https://mailu.io/master/reverse.html

Their docs use Traefik, but nginx also supports the PROXY protocol.
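With nginx, that roughly looks like this, using the stream module's `proxy_protocol` directive (the IPs are made-up examples; check the mailu docs for which ports expect the PROXY header):

```nginx
# Sketch: relay SMTP from the VPS to mailu over the tunnel, prepending the
# PROXY protocol header so mailu sees the real client IP. IPs are assumptions.
stream {
    server {
        listen 25;
        proxy_pass 10.0.0.2:25;   # mailu behind the wg tunnel (assumed IP)
        proxy_protocol on;        # send the PROXY header to the upstream
    }
}
```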

If you only want online file storage and sync, you may want to try Seafile. It's a lot faster and has been rock solid for me for 10+ years. Not viable if you need some of the many Nextcloud extensions, though.