ssdfsdf3488sd

@ssdfsdf3488sd@lemmy.world
1 Post – 36 Comments
Joined 10 months ago

Because if you use relative bind mounts you can move a whole Docker Compose set of containers to a new host: `docker compose stop`, rsync the directory over, then `docker compose up -d`.

Portability and backup are dead simple.
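
For anyone who wants the concrete version, a minimal sketch of that move might look like this (paths, host name, and the DRY_RUN guard are all hypothetical; adjust for your own setup):

```shell
#!/bin/sh
# DRY_RUN=1 makes run() print each command instead of executing it,
# so you can sanity-check the sequence before doing it for real.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

STACK=/home/user/stacks/jellyfin   # one dir holds the compose file + bind mounts
DEST=user@newbox                   # the new docker host

# Stop the stack so the bind-mounted data is quiescent before copying
run docker compose --project-directory "$STACK" stop

# -a preserves ownership/permissions, which bind-mounted data needs
run rsync -a "$STACK/" "$DEST:$STACK/"

# Bring the stack up on the new host
run ssh "$DEST" "cd $STACK && docker compose up -d"
```

Because everything (compose file plus relative bind mounts) lives in one directory, the rsync is the entire migration.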

Get rid of iSCSI. Instead, use TrueNAS Scale for the NAS and use a zvol on TrueNAS to run a VM of Proxmox Backup Server. Run Proxmox on the other box with local VMs and just back up the VMs to Proxmox Backup Server at a rate you are comfortable with (e.g. once a night). Map NFS shares from TrueNAS directly into any docker containers you are running on your VMs, map CIFS shares to any Windows VMs, and map NFS shares directly to any Linux things. This is way more resilient, gets local NVMe speeds for the VMs, and still keeps the bulk of your files on the NAS, while also not abusing your 1 gigabit Ethernet for VM stuff, just for file transfer (the VM stuff happens at multi-gigabyte speeds on the local NVMe on the Proxmox server).
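
As a hedged illustration of the share mapping, the mounts on a Linux VM might look something like this (server name, export paths, and credentials are placeholders, and the DRY_RUN guard just prints the commands):

```shell
#!/bin/sh
# DRY_RUN=1: print the mount commands rather than executing them
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# Linux VM or docker host: mount an NFS export from the TrueNAS box
run mount -t nfs truenas.lan:/mnt/tank/media /mnt/media

# CIFS/SMB mount (the Windows-style share) from a Linux box, for comparison
run mount -t cifs //truenas.lan/media /mnt/media-smb -o username=me,password=secret
```

The same NFS exports can also be declared as named volumes in a compose file, so the containers never care which host they run on.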


In Firefox on Android I just flip the switch to request the desktop site and it's mostly fine...

I started on Planka but ended on Vikunja; it was just a lot nicer and more flexible for my needs.

That's the dude who was butt hurt about something this dude did: https://github.com/iamadamdev/bypass-paywalls-chrome

and so forked it and arguably does a better job, lol.

That's not the feature I would port to paperless. Paperless needs an O counter lol.

Just FYI, direct streaming isn't really direct streaming as you may think of it if you have specified Samba shares on your NAS instead of something on the VM running Jellyfin. It will still pull from the NAS into Jellyfin and then HTTP-stream from Jellyfin, which is super annoying.


It's really easy with headscale, so I assume it must be really easy with Tailscale too. How I did it: I created a tiny Tailscale VM to advertise the route to the IPs I wanted access to on my internal LAN. Then I shared the NFS share with the IP of that subnet router. Now everything on my headscale network looks like it's coming from the subnet router and it works no problem. (Just remember you have it set up this way in case you ever expand your userbase, as this is inherently insecure if there is anything connected to your tailnet that you don't want to have full access to your NFS shares.)
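
A rough sketch of the subnet-router side, assuming stock tailscale flags and a placeholder LAN range (the DRY_RUN guard just prints the commands):

```shell
#!/bin/sh
# DRY_RUN=1: print the commands instead of executing them
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# On the tiny VM: allow the kernel to forward packets for the LAN
run sysctl -w net.ipv4.ip_forward=1

# Advertise the LAN range to the tailnet (192.168.1.0/24 is a placeholder)
run tailscale up --advertise-routes=192.168.1.0/24

# The advertised route still has to be approved on the control server;
# with headscale that's something like:
run headscale routes enable -r 1
```

Once the route is approved, the NFS export only needs to trust the subnet router's single LAN IP.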

Just came here to say this: it works on a 10-dollar-a-year RackNerd VPS for me no problem. Matrix chugs on my much bigger VPS; although it is sharing that with a bunch of other things, overall it should have much more resources.

I used to host anonaddy. I don't have the docker compose or configs anymore, but I don't remember it being that bad. I stopped a couple years ago because SimpleLogin became included with my VPN subscription (and then I found Fastmail, which has a similar feature built in, so I ended up canceling SimpleLogin and that VPN and going to Fastmail and Mullvad). I basically just edited their example compose/env files and ran it behind my existing Nginx Proxy Manager setup (that is gone now too; I ended up moving to Traefik, but that's a story for another time). Compose example here: https://github.com/anonaddy/docker/tree/master/examples/compose


This is what I did too, after self-hosting anonaddy for a while. I really like how it integrates into Bitwarden to give me most of what I liked about anonaddy as an included thing. I also did it for the same reason: too many a-holes out there that just want to bang on the mail server all day.

I ended up on purelymail.com for my machine-sent email (it's dirt cheap; I think I will be under their minimum and it will cost something like 10 dollars a year for unlimited unique email addresses for my services).

Pretty sure that title is firmly held by McAfee, even now.

Never saw that on WireGuard once I found the better connections for my location, weird.

That's what I'm using right now. I am kind of curious if you are aware of any apk-based tiny operating systems like Alpine that also have systemd? I want to experiment with quadlets/podman but don't really want to lose how simple Alpine is to administer and how fast it boots.
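
For what it's worth, a minimal quadlet is just an ini-style unit; something like this (image and port are only an example; for rootless podman the file would live under `~/.config/containers/systemd/`, e.g. `whoami.container`):

```ini
# ~/.config/containers/systemd/whoami.container (hypothetical example)
[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

Then `systemctl --user daemon-reload` followed by `systemctl --user start whoami.service` should bring it up, since quadlet generates a `whoami.service` unit from the file name.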

Pretty much this. I don't even bother with watchtower anymore. I just run this script from cron pointed at the directory I keep my directories of active docker containers and their compose files:

```sh
#!/bin/sh
for d in /home/USERNAME/stacks/*/
do
  (cd "$d" && docker compose pull && docker compose up -d --force-recreate)
done

for e in /home/USERNAME/dockge/
do
  (cd "$e" && docker compose pull && docker compose up -d --force-recreate)
done

docker image prune -a -f
```

I use rss-bridge for the popular stuff, but I've found rss-funnel (https://github.com/shouya/rss-funnel) to be nicer for creating my own scrapes (mostly taking RSS feeds that link to the website instead of the article and adding a link to the article mentioned on the website).

Those are puny mortal numbers.... my backup nas is more than twice that.......

I am running Proxmox at a moderately sized corp. The lack of a real support contract almost kills it, which is too bad because it is a decent product.

50 watts is maybe half of one of my 10 gig switches...


Dell PowerConnect 8164s and Arista 7050TXs. House is wired with copper, so 10 gig copper is what I have to use, and that's power hungry.

Yah, my house is wired with copper and 10 gig copper uses a lot of power. It doesn't really help that the newer, slightly less power-hungry 48-port 10 gig switches are thousands of dollars. I'm using 100 to 150-ish watts per 10 gig switch to be able to buy the switch for under 500 bucks, instead of using 60-100 watts and paying 2-5k per switch...

What is KPW4?

Almost none, now that I automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh, and then it will probably drop to none.

I would argue it's the correct idea up to a fairly decently sized business, basically anything where you don't have the budget or the need for super fault-tolerant systems (i.e. where it's OK to very rarely have a 20-minute to one-hour outage in order to save $50k+ of IT hardware costs). You can take the above and go next step to a high-availability Proxmox cluster to further reduce potential downtime before you step into the realm of needing VMware and very expensive, highly available, fast storage as well. It gets even more true when you start messing around with TrueNAS and different-speed vdevs (i.e. build a super fast NVMe one with 10-25 gig networking for some applications, and a cheaper spinning-rust one with maybe 10 gig networking for bulk storage). It's also nice that, by running Proxmox Backup Server on a zvol, you can take advantage of all the benefits of both ZFS replication/snapshotting and cloud (a Storj/Wasabi S3 bucket, or another TrueNAS server at a different location) for that zvol as well as your other data you are sharing as datasets.

I really like TrueNAS for NAS, but I agree with you on running VMs/Docker somewhere else. I ended up keeping TrueNAS for the mass storage (the only thing I run on it is one virtual machine to hold Proxmox Backup Server on a zvol). I think the much better home platform for VMs is Proxmox. You get a really nice GUI that makes everything pretty easy, it's Debian under the hood, and with Proxmox Backup Server you can very easily back up your virtual machines. It's also very easy to mount NFS or CIFS shares into docker containers, so you can keep the bulk data of your docker environment directly on the NAS, which makes managing backups dead simple.

Yah, you need an ideally clean static IP, because that is what is used for reputation stuff like SPF/DMARC/DKIM. I hosted this on a tiny VPS.
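
For illustration, the reputation records are just DNS TXT entries; hypothetical zone snippets might look like this (domain, IP, selector, and key are all placeholders):

```
; SPF: only this IP may send mail for the domain
example.com.                    IN TXT "v=spf1 ip4:203.0.113.7 -all"
; DMARC: what receivers should do with failures, and where to send reports
_dmarc.example.com.             IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
; DKIM: public key for the "default" selector
default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public-key>"
```

A dirty IP fails these reputation checks in spirit even when the records are correct, which is why the clean static IP matters.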

How does this compare to Baby Buddy (https://github.com/babybuddy/babybuddy)? I used Baby Buddy for my two kids, it was great.

will it let you do rootless nfs mounts into the container? That's the showstopper for me, as that is by far the best way to just make this all work within the context of my file storage.

Virtualize the machine with Proxmox, use Proxmox Backup Server, and load the VM on a new system if you get a catastrophic failure on the machine currently running the VM.
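
A hedged sketch of that workflow with the stock Proxmox tools (VM id, storage name, and archive path are placeholders; the DRY_RUN guard just prints the commands):

```shell
#!/bin/sh
# DRY_RUN=1: print the commands instead of executing them
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# Back VM 101 up to a Proxmox Backup Server storage named "pbs"
run vzdump 101 --storage pbs --mode snapshot

# After a hardware failure, restore the same backup onto a replacement host
# (the archive name comes from the PBS-backed storage's content listing)
run qmrestore pbs:backup/vm/101/2024-01-01T00:00:00Z 101
```

Since PBS deduplicates, nightly full backups of the VM stay cheap, and the restore target host only needs Proxmox plus access to the same PBS storage.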

That's pretty much exactly my story, except I went with fastmail.com and Mullvad for VPN (you really need to test with some script to find your best exit nodes; I forget which one I used ages ago, but it found me a couple of nodes about 1000 km away from my location, in a different country, that I can do nearly a gig through routinely. Maybe it was this script? https://github.com/bastiandoetsch/mullvad-best-server). I went with pCloud for a bit, but Tailscale, and now Netbird, make it kind of irrelevant since it's so easy to get all my devices able to communicate back to my house file server. I want to like Hetzner so bad, but every time I try it the latency to North America just kills me, and the North American offering was really far away and undeveloped last time I tried it.


Have you looked at dockge? I like it way more than Portainer, at least for single instances. It works with normal compose files so it keeps your stuff a lot more compatible to change, and it's by the guy who makes Uptime Kuma.


Sorry about that, my reply was from my phone and therefore terrible. Here's the app: https://github.com/louislam/dockge

Jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores the shared network folder and has Jellyfin stream it over HTTP. Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files are on a NAS separate from the docker instance, so that you avoid streaming the data from the NAS to the Jellyfin container on a different computer and then back out to the third computer/phone/whatever that is the client. This matters when the NAS has a beefy network connection but the virtualization server has much less, or is sharing it among many VMs/containers (i.e. I have 10 gig networking on my NAS, 2.5 gig on my virtualization servers that is currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). They have the correct settings to do this right built into Jellyfin, and yet they snatched defeat from the jaws of victory (a common theme for Jellyfin, unfortunately).

Dockge is amazing for people that see the value in a GUI but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like Portainer does. You decide you don't like dockge? You just go back to the CLI and do your docker compose up -d --force-recreate.

You need to create a docker-compose.yml file. I tend to put everything in one dir per container, so I just have to move the dir around somewhere else if I want to move that container to a different machine. Here's an example I use for Picard, with examples of NFS mounts and local bind mounts using paths relative to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind mount dirs in that same directory, and adjust YOURPASS and the mounts/NFS shares, and it will keep working everywhere you move the directory, as long as the host has docker and an image is available for its architecture.

```yaml
version: '3'

services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```

Does yours have 8 SATA ports or dual external SFF-8088 ports perchance, and more RAM?