tvcvt

@tvcvt@lemmy.ml
0 Posts – 85 Comments
Joined 1 year ago

I can’t give direct experience here, but this is exactly the use case I’ve been meaning to spin up mailpiler for: https://www.mailpiler.org/. One of these days that will rise to the top of the priority list.

I think it’s just a matter of getting used to it. I had the same issue at first and the more I used the command line, the more I started to prefer it to GUI apps for certain tasks.

A couple things that I use all the time:

  • tab completion is incredible
  • cd - goes back to the last directory you were in (useful for bouncing back and forth between locations)
  • !$ means the last argument. So if you ls ~/Downloads and then decide you want to go there, you can cd !$.
  • :h removes the last piece of a path. So I can do vim /etc/network/interfaces and then cd !$:h will take me to /etc/network.
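
Putting a couple of those together, a quick shell session looks like:

    $ ls ~/Downloads                 # poke around
    $ cd !$                          # !$ expands to ~/Downloads
    $ cd -                           # hop back to wherever you were
    $ vim /etc/network/interfaces
    $ cd !$:h                        # :h drops the filename, so this is cd /etc/network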

Who says you can only get one? Don’t let the perfect be the enemy of the good; just get one of the fun ones you already came up with and in the future if you need a different one get that too. That’s been my approach, anyway.

Sorry to say I’ve never heard of spaceship, but wanted to make sure you know that Cloudflare now has a registrar service, so if you’re already using them for DNS, that might be worth a look for you.


I’m a huge Debian fan, but I’d say everyone should give openSUSE a shot. It’s a well thought out distro that doesn’t get enough love.

It sounds like you’re seeing a few different issues here, and it makes me wonder if there’s some hardware issue causing some of this or if the installation is botched (though it’d be odd for that to hose two different distros).

Last time I looked, Debian didn’t include sudo by default, so you’d have to install it first. To add yourself to the sudo group, log in as root and run usermod -aG sudo mariah (assuming that’s your username). Then reboot (logging out and back in should work too, but better to be thorough).
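
If it helps, the whole dance as root looks something like this (same example username as above):

    apt install sudo            # in case it isn't there already
    usermod -aG sudo mariah     # add the user to the sudo group
    # log out and back in (or reboot), then check from the user account:
    groups                      # should now include "sudo"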

Grub sometimes includes a timeout longer than I like and you can edit that in the /etc/default/grub file to something of your liking.
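
For reference, the change is one line in that file, followed by regenerating the config:

    # /etc/default/grub
    GRUB_TIMEOUT=1        # seconds the menu waits; 0 skips it entirely

    # apply it:
    sudo update-grub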

Not sure what you mean about the commands, but maybe it’s an issue with your $PATH.


My only experience with homebrew is on macOS and I’ve switched to MacPorts there. Homebrew did some weird permissions things I didn’t care for (chowned all of /usr/local to $USER, if I’m remembering right). It worked fine on a single user system, but seemed like a bad philosophy to me. This was years ago and I don’t know how it behaves on Linux.

I also prefer Firefox, but when I need a Chromium alternative for testing, I opt for the flatpak (or the snap) version personally.


How about option 3: let Proxmox manage the storage and don’t set up anything that requires drive pass through.

TrueNAS and OMV are great, and I went that same VM NAS route when I first started setting things up many years ago. It’s totally robust and doable, but it’s also a pretty inefficient way to use storage.

Here’s how I’d do it in this situation: make your zpools in Proxmox, create a dataset for stuff that you’ll use for VMs and stuff you’ll use for file sharing and then make an LXC container that runs Cockpit with 45Drives’ file sharing plugin. Bind mount the filesharing dataset you made and then you have the best of both worlds—incredibly flexible storage and a great UI for managing samba shares.
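
On the Proxmox host, the storage side of that is only a few commands (pool name, dataset names, and container ID below are just placeholders):

    zfs create tank/vmdata            # add this as ZFS storage in the GUI for VM/CT disks
    zfs create tank/shares            # dataset for file sharing
    # bind mount the share dataset into the Cockpit LXC container (ID 101 here):
    pct set 101 -mp0 /tank/shares,mp=/mnt/shares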


I second mailcow. It’s what I’ve been using for years and it’s pretty great.

One thing I’ll add is before you take the plunge, make sure your VPS address isn’t on a block list somewhere. Pay a visit to mxtoolbox.com and you should find some resources there.
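
Besides the mxtoolbox checks, you can query a DNSBL directly; the trick is reversing the IP’s octets (the address below is just a documentation example):

    # is 203.0.113.25 on the Spamhaus ZEN list?
    dig +short 25.113.0.203.zen.spamhaus.org
    # empty output = not listed; an answer in 127.0.0.x = listed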

You can do this with something like Nextcloud. Just set up a folder shared by a link and you’re able to make it a drop box of sorts that anyone can upload to.

Obviously, be careful allowing arbitrary uploads from the whole internet. I’d set a time limit on the share so people can’t upload junk forever.


Hey, as others have said, you can definitely set up OPNSense in a VM and it works great. I wanted to take a second and answer the first part of your question: it cannot run in Docker. Containers in Docker share their kernel with the Linux host machine. Since OPNSense isn’t a Linux distribution (it’s based on FreeBSD), it can’t make use of the shared Linux kernel.
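
You can actually see that kernel sharing for yourself, which makes it obvious why a FreeBSD-based firewall can’t live in a Docker container:

    uname -r                          # kernel version on the Linux host
    docker run --rm alpine uname -r   # the same version printed from inside a container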

My go-to for this is a plain Debian or Ubuntu container with Cockpit and the 45Drives file sharing plugin. It’s pretty straightforward and works pretty well.
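
On a fresh Debian/Ubuntu container that’s roughly the following (assuming you’ve added 45Drives’ apt repository per their docs; cockpit-file-sharing is the package name I believe they ship):

    apt install cockpit cockpit-file-sharing   # Cockpit web UI plus the Samba/NFS plugin
    systemctl enable --now cockpit.socket      # UI ends up on port 9090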


You can set maintenance schedules in Uptime Kuma and alerts won’t be sent out during those times. I use that for when my backup routines run each night. That seems like a decent cross-platform workaround.

I’ve not done much with podman, but my first thought is that port 53 is privileged and usually podman runs as a non-privileged user, right? Do you have some mechanism in place that would allow podman to use port 53?
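
If rootless podman turns out to be the issue, the usual workaround is to lower the unprivileged port threshold on the host (note this applies to every unprivileged process, not just podman):

    # let unprivileged processes bind ports 53 and up
    sudo sysctl net.ipv4.ip_unprivileged_port_start=53
    # make it stick across reboots
    echo 'net.ipv4.ip_unprivileged_port_start=53' | sudo tee /etc/sysctl.d/99-podman-dns.conf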

I’m a big fan of TrueNAS and Proxmox and I think OMV will be great for you.

In the order you asked:

  • I think OMV is a decent choice, but there isn’t really a bad choice, just better fits for personal tastes.
  • The upside compared to vanilla Debian or Ubuntu is a solid web-UI for management (though you could get that in the form of Cockpit) and a complete system philosophy. The downside is less flexibility. Any system someone else makes locks you into doing things their way.
  • If you don’t have a desire to run VMs, set up clusters, or have ZFS-on-root by default, you won’t be missing anything.

I haven’t seen any deleterious effects from what you’re proposing and I haven’t heard of any either. Were it me, I’d go for it.

To amplify RedWeasel’s very good answer, fstab runs as root and unless you specify otherwise, the share will mount with root as the owner on the local machine. From the perspective of the Samba server it’s the Jellyfin user accessing the files, but on the local machine local permissions come into play as well. That’s why you can get at the files when you connect to the share from Dolphin in your KDE system—it’s your own user that’s mounting the share locally.
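
If you want a regular local user (the jellyfin user, say) to own the mounted files instead, the uid/gid mount options in fstab take care of that. Something like this, with the server path and credentials file as placeholders:

    # /etc/fstab (needs cifs-utils installed)
    //nas.local/media  /mnt/media  cifs  credentials=/etc/samba/creds,uid=jellyfin,gid=jellyfin  0  0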

So I think the way I would want to do this is with something like mailpiler (https://www.mailpiler.org/). It’s been on my long list of things to dive into for a while.


You’ve got some decent answers already, but since you’re getting interested in ZFS, I wanted to make sure you know about discourse.practicalzfs.com. It’s the successor to the ZFS subreddit and it’s a great place to get expert advice.

You could likely use dd or clonezilla to create a duplicate of your boot drive and boot your laptop right from that, but that’s not quite what you’re after.

There are some distros lately that use a declarative config file to set the whole thing up, which I think is much more what you have in mind. The big ones that come up a lot are NixOS and Fedora Silverblue. Maybe one of those systems would be to your liking.


Another option that’s pretty much perfect as long as you don’t need to provide remote support for macs is Remotely (https://github.com/immense/Remotely). You can selfhost it and it works kind of like teamviewer, so pretty simple from the client standpoint.

If you want an image, it doesn’t matter what the underlying file system is. You should be able to use a tool like Clonezilla and get a 1:1 copy. Depending on how you’ve set up partitioning, you could also use sgdisk to set up the proper partitions, zfs send/recv for the data portion of the new drive, and then install a boot loader. That’s probably the way I’d go in this instance.
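
Sketching out that second route (device names, pool names, and snapshot names below are all placeholders, and you’d still have to install the bootloader on the new disk afterward):

    sgdisk -R /dev/sdb /dev/sda            # copy sda's partition table onto sdb
    sgdisk -G /dev/sdb                     # randomize the new disk's partition GUIDs
    zfs snapshot -r tank/data@migrate      # snapshot the dataset(s) you want to move
    zfs send -R tank/data@migrate | zfs recv -F newpool/data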

Depends what you’re after, but I’ve always been partial to gparted live.

I administer a handful of FreePBX systems that run pretty smoothly and are relatively friendly to use. Crosstalk Solutions on YouTube has a bunch of videos on the software if you want to get up to speed about how everything works.

There’s not a lot of information to go on here, but my first thought is that you haven’t configured your VPN to route to the local network. So, while you may be getting a connection to the VPN server, your computer doesn’t know where to send traffic for Cockpit.

There is usually a way to push those routes to the client from your VPN server.
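
What that looks like depends on the VPN software. With OpenVPN, for example, it’s a push line in the server config; with WireGuard it’s the client’s AllowedIPs (the 192.168.10.0/24 LAN below is just an example):

    # OpenVPN server.conf: hand clients a route to the LAN
    push "route 192.168.10.0 255.255.255.0"

    # WireGuard client config: include the LAN in AllowedIPs
    [Peer]
    AllowedIPs = 10.8.0.0/24, 192.168.10.0/24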


It’s been on my agenda for a while to set up a Matrix server with an iMessage bridge with the idea I could interact with all of my message protocols from one place. I haven’t gotten around to it, but it might be worth a look.

My use of Mikrotik is somewhat limited, but in testing I’ve found routing between VLANs to be pretty performant. The key is to offload that routing to the hardware, which not all configurations allow. Check out the Network Berg’s YouTube channel and you should get a good idea.

If you’re looking for something more or less in the same footprint, I understand those cheap Wyze cameras can be used. There are alternative firmwares available that can be flashed to them to open up the rtsp stream to whatever self-hosted recorder you’d like. Haven’t tried it, but have heard it mentioned on the Self Hosted podcast.

Not my reply, but I’ve also had mixed results playing with Netmaker. It’s a project I really want to like, but getting clients to work together is sometimes finicky. It’s a young project, so maybe the kinks will get worked out. I do like the admin UI.

I keep my dotfiles in a git repo and just do a git pull to update them. That could definitely be a cron job if you needed.

SSH keys are a little trickier. I’d like to tell you I have a unique key for each of my desktop machines since that would be best practice, but that’s not the case. Instead I have a Syncthing shared folder. When I get around to cleaning that up, I’ll probably do just that and keep an authorized_keys and known_hosts file in git so I can pull them to the hosts that need them, with a cron job to keep them updated.
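
The cron side of either repo is trivial; something like this in crontab -e (path is a placeholder):

    # pull the dotfiles (and eventually the keys repo) every night at 03:00
    0 3 * * * cd ~/dotfiles && git pull --ff-only --quiet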

I’ve been meaning for years to set up a solid archiving system that I don’t have to manually babysit. I’ve had my eye on mailpiler (https://www.mailpiler.org/), but haven’t found the time to get up to speed on it. In the meantime, I drag messages to a local folder like a barbarian.

There was a recent conversation on the Practical ZFS discourse site about poor disk performance in Proxmox (https://discourse.practicalzfs.com/t/hard-drives-in-zfs-pool-constantly-seeking-every-second/1421/). Not sure if you’re seeing the same thing, but it could be that your VMs are running into the same too-small volblocksize that PVE uses when it makes zvols for its VMs under ZFS.

If that’s the case, the solution is pretty easy. In your PVE datacenter view, go to storage and create a new ZFS storage pool. Point it to the same zpool/dataset as the one you’ve already got and set the block size to something like 32k or 64k. Once you’ve done that, move the VM’s disk to that new storage pool.
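
If I remember the option names right, the CLI version of that new storage entry is something like this (storage ID and dataset are placeholders); the datacenter GUI does the same thing:

    # second ZFS storage on the same dataset, with a bigger volblocksize for new zvols
    pvesm add zfspool vmdata-64k -pool rpool/data -blocksize 64k -content images,rootdir
    # then use Move Storage on the VM's disk to migrate it onto vmdata-64k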

Like I said, not sure if you’re seeing the same issue, but it’s a simple thing to try.

I go a couple different routes: I have a Mailcow instance on a VPS for my personal email. For my business I use Zoho, which has been wonderful. Their basic plan is $1 a month per user and it should have all the features you're looking for.

I’d second this. I’ve got Proxmox installed on some Mac Minis and they do a credible job of it. A beefy Mac Pro would be all the better.

I’ll add that if the main purpose is to be a NAS, something like TrueNAS will be much more set-and-forget.

This is grossly overpowered for a firewall, so I wouldn’t go that route unless you want to do a virtual firewall on top of a general purpose hypervisor.

This is exactly what I use as well. It’s pretty awesome. Backup and restore work like a charm.

This really sounds like a problem with the default route. What’s the output of ip route? That should give us some hints about what’s up.
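
For comparison, a healthy table usually leads with a default route, something like (addresses are just examples):

    $ ip route
    default via 192.168.1.1 dev eth0 proto dhcp
    192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
    # if that default line is missing, this adds one temporarily for testing:
    sudo ip route add default via 192.168.1.1 dev eth0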


That’s awesome — have a great time playing around with it.

I completely agree with this. Seems like a stellar use for either Cloudflare Tunnels or Tailscale’s similar Funnel feature.

Connect it only to the gramos deployment and that will be the only piece of your setup available publicly.

On the crazy low-scale end, I have a no-name dash cam that I found for $5 on a tchotchkes table at my local Chinese takeout place. It works perfectly with my Linux desktop—both reading from the SD card and streaming directly via USB. Not at all what you’re looking for, but it makes me think that if this random junk works, more mainstream devices probably do too.

I’ve got one running in a Proxmox cluster. Getting it set up was a bit particular (due to the T2 chip, if I remember correctly), but it’s been working flawlessly. I use the Quick Sync feature of the iGPU for my Jellyfin container.

If you were going to buy something new, I think there are more cost effective boxes of about the same size and spec, but if you’ve got it already, you should definitely start playing with it.