Pyrosis

@Pyrosis@lemmy.world
0 Posts – 59 Comments
Joined 3 months ago

How about defense against DHCP option 121 changing the routing table and decloaking all VPN traffic, even with your kill switch on? Do they have a plan for that yet? Just found this today.

https://www.leviathansecurity.com/blog/tunnelvision


Pretty much this: it gets its own folder and, in Jellyfin, its own library. You just give Mom access to that library and whatever else you want, and unselect it for everyone else. The setting is under Users. It's straightforward and checkbox based. You probably have it set to all libraries right now; uncheck that and you can pick and choose per user.

Honestly, at this point that is Docker and Docker Compose.

As to what to run it on, that very much depends on preference. I use a Proxmox server, but it could just as easily be pure Debian. A basic web UI like Cockpit can simplify system management a bit.
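As a taste, a minimal docker-compose.yml, with a throwaway nginx container standing in for whatever service you actually want (image and port are placeholders):

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    restart: unless-stopped

Drop that in a folder and docker compose up -d brings it up; docker compose down tears it down.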


I completely agree with most of your comment, minus the bit about the freedom to choose different disk sizes. You absolutely can do that with btrfs, or by throwing a virtual layer on top of some disks with something like mergerfs.


A firewall, and deciding on an entry point for system administration, are big considerations.

Generating a strong unique password helps immensely. A password manager can help with this.

If this is hosting services, reduce open ports with something like Nginx Proxy Manager or equivalent. Tailscale and its equivalents (WireGuard, wg-easy, Headscale, Netbird, and Netmaker) are also options.

Getting HTTPS right matters. It's not such a big deal if all the services are internal, but even then it's not hard to create an internal certificate authority and issue certs for services.

If you have a server on a VPS, the firewall is again your primary defense. However, if you expose something like SSH, fail2ban can help ban IPs that make repeated attempts to log in to your system. It isn't a drop-in replacement for proper SSH configuration: you should be using key-based login and moving your SSH configuration away from password logins, as sketched below.
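A sketch of that hardening, assuming your OpenSSH build honors a drop-in config directory (the file name is arbitrary):

# /etc/ssh/sshd_config.d/hardening.conf
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password

Reload with systemctl reload ssh, and apt install fail2ban covers the banning side with its stock sshd jail.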

It also helps, if you are using something like a proxy in front of services, to set up a filter list. NPM, for example, allows you to outright deny connection attempts from specific IP ranges, or to deny everything and allow only specific public IPs.

Also, if you are using something like Proxmox, remember to configure your services for least privilege. The idea is to give a service just what it needs to operate and no more. This can encompass service user/group names for file access, etc.

All these steps add up to pretty good security if you constantly assess.

Even basic steps in here like turning on the firewall and only opening ports your services need help immensely.

Just a little heads-up about multiple USB drives: they kinda suck at sharing a hub, and RAID tends to destroy them because of the way they "share" bandwidth on the hub.

One solution to this problem is a USB-C to SATA enclosure. The idea is that the enclosure has its own SATA controller and a few SATA ports you can plug drives into. You avoid the multi-port "sharing" issue: the enclosure gets all of the USB bandwidth, and the hub isn't switching around between ports.

I learned this bit of info messing with ZFS and a few different types of flash media. In the end, the most stable, least error-prone setup was a single USB connection, and it didn't matter whether that was a single drive or a multi-drive enclosure.

Today I wouldn't recommend doing this at all. If you are going to anyway, have a look at how port sharing on a USB hub works and how it can wreck a RAID over time.

Edit: spelling


Hardware support can be a bit of an issue with BSD in my experience. But if you're asking about hardware, Jellyfin doesn't need as much as you may think.

It can transcode just fine with Intel Quick Sync.

So basically any modern Intel CPU, or one slightly older.

What you need to consider more is storage space for your system and whether your system will do more than just Jellyfin.

I would recommend a barebones server from Supermicro, something you could throw a few SSDs into.

If you are not too stuck on BSD, maybe have a look at Debian or Proxmox. Either way I would recommend docker-ce, mostly because this particular Jellyfin image is very well maintained.

https://fleet.linuxserver.io/image?name=linuxserver/jellyfin
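A rough compose sketch for that image (IDs, timezone, and paths are placeholders):

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config
      - /tank/media:/data/media
    ports:
      - "8096:8096"
    restart: unless-stopped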


Have a look at Stirling PDF. It's a self-hosted alternative to most if not all of the Adobe functions she might care about. It can be set up with Docker.

https://github.com/Stirling-Tools/Stirling-PDF

What is the underlying filesystem of the Proxmox hypervisor, and how did you pass storage into the OMV VM? Also, is anything else accessing this storage?

I ask because...

The "file lock ESTALE" error in the context of NFS indicates that the file lock has become "stale." This occurs when a process is attempting to access a file that is locked by another process, but the lock information has expired or become invalid. This can happen due to various reasons such as network interruptions, server reboots, or changes in file system state.


Are you using tvheadend and their jellyfin plugin? Asking out of curiosity.

https://github.com/tvheadend/tvheadend

Anyway, Plex and Emby come to mind.

This is a journey that will likely fill you with knowledge. During that process what you consider "easy" will change.

So the answer right now for you is use what is interesting to you.

Yes, there are plenty of ways to do the same thing. IMO, though, right now just jump in and install something, then play with it.

Just remember modern CPUs can host many services from a single box. How they do that can vary.

Probably these directories...

/tmp /var/tmp /var/log

The first two are easy to migrate to tmpfs if you are trying to reduce disk writes. Logs can be a little trickier because of permissions. It is worth getting right if you are concerned about all those little writes on an SSD, especially if you have plenty of memory.

This is filesystem-agnostic, btw, so the procedure applies to other filesystems on Linux operating systems.
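For the first two, something like this in /etc/fstab does it (the sizes are just examples), followed by a reboot or mount -a:

tmpfs  /tmp      tmpfs  defaults,noatime,mode=1777,size=2G  0  0
tmpfs  /var/tmp  tmpfs  defaults,noatime,mode=1777,size=1G  0  0

For /var/log, something like log2ram handles the permissions and syncs the logs back to disk on a timer and at shutdown.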

Not without good logs or debugging tools.

You need to know what to observe. You are not going to get the information you are looking for directly from zfs or even system logs.

What I suggest stands. You have to understand the behavior of the USB controller. That information is acquired from researching USB itself.

Now, if you intend to use something like a USB enclosure, you would indeed be better off with something like ext4. However, keep in mind that this effect is not directly a filesystem issue; it's an issue with how USB controllers interact with filesystems.

That has been my experience from researching this matter. ZFS is simply more sensitive.

In my experience, even on motherboards with limited ports it's possible to take advantage of PCIe lanes and install an HBA with an onboard SATA controller. They also make PCIe cards that accept NVMe drives.

Good luck with your experimentation and research.

Of course, but you don't control rogue DHCP servers some asshat might plug in anywhere else that isn't your network.

Nothing but love for that project. I've been using docker-ce and docker-compose. I had portainer-ce but just got tired of it. It's easier for me to just write a compose file and get things working exactly like I want.

This takes a degree of understanding of what you are doing and why it fails.

I've done some research on this myself and the answer is the USB controller, specifically the way the USB controller "shares" bandwidth. It is not how a SATA controller or a PCIe lane deals with this.

ZFS expects direct control of the disk to operate correctly and anything that gets in between the file system and the disk is a problem.

In the case of USB, let's say you have two USB-to-NVMe adapters plugged into the same system in a basic ZFS mirror. ZFS will expect to mirror operations between these devices but will be interrupted by the USB controller constantly switching bandwidth between the two.

A better but still bad solution would be something like a USB to SATA enclosure. In that situation, if you installed a couple of disks in a mirror inside the enclosure, they would be using a single USB port and the controller would at least keep the data on one lane instead of constantly switching.

Regardless if you want to dive deeper you will need to do reading on USB controllers and bandwidth sharing.

If you want a stable system, give ZFS direct access to your disks, and accept that it will damage ZFS operations over time if you do not.


Yup, and negligible. If I'm forced to contend with a Windows environment, BitLocker gets used.

I also use a RAM disk in Windows (ImDisk) and migrate temp files and logs into it. It saves on disk writes and increases privacy.

It's pretty straightforward to encrypt if you're using Linux, right from install time.

As for my server, I too use Nextcloud. However, the Nextcloud data is on a ZFS dataset, and that dataset is encrypted.

I did this by installing Nextcloud from Docker running within a Proxmox container. That LXC container has the Nextcloud dataset passed into it.

It depends on your needs. It's entirely possible to just format a bunch of disks as XFS and set up some mount points you hand to a union filesystem like mergerfs or whatever. Then you would just hand that to Proxmox directly as a storage location. Management can absolutely vary depending on how you do this.

At its heart it's just Debian, so it has all the abilities of Debian. The web UI is more tuned to VM/LXC management operations. I don't really like the default LVM/ext4, but they do that to give access to snapshots.

I personally just imported an existing zfs pool into proxmox and configured it to my liking. I discovered options like directly passing datasets into lxc containers with lxc options like lxc.mount.entry

I recently finished optimizing my Proxmox install for disk I/O performance. It's modified with things like log2ram, tmpfs entries in fstab for /tmp and /var/tmp, TCP congestion control set to cubic, a virtual OPNsense heavily tuned for 10 Gb performance, and a bunch of ZFS media datasets consolidated into one media dataset optimized for performance. There are just so many tweaks and knobs to turn in Proxmox that can increase performance. Folks even mention Docker; I've got it contained in an LXC. Active RAM usage for all my services is down to 7 GB, and disk I/O sits around 0.9 to 8%. That's crazy, but it just works.
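The congestion-control bit, for example, is just a sysctl (cubic is usually the kernel default already, so this only checks and pins it):

# check what's in use
sysctl net.ipv4.tcp_congestion_control
# pin it via a drop-in file
echo "net.ipv4.tcp_congestion_control = cubic" > /etc/sysctl.d/99-tcp.conf
sysctl --system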

So you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was referring to whether you kept the defaults during installation, which would be LVM/ext4, or changed to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to "pass through" as SCSI devices. Just be aware this is not true passthrough: it is slightly virtualized, even though it hands the entire storage of the device to the VM. The only true passthrough, without that slight virtualization, is PCI passthrough using IOMMU.

I have some experience with this specifically because of a client doing something similar with a TrueNAS VM. They discovered they couldn't import their pool into another system because Proxmox had slightly virtualized the disks when they were added to the VM in this manner. In other words, ZFS wasn't directly managing the disks; it was managing virtual disks.

Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are these ext4, XFS, btrfs? mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem, which means you have another couple of layers of complexity in your setup.

If you are worried about disk I/O, you may consider letting the hypervisor manage these disks and storage a bit more directly, removing some of the filesystem layers.

I could recommend just making a single zfs pool from these disks within proxmox to do this. Obviously this is a pretty big transition on a production system. Another option would be creating a btrfs raid from these disks within proxmox and adding that mountpoint as storage to the hypervisor.

Personally I use ZFS, but btrfs works well enough. Regardless, this would allow you to just hand storage to VMs from the GUI, and the hypervisor would handle disk I/O much more efficiently.
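For a rough idea, either route boils down to a single command on the hypervisor (device names here are placeholders, and you'd obviously do this with empty disks, not your live data):

# ZFS: a mirrored pool created directly on the Proxmox host
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

# or btrfs: a RAID1 across two disks, then mount it and add it as storage
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc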

As for the error, it's typically repaired by unmount/mount operations. As I mentioned before, the causes vary, but it's usually a loss of network connectivity or an inability to lock something that's in use.

My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


Reposted to OP, as OP claims his comments are being purged.

Oh, then definitely Tvheadend. You can run the server lots of ways, even in Docker. It also has plugin support.

It's the production vs. development issue. My advice is the old tech advice: "If it's not broken, don't try to fix it."

Modified, in this case, into keeping a separate Proxmox development environment. Btw, Proxmox is perfect for this, with VM and container snapshots.

When you get a VM or container into a more production-ready state, then you can attempt migrations. That way the users don't kill you :)

Have you considered the increase in disk I/O, and that hypervisors prefer to be in control of all hardware? Including disks...

If you are set on proxmox consider that it can directly share your data itself. This could be made easy with cockpit and the zfs plugin. The plugin helps if you have existing pools. Both can be installed directly on proxmox and present a separate web UI with different options for system management.

The safe things here to use are the filesharing and pool management operations. Basically use the proxmox webui for everything it permits first.

Either way have fun.


Yup, you can. In fact you likely should, and you will probably find disk I/O improves dramatically compared to your original plan. In my opinion it's better to let the hypervisor manage disk operations, which means it should also share files over SMB and NFS, especially if you are already considering NAS-type operations.

Since proxmox supports zfs out of the box along with btrfs and even XFS you have a myriad of options. You combine that with cockpit and you have a nice management interface.

I went the ZFS route because I'm familiar with it and I appreciate its native sharing options built into the filesystem. It's cool to have the option to create a new dataset off the pool and directly pass it into a new LXC container.


Bookmark this if you utilize zfs at all. It will serve you well.

https://jrs-s.net/2018/08/17/zfs-tuning-cheat-sheet/

You will be amused by ZFS performance in Proxmox given all the tuning that is possible. If this is going to be an existing ZFS pool, keep in mind it's easier to just install Proxmox with the ZFS option and let it create a ZFS rpool during setup. For the rpool, tweak a couple of options: make sure ashift is at least 12 during the install, or 13 if you are using some crazy-fast SSD as the Proxmox disk for the rpool.

It needs to be 12 if it's a modern-day spinner, and that's probably a good setting for most SSDs. Do not go over 12 if it's a spinning disk.

Now beyond that you can directly import your existing zfs pool into proxmox with a single import command. Assuming you have an existing zfs pool.

In this scenario zfs would be fully maintaining disk operations for both an rpool and a media pool.

You should consider tweaking a couple of things to really improve performance via the guide I linked.

Proxmox VMs/zvols live in their own dataset. Before you start getting too crazy creating VMs, make sure you are taking advantage of all the performance tweaks you can. By default the record size for all datasets is 128K. qcow2, raw, and even zvols will benefit from a record size of 64K because it tends to improve the performance of the underlying filesystems, things like ext4, XFS, even UFS. IMO it's silly to create VM filesystems like btrfs if your VM is sitting on top of a CoW filesystem already.

Another huge improvement is tweaking the compression algorithm. lz4 is blazing fast and should be your default go-to for ZFS. The newer one (zstd) is pretty good but can slow things down a bit for active operations like live VM disks. So make sure your default compression is lz4 for datasets with VM disks. Honestly it's just a good default to specify for the entire pool; you can select other compression for datasets with more static data.

If you have a media dataset full of files like music, videos, and pics, setting a record size of 1M will heavily improve disk I/O.
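Something like this, assuming the stock rpool layout from a ZFS install (dataset names are examples):

zfs set compression=lz4 rpool                # pool-wide default
zfs set recordsize=64K rpool/data            # the dataset holding VM disks
zfs create -o recordsize=1M rpool/media      # a dataset just for big media files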

In Proxmox, ZFS will default to grabbing half of your memory for ARC. Make sure you change that after install. There's a file that defines zfs_arc_max in bytes; set the max to something more reasonable if you have 64 gigs of memory. You can also define zfs_arc_min.
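The file looks something like this (the sizes are examples, pick what fits your box):

# /etc/modprobe.d/zfs.conf -- cap ARC at 8 GiB, floor at 2 GiB
options zfs zfs_arc_max=8589934592
options zfs zfs_arc_min=2147483648

Then refresh the initramfs so it's picked up early (update-initramfs -u -k all) and reboot.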

Some other huge improvements? If you are using an SSD for your Proxmox install, I highly recommend you install log2ram on your hypervisor. It will stop all those constant log writes to your SSD, and it syncs them to disk on a timer and at shutdown/reboot. It's also a huge performance and SSD-lifespan improvement to migrate /tmp and /var/tmp to tmpfs.

So many knobs to turn. I hope you have fun playing with this.

Another thing to keep in mind with ZFS: underlying VM disks will perform better if the pool is a mirror or a stripe of mirrors. Z1/Z2-type pools are better for media and files; VM disk I/O will improve dramatically on the mirror-type layout. Just passing on what I've learned over time optimizing systems.

I noticed some updates on live video streaming. I do wonder if that will help with how Jellyfin interprets commercial breaks.

Let's say I have an m3u8 playlist with a bunch of video streams. I've noticed in Jellyfin that when they go to, say, a commercial, the stream freaks out. It made me wonder if the player just couldn't understand the ad insertion.

Anyway, wonderful update regardless, and a huge improvement.

Usually this comes down to resource and energy efficiency. While a VM works perfectly fine, you will find you can share video and storage resources more efficiently with LXC.

For example, you can directly pass a ZFS dataset into an LXC with a simple lxc.mount.entry:
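A minimal sketch of what that looks like, assuming a pool called tank and container 101 (names and paths are examples):

# /etc/pve/lxc/101.conf
lxc.mount.entry: /tank/media mnt/media none bind,create=dir 0 0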

This would allow you to tune options like record size, atime, compression algorithm, xattr, etc. on the dataset without much overhead.

It's also nice to know you can share your GPU with multiple lxc without it being locked into a single vm.

At its core Cockpit is like a modern-day Webmin that allows full system management. So yes, it can help with creating RAID devices and even LVM volumes. It can help with mount points and encryption as well.

I do know it can help share whatever with smb and NFS. Just have a look at the plugins.

As for proxmox it's just using Debian underneath. That Debian already happens to be optimized for virtualization and has native zfs support baked in.

https://cockpit-project.org/applications

My favorite is using the native ZFS replication capabilities (send/receive). Though that requires ZFS and snapshots configured properly.
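A hedged sketch, assuming a dataset called tank/nextcloud and a second box reachable over SSH:

# snapshot, then replicate it to the backup host
zfs snapshot tank/nextcloud@nightly-2024-05-01
zfs send tank/nextcloud@nightly-2024-05-01 | ssh backuphost zfs receive -u backup/nextcloud

Subsequent runs can use zfs send -i old-snapshot new-snapshot to ship only the changes.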

I like to utilize nginx proxy manager alongside docker-ce and portainer-ce.

This allows you to forward web traffic to a single internal NPM IP. As for setting up the service IPs, I like to use the gateway IPs that Docker generates for each service.

If you have docker running on the same internal IP as NPM you can directly configure the docker gateway ips for each service within the NPM web configuration.

This dumps the associated traffic into the container network for another layer of isolation.

This is a bit of an advanced configuration but it works well for my environment.

I would just love some support for QUIC within NPM.

If you are somewhat comfortable with the CLI, you could install Proxmox on ZFS and then create datasets off the pool to do whatever you want. If you wanted a nicer GUI to manage ZFS, you could also install Cockpit on the Proxmox hypervisor directly, along with the ZFS plugin, to manage the datasets and share them a bit more easily. Obviously you could do all of that from the command line too.

Personally I use Proxmox now, where before I made use of Debian. The only reason I switched was that it made VM/LXC management easy. As for TrueNAS, it's also basically Debian with a different GUI. These days I'm more focused on optimization in my home-lab journey. I hope you enjoy the experience, however you begin and whatever applications you start with.

Hmm. If you are going to have proxmox managing zfs anyway then why not just create datasets and share them directly from the hypervisor?

You can do that in the terminal, but if you prefer a GUI you can install Cockpit on the hypervisor with the ZFS plugin. It would create a separate web GUI on another port, making it easy to create, manage, and share datasets as you desire.

It will save resources and simplify zfs management operations if you are interested in such a method.

I'm specifically referencing this little bit of info for optimizing zfs for various situations.

VMs, for example, should live in their own dataset with a tuned record size of 64K.

Media should live in its own dataset with a tuned record size of 1M.

lz4 is quick and should always be enabled. It will also work efficiently with larger record sizes.

Anyway all the little things add up with zfs. When you have an underlying zfs you can get away with more simple and performant filesystems on zvols or qcow2. XFS, UFS, EXT4 all work well with 64k record sizes from the underlying zfs dataset/zvol.

Btw it doesn't change immediately on existing data if you just change the option on a dataset. You have to move the data out then back in for it to have the new record size.
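Roughly, assuming a media dataset and a scratch dataset on the same pool to bounce the files through (names are examples):

zfs set recordsize=1M tank/media
# a mv across dataset boundaries is a copy + delete, so the files come back rewritten at the new record size
mv /tank/media/movies /tank/scratch/
mv /tank/scratch/movies /tank/media/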


Keep in mind it's more an issue with writes, as others mentioned, when it comes to SSDs. I use two SSDs in a ZFS mirror that I installed Proxmox directly on. It's an option in the installer and it's quite nice.

As for combating writes, that's actually easier than you think and applies to any filesystem. It just takes knowing what is write-intensive. Most of the time, for a Linux OS like Proxmox, that's going to be temp files and logs, both of which can easily be migrated to tmpfs. Doing this will increase the lifespan of any SSD dramatically. You just have to understand that restarting clears those locations, because they now exist in RAM.

As I mentioned elsewhere, OPNsense has an option within the GUI to migrate tmp files to memory.

You are very welcome :)

It looks like you are using legacy BIOS. Mine is using UEFI with a ZFS rpool:

proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
31FA-87E2 is configured with: uefi (versions: 6.5.11-8-pve, 6.5.13-5-pve)

However, as with everything, a method always exists to get it done. Or not, if you are concerned.

If you are interested it would look like...

Pool Upgrade

sudo zpool upgrade rpool

Confirm Upgrade

sudo zpool status

Refresh boot config

sudo proxmox-boot-tool refresh

Confirm Boot configuration

cat /boot/grub/grub.cfg

You are looking for directives like this to see if they are indeed pointing at your existing rpool

root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet

Here is my file if it helps you compare...

#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#

### BEGIN /etc/grub.d/000_proxmox_boot_header ###
#
# This system is booted via proxmox-boot-tool! The grub-config used when
# booting from the disks configured with proxmox-boot-tool resides on the vfat
# partitions with UUIDs listed in /etc/kernel/proxmox-boot-uuids.
# /boot/grub/grub.cfg is NOT read when booting from those disk!
### END /etc/grub.d/000_proxmox_boot_header ###

### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
  set have_grubenv=true
  load_env
fi
if [ "${next_entry}" ] ; then
   set default="${next_entry}"
   set next_entry=
   save_env next_entry
   set boot_once=true
else
   set default="0"
fi

if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
else
  menuentry_id_option=""
fi

export menuentry_id_option

if [ "${prev_saved_entry}" ]; then
  set saved_entry="${prev_saved_entry}"
  save_env saved_entry
  set prev_saved_entry=
  save_env prev_saved_entry
  set boot_once=true
fi

function savedefault {
  if [ -z "${boot_once}" ]; then
    saved_entry="${chosen}"
    save_env saved_entry
  fi
}
function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

if loadfont unicode ; then
  set gfxmode=auto
  load_video
  insmod gfxterm
  set locale_dir=$prefix/locale
  set lang=en_US
  insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
  set timeout=30
else
  if [ x$feature_timeout_style = xy ] ; then
    set timeout_style=menu
    set timeout=5
  # Fallback normal timeout code in case the timeout_style feature is
  # unavailable.
  else
    set timeout=5
  fi
fi
### END /etc/grub.d/00_header ###

### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=cyan/blue
set menu_color_highlight=white/blue
### END /etc/grub.d/05_debian_theme ###

### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
        set gfxpayload="${1}"
}
set linux_gfx_mode=
export linux_gfx_mode
menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sdc3' {
        load_video
        insmod gzio
        if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
        insmod part_gpt
        echo    'Loading Linux 6.5.13-5-pve ...'
        linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
        echo    'Loading initial ramdisk ...'
        initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
}
submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-/dev/sdc3' {
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-advanced-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.13-5-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
        }
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-recovery-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.13-5-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
        }
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-advanced-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.11-8-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
        }
        menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-recovery-/dev/sdc3' {
                load_video
                insmod gzio
                if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                insmod part_gpt
                echo    'Loading Linux 6.5.11-8-pve ...'
                linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                echo    'Loading initial ramdisk ...'
                initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
        }
}

### END /etc/grub.d/10_linux ###

### BEGIN /etc/grub.d/20_linux_xen ###

### END /etc/grub.d/20_linux_xen ###

### BEGIN /etc/grub.d/20_memtest86+ ###
### END /etc/grub.d/20_memtest86+ ###

### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###

### BEGIN /etc/grub.d/30_uefi-firmware ###
menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
        fwsetup
}
### END /etc/grub.d/30_uefi-firmware ###

### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###

### BEGIN /etc/grub.d/41_custom ###
if [ -f  ${config_directory}/custom.cfg ]; then
  source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
  source $prefix/custom.cfg
fi
### END /etc/grub.d/41_custom ###

You can see the relevant lines in the linux sections.

Most filesystems, as mentioned in the guide, that live inside qcow2, raw, or zvol images sitting on a ZFS dataset would benefit from a ZFS recordsize of 64K. By default the recordsize will be 128K.

I would never use 1M for any dataset that has VM disks inside it.

I would create a new dataset for media off the pool and set a recordsize of 1M. You can only really get away with this if you have media files directly inside this dataset, so pics, music, videos.

The cool thing is you can set these options on an individual dataset basis, so one dataset can have one recordsize and another dataset can have another.

It's definitely encrypted; they can just tell by the traffic signature that it is WireGuard or whatever and block it.

They could do this with SSH if they felt like it.

Usually a reverse proxy runs behind the firewall/router. The idea is that you point 80/443 at the proxy with port forwarding once traffic hits your router.

So if someone goes to service.domain.com

You would have dynamic DNS keeping domain.com pointed at your router's IP.

You would tell domain.com that service.domain.com exists as a CNAME or an A record. You could also say *.domain.com is a CNAME; that would point any hostname to your router.

From there, in the proxy, you would say service.domain.com points to your service's IP and port. Usually that would be on the LAN, but in your case it would be through a tunnel.

It is possible and probably more resource efficient to just put the proxy on the VPS and point your public domain traffic directly at the VPS IP.

So on the domain you could say service.domain.com points to the VPS IP as an A record, and service2.domain.com points to the VPS IP as another A record.

You would allow 80/443 on the VPS and create entries for the services

Those would look like service.domain.com pointing to localhost:port.

In your particular case I would just run the proxy on the public VPS the services are already on.

Don't forget you can enable https certificates when you have them running. You can secure the management interface on its own service3.domain.com with the proxy if you need to.
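Under the hood, an NPM proxy host boils down to roughly this kind of plain nginx server block (hostname, port, and cert paths here are placeholders; NPM manages all of this from its UI):

server {
    listen 443 ssl;
    server_name service.domain.com;

    ssl_certificate     /etc/letsencrypt/live/service.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/service.domain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}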

And OP, consider some blocklists for your VPS firewall, like Spamhaus. It wouldn't hurt to set up fail2ban either.

Setups for hardware decoding are based on the underlying OS. A quite common example is Docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times individual files, into your Jellyfin Docker container with the device option. Commonly that would be /dev/dri.

It gets more complicated with a VM, because you are likely going to be passing the hardware directly into the VM, which will prevent anything outside the VM from using it.

You can get around this by placing Docker directly on the OS, or by placing Docker in a Linux container with appropriate permissions and the same devices passed into that container. In this manner, system devices and other services will still have access to the video card.

All this to say: it depends on your setup and where you have Docker installed how you will pass the hardware into Jellyfin. However, Jellyfin on Docker will need you to pass the video card into the container with the device option, and Docker will need to see the device on the host to be able to do that.
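As a sketch, a docker run for the linuxserver image with an Intel iGPU passed through (paths, IDs, and timezone are placeholders):

docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/data/media \
  -p 8096:8096 \
  lscr.io/linuxserver/jellyfin:latest

Depending on the image and host, the container user may also need the host's render group (--group-add <gid>) to open /dev/dri.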
