Is my ZFS partition properly set up?

pe1uca@lemmy.pe1uca.dev to Selfhosted@lemmy.world

I just attached a new volume to my VPS. Usually I follow the provider's instructions using parted and mkfs.ext4, but this time I decided to try ZFS.

The guides I've found online all differ, and I'm not sure I did everything correctly to keep the data safe.
What I mean is, running lsblk -o name,size,fstype,type,mountpoint shows this:

NAME     SIZE FSTYPE   TYPE MOUNTPOINT
vdb      100G          disk
└─vdb1   100G ext4     part /mnt/storage
vdc      100G          disk
├─vdc1   100G          part
└─vdc9     8M          part

You can see the type and mountpoint of the previous volume are listed, but the ZFS ones aren't.

Still, I can access the ZFS pool I created just fine, and I've already copied some test data to it.

root@vps:~/services# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
local-zfs   99.5G  6.88G  92.6G        -         -     0%     6%  1.00x    ONLINE  -
root@vps:~/services# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
local-zfs   6.88G  89.5G     6.88G  /mnt/zfs

These are the commands I ran:

parted -s /dev/vdc mklabel gpt
parted -s /dev/vdc unit mib mkpart primary 0% 100%
zpool create -o ashift=12 -O canmount=on -O atime=off -O recordsize=8k -O compression=lz4 -O mountpoint=/mnt/zfs local-zfs /dev/vdc
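
For what it's worth, the applied properties can be double-checked with something like this (using the pool name from above):

zfs get compression,recordsize,atime,mountpoint local-zfs
zpool get ashift local-zfs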

Does this look good?
Should I do something else? (like writing something to fstab)

The list of properties is very long; are there any you'd recommend I look into for a simple server that currently stores non-critical data?
(I already have a separate backup solution, maybe I'll check to update it later)


I don't think there's anything intrinsically wrong, but as far as I can see you're using only a single disk for the ZFS pool, which gives you integrity checks (you'll know when something is corrupted) but no way to fix it.
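
For example, a scrub on a single-disk pool will find checksum errors but can only report them, not repair them:

zpool scrub local-zfs
zpool status -v local-zfs

A partial mitigation on a single disk is copies=2, which stores every data block twice so ZFS can self-heal some corruption, at the cost of double the space (and it won't help if the whole disk dies).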

Since this is, by today's standards, a tiny disk at 100G, I assume this is just a test setup? I'm not sure ZFS is particularly well suited to running inside virtual machines; I think it's better to have the host handle physical data integrity, either by keeping the disk image on a ZFS filesystem or by giving the VM a ZFS volume (block device) directly.
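
As a sketch of that zvol approach on the host (hypothetical pool name "tank"):

zfs create -V 100G tank/vm-disk
# the zvol shows up as a block device the VM can use directly instead of a disk image:
ls -l /dev/zvol/tank/vm-disk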

On FreeBSD, since v12 came out, the recommendation has been to use ZFS everywhere, including on virtual machines. I don't know about Linux.

Ubuntu and many other distros do not come with ZFS support out of the box due to licensing, so it is not recommended to use ZFS for the root filesystem.

Ubuntu has ZFS on root as one of the options in the normal graphical installer. I have it running on multiple machines.

This must have changed with 23.04 or something then, because when I set up my home server a little over a year ago, ZFS on root not only wasn't part of the installer, it was heavily recommended against as something you could only hack in. Basically, you could do it but you shouldn't, was my impression. I ended up with ext4 as root and mounted my ZFS storage in my home directory.

I wouldn't set recordsize that small on a modern system; I left mine at 1M for a box that stores anything from small config files to full movies. Also set dedup=off (I can't remember exactly why now, but I believe it behaves badly when your drive is nearly full?). Another property worth looking at is xattr; I set it to 'sa' on all my systems because a lot of programs use extended attributes now. All of my pools are set up as raidz2 or mirrors, and ZFS is an awesome filesystem for protecting the integrity of your files.
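
If you want to switch those on the pool from your post, something like this should do it (note recordsize only affects files written afterwards):

zfs set recordsize=1M local-zfs
zfs set xattr=sa local-zfs
zfs set dedup=off local-zfs    # off is the default anyway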

Otherwise it looks pretty good? Keep an eye on "zpool status" for any bad drives in your pool(s), and hopefully your system added a cron job to scrub your pools once a month.
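
If your distro didn't set that up (on Debian/Ubuntu the zfsutils-linux package ships a monthly scrub cron job), a single cron.d entry along these lines is enough (adjust the zpool path for your system):

0 3 1 * * root /usr/sbin/zpool scrub local-zfs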

Dedup is apparently very resource-intensive and not worth the overhead unless you're expecting 5x+ reductions in space usage

Probably depends though
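
If you're curious, you can estimate the payoff before turning it on; zdb can simulate dedup against the data already in a pool:

zdb -S local-zfs

That prints a simulated dedup table histogram and the overall ratio you'd get; unless it comes back well above 1.00x, dedup isn't worth the RAM.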

Oh, for some reason I thought the reason to remove it was that it limited or broke some of the built-in sanity checking that maintains data integrity. I do remember reading in a number of places that it absolutely should be turned off at all times, but it's been a while since I set up a new ZFS pool, so I was just going off my notes of what settings I use (and I confirmed I still use those on all the current pools).