What is a good multirole server setup for a racked server?

The Stoned Hacker@lemmy.world to Selfhosted@lemmy.world – 19 points –

I recently purchased a Dell PowerEdge R730 at a killer price, and intend it to be the cornerstone of my home lab. I plan to use it as both a NAS and a container server so I can set up whatever I want with it. I'm a bit unsure of what a good setup here looks like, so I'm hoping for a bit of guidance.

As my R730 has 16 drive bays, I intend for 10 of those to be high capacity HDDs for the NAS with the remaining spots for SSDs for the containers. The R730 will also have a PERC H730 RAID controller. I want a full featured NAS solution (although I am open to more lightweight solutions) so my go to thought is TrueNAS. My plan was to install Proxmox and run TrueNAS on top of it, but I am unsure if this is the best method. Does anyone have any insight on how well this works or if there's a cleaner solution?

Addendum: Anyone have any recommendations for RAID setups? I currently have 4x900 GB 10k SAS Dell Enterprise drives but I intend to bump that up to 10x900 GB over time. I'd like to be able to add these without much hassle, but I'm unsure what to go with. It seems that ZFS can handle it well alone, but I don't want to have gotten the good raid controller for nothing so I'm wondering if using ZFS with the RAID controller in HBA mode will be more worth it than a dedicated RAID setup. And if I'm using a RAID setup, should I go RAID or unRAID? If I go RAID, is RAID 01, 10, or 60 a better option here? Based on my research, it sounds like I'll need a lot more drives for a proper RAID setup and it'll be less flexible, but I would like some second opinions.


Go always with software RAID where possible to avoid vendor lock-in.

Can you elaborate on the scenario this is solving for? Isn't software RAID a performance hit?

It's cheaper, it gives better visibility into drive health, and copy-on-write (CoW) means a file is extremely unlikely to be corrupted by a power failure (with hardware RAID, you're relying on the battery in the RAID controller for that protection; I guess you could run a CoW filesystem on top of hardware RAID). CoW also helps spread wear on SSDs.
ZFS will heal data if it finds corrupted blocks; I'm not sure a hardware RAID does.
ZFS is the same everywhere and is managed entirely in software (as opposed to the Dell PERCs, which I believe require booting into what is essentially a BIOS; certainly I've never had them work through iDRAC), and you don't have to learn that RAID controller's management UI (although they're never difficult).
A hardware controller is also another part that could fail and require a like-for-like replacement. ZFS just needs to be able to access the drives.
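The self-healing and drive-health visibility mentioned above are both plain commands. A minimal sketch, assuming a pool named `tank` (a placeholder) and a SAS/SATA drive at `/dev/sda`:

```shell
# Kick off a scrub: ZFS reads every block, verifies checksums,
# and repairs bad blocks from parity/mirror copies.
zpool scrub tank

# Show pool health, scrub progress, and per-drive
# READ/WRITE/CKSUM error counters.
zpool status -v tank

# SMART data is readable directly, no controller UI needed.
smartctl -a /dev/sda
```

A `zpool scrub` on a cron timer (monthly is a common choice) is how most people get the "heal data if it finds corrupted blocks" behavior proactively rather than only on read.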

I looked into it ages ago, and ZFS on HBA made so much more sense than a $300 used raid controller.

For me, the inability to reassemble a RAID array on a different server (with a different controller, or without one at all) for data recovery is by itself a big "NO" to any RAID controller in a home lab.

While it is fun to have an "industrial grade" thing, it isn't fun to recover data from such arrays. Also, ZFS is a very good filesystem (imagine fitting 4.8 TB of data on a 4 TB mirror; that's my case with zstd compression), but it doesn't play well with RAID controllers: you'll experience slowdowns and frequent data corruption.

Good to know, I appreciate the help! Do you think ZFS is a reasonable alternative to using RAID here?

I've been using ZFS on Proxmox for a couple of years under different workloads (home servers, production at work), and it is very good.

Just tune it as you need :)

Thanks a ton! I'm on the Proxmox forums trying to figure out whether I should stick with the H330 that came with the server and return/sell the H730 I got, or use the H730 instead. It seems there's a recent thread where they're figuring this out, so I'll get to the bottom of it.

Imo get the H730 if it's financially reasonable. The passthrough is better supported in my experience. You can resell the H330 fairly easily.

Turns out they put the H730 in the server already, so I never got an H330. I still want to test the SMART data, but it looks like the newer firmware should be fine.

Be aware! The Dell R730 most likely comes with a RAID controller that is not suited for ZFS. You need a true HBA instead. Some RAID controllers do let you set them up in JBOD mode, but that is still not ideal for ZFS: you want a proper HBA, or a RAID controller whose firmware you can flash to IT mode.

For ZFS storage plus apps and more, TrueNAS SCALE might be interesting to you.

I've been reading that the updated firmware for the PERC H730 has no issues in HBA mode, and there's a thread from December in the Proxmox forums on using an H330 and H730 and they seem to work fine. I'm trying to get more clarification in that thread, but I'll also do some testing myself.

My plan was to install Proxmox and run TrueNAS on top of it

Proxmox runs ZFS natively already so there's not much reason to bother with TrueNAS IMO. If you need SMB shares and that sort of thing you can run a container and mount the ZFS volume into it.
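The "container with the ZFS volume mounted into it" approach is two commands on the Proxmox host. A minimal sketch, assuming a pool named `tank` and an existing LXC container with ID `101` (both placeholders):

```shell
# Create a dataset on the host's pool for shared files.
zfs create tank/shares

# Bind-mount it into LXC container 101 at /mnt/shares;
# run Samba or NFS inside that container to export it.
pct set 101 -mp0 /tank/shares,mp=/mnt/shares
```

The dataset stays owned and snapshotted by the host, so a misbehaving container can't take the pool with it.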

I currently have 4x900 GB 10k SAS Dell Enterprise drives but I intend to bump that up to 10x900 GB over time. I’d like to be able to add these without much hassle

If you want to easily add drives later, then as far as I know the only good option is the controller in HBA mode with unRAID in a VM. Neither hardware RAID nor ZFS makes adding individual drives very easy (with ZFS you grow a pool a whole vdev at a time).

I’m wondering if using ZFS with the RAID controller in HBA mode will be more worth it than a dedicated RAID setup

I think ZFS RAID with the controller in HBA mode is worth it versus traditional hardware RAID: it's more portable and less reliant on specific hardware.

And if I’m using a RAID setup, should I go RAID or unRAID? If I go RAID, is RAID 01, 10, or 60 a better option here?

With 10 drives I would probably do ZFS RAIDz2 if this was my setup. (RAIDz2 has 2 parity drives like RAID 6).
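Creating that 10-drive RAIDz2 pool is a single command. A sketch with placeholder pool and device names; using `/dev/disk/by-id/` paths rather than `/dev/sdX` keeps the pool importable if drives get reordered:

```shell
# 10-drive RAIDz2: two drives' worth of parity, so any
# two disks can fail without data loss. Usable space is
# roughly 8 drives' worth. Device IDs below are placeholders.
zpool create tank raidz2 \
  /dev/disk/by-id/scsi-DRIVE0 /dev/disk/by-id/scsi-DRIVE1 \
  /dev/disk/by-id/scsi-DRIVE2 /dev/disk/by-id/scsi-DRIVE3 \
  /dev/disk/by-id/scsi-DRIVE4 /dev/disk/by-id/scsi-DRIVE5 \
  /dev/disk/by-id/scsi-DRIVE6 /dev/disk/by-id/scsi-DRIVE7 \
  /dev/disk/by-id/scsi-DRIVE8 /dev/disk/by-id/scsi-DRIVE9
```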

Thanks for the really helpful perspective!

I have this exact setup (R730 and ZFS), but I'll have to disagree on not using TrueNAS. There are features you may want to use, and the logical separation of the zfspool from the rest of the server has been handy. I boot and store my VMs off of SSDs outside of the main NAS pool.

If you want to boot from an NVMe drive on a PCIe card, the server isn't natively capable of it. You need a USB drive to bootstrap it with Clover; I forget the exact technical details. I've had no problems leaving it in the internal USB port over a couple of years so far.

I would strongly suggest not using 900GB 10kRPM drives (and especially not 10 of them) in [current year] when brand-new 8TB hard drives cost $120, and 14+TB recertified drives aren't much more than that. The power costs of 7 more drives than you need for the capacity definitely add up over several years of runtime.

I'm a college student and I've already dropped a lot on the server. I haven't gone too deep into planning upgrades yet aside from the H730, more RAM when I can afford it, and more drives. I'll take the 8 TB drives into consideration though; I'd just have to build that up a lot slower, but it'd give me a lot of space. 10x 8 TB would be fun to have.

You don't need 8 drives when each one is 8 times larger than your current ones. I went from planning for 5+ drives to downsizing to just two drives in a mirror. Then I can expand with another mirror.
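The "expand with another mirror" pattern looks like this (pool and device names are placeholders):

```shell
# Start with a two-drive mirror.
zpool create tank mirror \
  /dev/disk/by-id/scsi-DRIVE_A /dev/disk/by-id/scsi-DRIVE_B

# Later, grow capacity by striping in a second mirror vdev;
# ZFS balances new writes across both mirrors.
zpool add tank mirror \
  /dev/disk/by-id/scsi-DRIVE_C /dev/disk/by-id/scsi-DRIVE_D
```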

Unless you need uptime and want to guarantee an SLA for your own services, you are much better off with a mirror or raidz1. Do regular backups (off-site, incremental) and don't fear the disk failure.
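The "regular off-site incremental backups" part maps directly onto ZFS snapshots and send/receive. A sketch, assuming a dataset `tank/data`, a remote machine `backup-host` with a pool named `backup`, and two existing snapshots (all placeholders):

```shell
# Take a point-in-time snapshot (instant, space-efficient).
zfs snapshot tank/data@2024-03-12

# Send only the delta since the previous snapshot to the
# off-site box; receive replays it into the backup pool.
zfs send -i tank/data@2024-03-05 tank/data@2024-03-12 | \
  ssh backup-host zfs receive backup/data
```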

How do you want to access the files? Browser, SMB, NFS, iSCSI, app like syncthing?

If it were mine, I'd put all the drives in RAID 10, install Proxmox, and either use its containers or create a VM to run docker and give it a big virtual disk.

But the dell controllers aren't very flexible with resizing the RAID. If you want flexibility, consider flashing it to IT mode if possible and then doing zfs, software raid, or LVM groups.

The files will probably be NFS, SMB, or something similar. I have a FreeIPA domain throughout my entire network and this will probably serve as where I put my backups along with whatever other stuff I want. As I intend to expand the cluster, would HBA mode on the H730 be good enough and let ZFS handle it from there?

Google the IBM M1015 HBA; there are a ton on eBay for next to nothing. It used to be the TrueNAS go-to. There are newer HBAs that are faster, but I don't think it will matter for you.

If you do TN, you MUST read the manual and look at their ZFS intro guide. Trust me.

An H330 came with the server and I bought an H730 with it. I'd prefer to use one of those if possible

Just make sure it's HBA mode and it'll be fine. Sometimes called IT mode.

that's generally what I'm hearing so I think I'll give that a shot. I'll keep the H730 on hand as I want to do some testing with it.

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
LVM (Linux) Logical Volume Manager for filesystem mapping
NAS Network-Attached Storage
NVMe Non-Volatile Memory Express interface for mass storage
RAID Redundant Array of Independent Disks for mass storage
SSD Solid State Drive mass storage
ZFS Solaris/Linux filesystem focusing on data integrity


[Thread #598 for this sub, first seen 12th Mar 2024, 19:15]

I'm not sure why, but it seems like TrueNAS in a VM is not recommended (I saw a thread on their forum). I also wanted the ability to add more drives later on, so I've gone with UnRAID, and even though it's only been a few weeks it seems pretty functional; I'm glad I paid the licensing fee. Separately, I'm trying to run Proxmox and OPNsense on one of those fanless N305 boxes and am getting very confused!

Either TrueNAS or UnRAID works as a VM in Proxmox, but there are some caveats. You have to pass the whole HBA/LSI PCIe device through to the VM, so you can't split the server's backplane.

I’m running this way on one of my servers. It’s fine if you pass the entire HBA over (make sure it’s in IT mode for Proxmox).

Alternatively, you can map each drive into the VM via disk-by-id mappings, as I'm doing on this one. I haven't dealt with a drive failure yet, but from what I've read it's just a bit of a headache to re-add the drives later. "Not recommended," but OK if you know what you're doing.
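Both approaches are one-liners on the Proxmox host. A sketch, assuming VM ID `100`, an HBA at PCI address `0000:03:00.0`, and an example drive serial (all placeholders):

```shell
# Option 1: pass the whole HBA through, so the NAS VM
# talks to the raw disks and controls SMART/spindown.
qm set 100 -hostpci0 0000:03:00.0

# Option 2: map an individual drive into the VM by its
# stable by-id path (repeat with -scsi2, -scsi3, ...).
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```

With option 2 the VM sees virtual SCSI disks, which is why it's the "works, but not recommended" route for ZFS inside the guest.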