[Question] Migrating and Upgrading Proxmox to New SSDs on a PowerEdge Safely
With the EOL of PVEv7 and my need for more storage space, I've decided to migrate my VMs to a larger set of drives.
I have PVE installed bare metal on a Dell R720 with RAID1 SSDs. I'm a bit nervous about the migration.
I plan on swapping the SSDs, installing PVE8 from scratch, then restoring VMs from backup.
Should I encounter an issue, am I able to swap the old RAID1 SSDs back in, or once I configure the new ones, are the old drives done for? I'm managing the RAID on a Dell RAID controller.
I also have my data hard drives passed directly into a TrueNAS VM, which supplies other VMs via NFS. Is there anything I should be concerned about once I've migrated, such as errors re-passing the data drives to the TrueNAS VM, or should everything just work again?
Is there a master PVE config file I can download before swapping drives that I can reference when configuring the new PVE install?
This is a question probably better suited for one of the Proxmox communities, but I'll give it a try.
Regarding your concerns about new SSDs and old VM configs: why not upgrade to PVE8 on the existing hardware? This would seem to mitigate your concerns about PVE8 restoring VMs from a PVE7 system. Still, I wouldn’t expect it to be a problem either way.
Not sure about your TrueNAS question. I wouldn't expect any issues unless a PVE8 install brings with it a kernel driver change that is relevant to your hardware.
Finally, there are several config files that would be good to capture for backup. Proxmox itself doesn’t have a quick list, but this link has one that looks about right: https://www.hungred.com/how-to/list-of-proxmox-important-configuration-files-directory/
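If it helps, here's a rough sketch (Python, run as root on the node) of how you could grab those locations into a single tarball before wiping the drives. The path list below is my own assumption of the usual suspects, not an official list, so adjust it against that article and your setup.

```python
#!/usr/bin/env python3
"""Sketch: archive the commonly cited Proxmox config locations before reinstalling.
The path list is an assumption based on the linked article, not an official list."""
import tarfile
from datetime import date
from pathlib import Path

# Commonly cited locations; /etc/pve is the big one (VM configs, storage.cfg, etc.)
CONFIG_PATHS = [
    "/etc/pve",
    "/etc/network/interfaces",
    "/etc/hosts",
    "/etc/fstab",
    "/etc/vzdump.conf",
]

archive = Path(f"/root/pve-config-backup-{date.today()}.tar.gz")

with tarfile.open(archive, "w:gz") as tar:
    for path in CONFIG_PATHS:
        if Path(path).exists():
            tar.add(path)  # directories are added recursively
        else:
            print(f"skipping missing path: {path}")

print(f"wrote {archive} -- copy it off the host before swapping drives")
```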
It would be a good idea to test the backups on 7 and double-check the release notes; they hold just about every caveat. The 7-to-8 upgrade was not horrible. If you have backups, you could always attempt the in-place upgrade after taking them, then, if successful, take new backups, test those, then install the new drives and restore. It depends on how paranoid you feel.
If this were a production system, that is probably the change plan I would follow, but in production I would also be able to migrate VMs. I am not nearly as careful in the home environment.
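If you do try the in-place route first, something like this is the pre-flight check I'd run. The `pve7to8` checker is Proxmox's own tool; wrapping it in a script is just my own habit, and scanning its output for "FAIL" is only a rough heuristic.

```python
#!/usr/bin/env python3
"""Sketch: run Proxmox's pve7to8 checklist before attempting the in-place upgrade.
Scanning stdout for "FAIL" is a rough heuristic, not an official interface."""
import subprocess
import sys

result = subprocess.run(["pve7to8", "--full"], capture_output=True, text=True)
print(result.stdout)

if "FAIL" in result.stdout:
    sys.exit("pve7to8 reported failures -- fix those before upgrading")
print("No hard failures reported; still re-read the 7-to-8 release notes before proceeding.")
```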
I did something similar when migrating to 8. Consumer SSDs suck with Proxmox, so I bought two enterprise SSDs on eBay before the migration and decided to do everything at once. I didn't have all the moving parts you do, though. If you have an issue, you will more than likely not be able to pop the old SSDs back in and expect everything to work as normal. I'm not sure what you're using to create backups, but if you aren't already, I would recommend PBS (Proxmox Backup Server). That way, if there is an issue, restoring your VMs is trivial. As long as that PBS is up and running correctly (make sure to restore a backup before making any changes, to confirm it works as intended), it should be OK. I have two PBS instances, one on-site and one off-site.
PBS will keep the correct IPs of your VMs so reconnecting NFS shares shouldn't be an issue either.
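A minimal sketch of what I mean by testing a restore before you touch anything: pull one archive back onto a spare VMID and confirm it boots. The archive path, VMID, and storage name below are made-up placeholders.

```python
#!/usr/bin/env python3
"""Sketch: restore one backup to a spare VMID as a dry run, then start it.
The archive path, VMID, and storage name are hypothetical placeholders."""
import subprocess

ARCHIVE = "/var/lib/vz/dump/vzdump-qemu-100-2024_09_01-03_00_00.vma.zst"  # placeholder
TEST_VMID = "9100"            # spare ID so the real VM is untouched
TARGET_STORAGE = "local-lvm"  # placeholder storage name

# qmrestore creates a brand-new VM from the archive; --unique regenerates MAC addresses
subprocess.run(
    ["qmrestore", ARCHIVE, TEST_VMID, "--storage", TARGET_STORAGE, "--unique"],
    check=True,
)

# Boot it, poke around, then remove the test copy once satisfied.
subprocess.run(["qm", "start", TEST_VMID], check=True)
```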
I have a remote PBS, but the backups aren't current because there was a connection error. I have Proxmox backups locally on a USB thumb drive. That's what I was going to restore from.
I would say that is not the best way to keep and restore backups, as you are missing the integrity-checking features of a true backup system. But honestly, what really matters is how important the data is to you.
When I say local, I mean automated PVE backups, the same as they would be through PBS, if that makes any difference.
I replaced the drives, installed the newest version of PVE, then restored all of my VMs from the local USB backup. I had to reconfigure a number of things, such as HDD passthrough and some network settings, but in the end the migration was a success.
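For anyone finding this later, the HDD passthrough part boiled down to re-attaching the data disks to the TrueNAS VM by their stable /dev/disk/by-id paths, roughly like this. The VMID, bus slots, and disk IDs are placeholders for my own values; list yours with `ls -l /dev/disk/by-id/`.

```python
#!/usr/bin/env python3
"""Sketch: re-attach passthrough data disks to the TrueNAS VM by stable by-id paths.
VMID, bus slots, and disk IDs below are placeholders."""
import subprocess

TRUENAS_VMID = "200"  # placeholder VM ID
PASSTHROUGH_DISKS = {
    "scsi1": "/dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_1",
    "scsi2": "/dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL_2",
}

for slot, disk in PASSTHROUGH_DISKS.items():
    # `qm set <vmid> -scsiN /dev/disk/by-id/...` is the usual whole-disk passthrough pattern
    subprocess.run(["qm", "set", TRUENAS_VMID, f"-{slot}", disk], check=True)
```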