There are sane people with this many VMs on a personal machine, right? RIGHT?

data1701d (He/Him)@startrek.website to Linux@lemmy.ml – 148 points

Half of these exist because I was bored once.

The Windows 10 and macOS ones are GPU-passthrough-enabled and are what I occasionally use if I have to run a Windows or Mac application. Windows 7 is also GPU-enabled, but it's more a nostalgia thing than anything.

I think my PopOS VM was originally installed for fun, but I've used it along with my Arch Linux, Debian 12 and Testing (I run Testing on the host, but I wanted a fresh environment and was too lazy to spin up a Docker container or chroot), Ubuntu 23.10, and Fedora VMs to test various software builds and bugs, as I don't like touching normal Ubuntu unless I must.

The Windows Server 2022 one I recently spun up to mess with Windows Docker containers (I have to port an app to Windows and was looking at that for CI). That all became moot when I found out GitHub's CI doesn't support Windows Docker containers despite supporting Windows runners (the organization I'm doing it for uses GitHub, so I have to use it).


I guess you should use Proxmox at this point 🤣

There are many many many insane people who are running no virtual machines at all.

With that many Windows (gasp) ones, no... I'm afraid you are not

Hell to update them regularly 👀

Nah, most of the Windows ones don't get updates any more, and the Linux ones can get a script that updates on boot. Takes longer to start up, but it handles the job itself.
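As a sketch of the idea (not my exact script; this assumes a Debian-family guest and uses cron's @reboot hook):

```
# /etc/cron.d/update-on-boot -- run updates once per boot.
# Path and package manager are per-distro; adapt accordingly.
@reboot root apt-get update && apt-get -y upgrade
```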

Yes, but usually they'd have a more robust VM management system to stay sane for long.

I have about twice this many VMs and about this many running at any given time.

I use Qubes btw

What do you use it for? How's the daily-driver experience?

It's my only computer. I couldn't go back to anything else. Every time I double-click Firefox, it opens a new VM. When I close Firefox, the VM is destroyed.

Email is in a separate VM. Email attachments also open in a disposable VM. USB devices are quarantined unless I connect them to a specific VM. It's a game changer.

Cons: I need as much RAM as I used to when I ran Windows. Watching videos is a bit choppy at full screen sometimes. And I can't play any video games.

Sounds like some pretty serious cons

Out of curiosity, why do you like Qubes? Having everything in a VM doesn't sound that great to me.

I get that its main concern is security, but what do you do that demands that level of hardening? I've only ever gotten one virus in my life that I know of, and that was on Windows.

Lol wut? Those pros far outweigh the cons. But I guess I don't care about video games?

I have money on my computer, and I have a company that holds customer info. That's enough of a reason for me to want to protect my shit better than running one big, super-vulnerable system.

Not OP, but I do a lot of architecture and infrastructure work on top of my normal dev work, so keeping everything separated per client has become a pretty important advantage for me personally.

Yeah, I also consult with many different clients. Sometimes those clients need me to install sketchy software. Thank god I can do this in a silo in Qubes, or it could endanger my other clients.

FWIW, I had to tinker a bit to get good video playback. Fedora was always choppy for me for some reason, but Debian is typically smooth with hardware acceleration disabled.

As for the gaming: depending on your setup (I have a desktop and a T480 I keep in sync), you can absolutely run two video cards and do PCI passthrough on one to a gaming VM. I have mine set up with a dedicated NIC and USB card and just use a KVM switch to swap between Qubes and Windows (for now), and it's worked really well. I had to play around a ton to get full speed out of the GPU, though, and it only seemed to work in Windows, so hopefully I'll get that going for a Linux HVM one day.

Absolutely agree there is no going back. I have all of my work stuff entirely hardware-agnostic and a full replica of my work desktop ready to go in a moment should the desktop die. Apart from that, keeping client work isolated has been such a game changer.

I use Debian. Like I said, video is only sometimes choppy. I usually have a few VLC windows open at one time. Something I've learned is that it will use a lot of CPU even if the video is paused. To stop that, I have to manually set the video source to "none" when I pause a video and leave it in the background.

Or just pause the whole VM. Another great Qubes feature.
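From dom0 it's one command each way (the VM name here is just an example):

```
qvm-pause media-vm     # freeze the VM; its CPU usage drops to zero
qvm-unpause media-vm   # resume it right where it left off
```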

> Something I've learned is that it will use a lot of CPU even if the video is paused.

This has been my experience with it on Windows too, so it must be a core VLC thing. If it bothers you, I recommend trying out mpv; I've been using it for more than a year and would never go back. If you need more than the on-screen controller and key combos, there are quite a few proper GUI players built on mpv.
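If choppiness is the complaint, hardware decoding is the first thing to try; something like this (the flag is a standard mpv option, the filename is a stand-in):

```
mpv --hwdec=auto video.mkv   # let mpv pick a working hardware decoder, if any
```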

It's only insane if you have them all running at once.

I mean, people collect all sorts of weird shit

The biggest reason I don't want to maintain so many VMs is all the maintenance and updates that come with doing so.

And that's why there's a "-2" on the end of that Arch VM – there was one before it that I borked while trying to update it, because I hadn't used it in so long.

How much disk space have you got??

It's a terabyte SSD. I've currently got 136 GB left on it. I think part of it is that they're auto-expanding qcow2 images, so they don't actually take up the full space provisioned for them.
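You can see the thin provisioning with qemu-img; a quick sketch with a made-up filename:

```
qemu-img create -f qcow2 win10.qcow2 100G   # the guest sees 100G
qemu-img info win10.qcow2                   # "disk size" = actual host usage
```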

Not VMs, but I have way more Docker containers. I run most things as containers, which keeps the base OS nice and clean and free from dependency hell.

I probably have a couple more Linux/BSD VMs than this (some with GPU passthrough, and one or two for ARM cross-building and so on) but only 2 Windows VMs – the only 2 I have legitimate licenses for.

But am I normal? Most would disagree. 😅

10, plain 11, 7, and, funnily enough, Server 2022 are all legit licenses (I can get a key for Server through my university). Actually, I'm pretty sure for the 11 one, I upgraded a Windows 7 VM to 10, then to 11.

Every other Windows version that needs it (11 LTSC, 8.1, and Vista), I just temporarily host a phony KMS server whenever it needs to be reactivated.

I apologize for talking so much about Windows on a Linux sub. May Stallman break into my house and give me 10 lashes as I slumber.

The Windows XP and Windows 7 licenses I have are also from my university, from a long, long time ago. 😃

I always remove a virtual machine as soon as I'm done with it, and reinstall if I need to use it again.

Yeah.

My home server runs that many, but it's a monster dual-Xeon.

The FreeBSD instances have a ton of jails, and the Linux VMs have a ton of LXC and Docker containers.

It's how you run many services without losing your mind.

I have about as many at work, too.

I use one VM for each iteration of my automation software. Our factory has machines ranging from the '90s to the present day, and they use different software environments to be programmed. To minimize the risk of data loss, we keep one virtual machine per software environment; that way, if one gets corrupted, the damage is contained. It also makes them easier to migrate to new computers when we need to replace ours.

I've had physical ESXi servers running this many VMs simultaneously, and I can totally see why a hobbyist or dev would have a need for this many VMs on standby. You are sane, yes.

I run a different LXC on Proxmox for every service, so it's a bunch. There's probably a better way to do it, since most of those just run a Docker container inside them.

I wouldn't call that terribly efficient.

I would do 2-3 VMs with Docker and maybe a network share.

Why mix Docker and VMs? Isn't Docker sort of like a VM – an application-level VM, maybe? (I obviously do not understand Docker well.)

I like to run a hypervisor host as just that: a hypervisor host. The host being stable is important, and keeping it to that role also reduces the attack surface.

An LXC per service is somewhat overkill. A single Docker host running in an LXC could likely run all the Docker containers.

I mentioned it above, and not to spam, but there might be a use case that requires a different host distribution. Network isolation might be another reason. For 90% of use cases, you're correct.

Serious answer: I'm not sure why someone would run a VM just to run a single container inside it, aside from the VM providing volumes (directories) to the container. That said, VMs are perfectly capable of running containers, and can run multiple containers without issue. For work, our GitLab instance has runners that are VMs that just run containers.

Fun answer: have you heard of Docker in Docker?
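A minimal taste, using the official docker:dind image (the container name is arbitrary):

```
docker run --privileged -d --name dind docker:dind   # dind requires --privileged
sleep 5                                              # give the inner daemon a moment
docker exec dind docker run --rm hello-world         # a container inside the container
```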

LXC is much more lightweight than a VM, so there's not as much overhead. I've done it this way so I can reboot a container (or recover when something goes wrong with an update) without disrupting the other services.

It also keeps things consistent, since I have some services that don't run in Docker. One service per LXC.

I have a real use case! I have commercial server software that can run on Ubuntu or RHEL-compatible distributions. My entire environment is Ubuntu. They also allow the server software to run in a Docker container, but the container must be running RHEL. Furthermore, their license terms require me to build the Docker container myself to accept the EULA, and the Docker image must be built on RHEL! So I have an LXC container running Rocky Linux that gets Docker installed for the purpose of building RHEL 8-based Docker images. It's a total mess, but it works! You must configure nested security, because this doesn't work by default.

Instructions here: https://ubuntu.com/tutorials/how-to-run-docker-inside-lxd-containers#1-overview
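In short, per that tutorial, it comes down to one LXD flag (the container name here is hypothetical; on Proxmox the rough equivalent is `pct set <vmid> --features nesting=1`):

```
lxc config set docker-builder security.nesting true   # allow Docker inside the container
lxc restart docker-builder
```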

On the joke, define "sane". 😬

On a serious note, I think there are valid reasons to have several VMs other than "I was bored". In my case, for example, I have a total of 7 VMs, where 2 are miscellaneous systems to test things out, 2 are for stuff that I can't normally run on Linux, 2 are offline VMs for language dictionaries, and 1 is a BlissOS VM with Google programs in case I can't/don't want to use my phone.

Looks normal for testing stuff. I have 5ish in my desktop hypervisor.

Interestingly enough, there's a project I've found that runs Windows in a Docker container as a VM.

https://github.com/dockur/windows

I run a Windows 10 LTSC VM that way, to run things like Blue Iris for my security cameras and some stuff to track my solar installation.
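Launching it looks roughly like this (flags paraphrased from the project's README; check there for the current set):

```
# Needs KVM and network privileges; port 8006 serves a web viewer
# so you can watch the automated Windows install.
docker run -d --name windows \
  --device=/dev/kvm --cap-add NET_ADMIN \
  -p 8006:8006 \
  dockurr/windows
```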

GPU passthrough has always been one of those exciting ideas I'd love to dive into one day. My current GPU, being a little older, has only 4 GB of VRAM. Oh, the joys of being a budget PC user. Thankfully it's more of a "would be nice" rather than an "actually need"...

Very few people need it, but it's awesome, a lot of fun, and lets you spend more time in Linux instead of dealing with Windows. The VFIO subreddit and the Arch wiki are great resources. I have GPU, USB, and Ethernet passthrough on my Ubuntu machine and it works great, but I needed the Arch wiki to really figure out what I was doing wrong when I first set it up. Level1Techs is also a good resource on YouTube and their forums, because they are big into VFIO and SR-IOV.

Next time you get a PC, make sure to look for more PCIe lanes and bifurcation support on your motherboard. Gen 4 is a great option because it generally has enough lanes, and the RAM and SSDs are much cheaper than Gen 5. GPU choice doesn't matter much, but if you've got AMD, watch out for the reset bug: you can start a VM, but once you quit it, the card's state is unavailable for further use (e.g. a second VM session, or reopening your DE if you're using a single-GPU setup) until you restart the host. There are some workarounds, but personally I'd avoid it if possible. Onboard graphics (Intel Iris or an AMD APU) are recommended. Older hardware can get cheap, so good luck saving up if this is something you want to do!
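If you want to scope out your current machine first, the usual sanity check is whether the board gives you clean IOMMU groups, since a GPU can only be passed through with its whole group. This is (roughly) the script the Arch wiki passes around for that:

```
#!/bin/bash
# List every IOMMU group and the devices in it.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```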

I did this with Qubes a year ago and haven't had any issues, apart from figuring out the right flags to get full performance; without them, the GPU would cap at around 30% under load even with low CPU usage.

You're kind of at the mercy of what your motherboard and BIOS will allow. Mine I had to cheese a little and disable the PCI device on boot, so I get to decrypt my disk with no screen, lol, but it works!

My motherboard is from a stock Dell from around 2012, so I doubt performance would be at all good. That's even if it worked in the first place...

I have about that many. Looks good to me! I have two Windows VMs: one for work and presentations, one for games and Adobe. A bunch of random Linux VMs from trying to get a FireWire card to work, and a Windows 7 VM for the same reason. I've also got several Linux VMs for trying out new versions of Fedora, Ubuntu, or Debian. A couple of servers. Almost none of them are ever turned on, because my real virtualized workloads run in Docker or LXC! I never could get a Mac VM to work, but I have an AMD CPU and a MacBook, so it's not too high a priority.

Screen sharing from Linux is amusing, though; so far I've yet to have anyone even mention it (Hyprland, so it looks very different from Windows).

If I could get VirtualBox to work* on my laptop or find the drive to learn QEMU, I would have plenty on there. For now I'm just stuck with plenty on my desktop running Win10.

*I have installed it a few times on my Debian-based distro, but I swear I do nothing to it and it destroys itself every time. It works fine one day; then the next, I turn on my laptop, and after the only change being that I created and ran a VM, it decides to hate me and the program won't even boot. I think I'm just cursed.

What about the Virt-Manager GUI, which is what I use here? It's a frontend for QEMU/libvirt, and it's honestly not that difficult.
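And if you'd rather script it than click through the GUI, virt-install drives the same libvirt stack; a sketch with made-up names and sizes:

```
# --osinfo replaces the older --os-variant flag on recent versions.
virt-install --name debian-test --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom ~/isos/debian-12.iso \
  --osinfo debian12
```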

Well, I do, but I have a machine with 3/4 of a terabyte of memory on it.

Work scraps are great sometimes.

How are you running the macOS VMs? The machine I have is a cheese grater, so that makes it easier.

Are you running macOS or Linux as your host? My MacBook is M1, and I found the performance running ARM Windows and ARM Fedora via UTM (QEMU) to be pretty good.

On the cheese grater (2019 Mac Pro) it's a little convoluted. During COVID times it was my single-box lab, since it had so much memory (768 GB). So I was running nested ESXi hosts and then VMs under those. I also have an M1 MacBook Pro on which I had Parallels running ARM VMs (mostly macOS, Windows, and a couple of Debian installs, I think).

I have been looking at VMware alternatives at work, so I've been playing around with hypervisors.

I do this stuff for a living, but I also do it at home for fun and profit. Ok, not so much profit. Ok, no profit, but definitely for the fun. And because I love large electric bills.

That's a beast of a Mac. Wake-on-LAN is your friend. I have the same problem with my Threadripper. I wrote a script that issues a WOL command to start/unsuspend my Ubuntu machine, so I can turn it off when not in use. It's probably a $70/month difference for me. Most of my virtualization is on Linux, but I've moved away from VMware because QEMU/KVM has worked so well for me. You should check out UTM on the Mac App Store and see if that solves any of your problems.

ETA: https://mac.getutm.app/
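The wake-up half of that script is basically one line, assuming the common wakeonlan utility and a placeholder MAC address:

```
wakeonlan 00:11:22:33:44:55   # send the magic packet to the target NIC
```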

Man this thread has taught me all sorts of things. I will definitely check out UTM. Thanks for that!!

I found a prebuilt OpenCore for KVM. https://github.com/thenickdude/KVM-Opencore

I then changed the config.plist to make it identify as a 2019 Mac Pro.
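For anyone following along, that lives in the SMBIOS section of OpenCore's config.plist; roughly (keys per OpenCore's documentation, values illustrative):

```
<!-- Under PlatformInfo -> Generic in config.plist -->
<key>SystemProductName</key>
<string>MacPro7,1</string>
```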

Ok, I'll have to try this. The weird thing is my little test Proxmox server is a 2013 trash can Mac Pro, so this would be like a Hackintosh running on Mac hardware. Would that technically be a Hackintosh? I'm not really sure. According to the Apple license, you can virtualize macOS if it's running on Mac hardware; I'm not sure if that requires macOS as the hypervisor. Regardless, this is not something I knew about. Very cool, thanks for the info.

Bahah, I have like 7, but I'm concerned by the fact that I've probably forgotten the password to half of 'em xD

For Windows, I either use a MinGW toolchain from mxe.cc or just run the MSVC compiler in Wine. Works great for standard C and C++ at least, even when you use Qt or other third-party libraries.
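The MXE route is roughly this (the target triplet is MXE's naming for its default 32-bit static toolchain; hello.c is a stand-in):

```
# Build the cross-compiler once (slow); add
# MXE_TARGETS=x86_64-w64-mingw32.static for a 64-bit toolchain.
make -C mxe gcc
mxe/usr/bin/i686-w64-mingw32.static-gcc hello.c -o hello.exe
```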