I went for a much simpler approach lately as I downscaled my hardware for efficiency.
I run NixOS on the bare metal. It gives me declarative system management, much like Kubernetes would. On top of that, I run libvirt as a hypervisor. In other scenarios I’d use tinyvmm and cloud-hypervisor, but I found qemu way better for the variety of homelab workloads, and libvirt is pretty straightforward.
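To give a flavor of what “declarative” means here, a minimal sketch of the host config (stock NixOS options; the hostname and user are placeholders, not my real setup):

```nix
# configuration.nix on the bare-metal host -- illustrative sketch only
{ config, pkgs, ... }:

{
  networking.hostName = "hypervisor";            # placeholder name

  # libvirt + qemu as the hypervisor, enabled declaratively
  virtualisation.libvirtd.enable = true;

  # virt-manager / virsh for the occasional manual poke at a domain
  environment.systemPackages = with pkgs; [ virt-manager ];

  users.users.me.extraGroups = [ "libvirtd" ];   # placeholder user

  system.stateVersion = "24.05";
}
```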
Some VMs have PCI passthrough, e.g. my RouterOS VM gets a bunch of NICs directly, and some have various funny network topologies. Libvirt used to be a pain in that regard, but it’s actually fine with NixOS, because you manage both sides of the networking stack in declarative configuration.
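Roughly what the host side of that looks like, as a sketch: the PCI IDs and interface names below are placeholders, but the options are the standard NixOS ones.

```nix
# Host-side plumbing for passthrough and VM networking (sketch, placeholder IDs)
{
  # IOMMU + vfio-pci so the NICs can be handed straight to the RouterOS VM
  boot.kernelParams = [ "intel_iommu=on" "iommu=pt" ];
  boot.initrd.kernelModules = [ "vfio" "vfio_pci" "vfio_iommu_type1" ];
  boot.extraModprobeConfig = ''
    options vfio-pci ids=8086:1533,8086:1539   # placeholder NIC IDs
  '';

  # The "other side" of the VM networking: a bridge declared in the same
  # config instead of hand-edited libvirt network XML.
  networking.bridges.br-lan.interfaces = [ "enp3s0" ];
  networking.interfaces.br-lan.useDHCP = true;
}
```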
I run NixOS on the VMs too (now for the sake of easy upgrades), and I have a bit of a split between running services natively (systemd is very good at “containerizing” things nowadays) and using Docker (mostly out of laziness, e.g. Elastiflow was easier to deploy this way). Finally, I have a single dockerized Ubuntu that’s more like a VM (as in, I never had a Dockerfile for it, it’s fully stateful) running the Matter home automation bits, because I gave up on properly containing the Matter Python stack and went for the easy way out.
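The “containerizing” part is just systemd’s own sandboxing directives; roughly how a native service ends up looking inside a VM’s config (the service and binary are made up for illustration, the directives are standard systemd):

```nix
# A natively run service locked down with systemd isolation knobs --
# hypothetical service, placeholder binary (pkgs.hello stands in for the real one)
{ pkgs, ... }:

{
  systemd.services.some-exporter = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      ExecStart = "${pkgs.hello}/bin/hello";
      DynamicUser = true;        # throwaway uid/gid, nothing to manage
      ProtectSystem = "strict";  # read-only / except explicitly allowed paths
      ProtectHome = true;
      PrivateTmp = true;
      PrivateDevices = true;
      NoNewPrivileges = true;
    };
  };
}
```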
Now, a word about alternatives.
I used to run Ubuntu. No more. Upgrading the OS is always a huge pain, even if everything is in Docker. I want my OS to be managed in a config file, and I want to be able to roll back to the previous state easily.
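On NixOS the rollback part comes almost for free: every rebuild is a new boot-menu generation and the old ones stay around, so a bad upgrade is one nixos-rebuild switch --rollback (or a reboot into the previous entry) away. A sketch with systemd-boot; GRUB has an equivalent option:

```nix
{
  boot.loader.systemd-boot.enable = true;
  # Keep the last N generations in the boot menu so rolling back is trivial.
  boot.loader.systemd-boot.configurationLimit = 10;
}
```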
I used to run k3s, but even though it is much thinner than k8s, it is still quite RAM-hungry, and I just don’t want to pay for that. Besides, complex network topologies are often non-trivial to set up because of how its networking works, and Multus is a world of pain.
I used to run different hypervisors for the VMs (kubevirt, tinyvmm, a bunch of others). I went back to libvirt mostly because it’s straightforward to tune the very specific qemu bits I care about in the homelab. I overprovision CPU somewhat, so I want my quotas set up extremely precisely, so that the right workloads get sacrificed under contention.