There are devices like the Netgear LM1200 that can do it inline by themselves.
I have that device, but configured as a second gateway; my firewall manages the failover based on packet loss and latency on the primary link.
I run NUT (Network UPS Tools) on a Pi.
In addition to the UPS, an LTE failover. I've had my Comcast connection crap out for hours.
Borg, with rsync.net if you want to keep an off-site copy.
Naemon and Graylog.
Roundcube
I'd still use a NAS for storage and another system for VMs. Unless you want your VM server to have an array itself, but then you have to manage that on the same server.
It works the same either way. Borg does a lot of different backups on my home network, and I also have more than just the Borg backups that I want off-site, so one rclone of everything from that NAS share once everything else is done makes more sense than duplicating Borg everywhere. The rclone'd copies can be used directly, just as if Borg had put them there itself.
That is rsync.net’s entire business model.
I still rclone my Borg repos there instead of relying on snapshots though.
I concur with most of your points. Docker is a nice thing for some use cases, but if I can easily use a package or set up my own configuration, then I will do that instead of using a Docker container every time. My main issues with Docker:
Beyond WAF features, if you have multiple servers behind NAT, a reverse proxy can serve them all from a single exposed public address. You can also do rewrite rules on the proxy instead of on each server.
https://radicale.org/