I’ve dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long; I usually notice issues myself. I self-host a custom new-tab page that I use across all my devices, and between that, the Nextcloud clients, and my Home Assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.
Other than that I run fail2ban, and I have my VPS configured to send me a text message/notification whenever someone successfully logs in to a shell over SSH, just in case.
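For anyone wanting to set up something similar, one common way to hook it is PAM’s pam_exec module plus a small script. Here’s a minimal sketch (not my exact setup): it assumes you add a `session optional pam_exec.so` line to /etc/pam.d/sshd, and the NOTIFY_URL is a made-up ntfy-style push endpoint standing in for whatever SMS/notification gateway you actually use.

```python
#!/usr/bin/env python3
"""Hypothetical notify-on-SSH-login hook.

Invoked by pam_exec from /etc/pam.d/sshd, e.g.:
    session optional pam_exec.so /usr/local/bin/ssh-login-notify.py
pam_exec exposes the login details as environment variables.
"""
import os
import urllib.request

# Assumed push endpoint (e.g. a self-hosted ntfy topic); swap in your own gateway.
NOTIFY_URL = "https://ntfy.example.com/ssh-logins"


def main() -> None:
    # Only fire when a session is opened, not when it closes.
    if os.environ.get("PAM_TYPE") != "open_session":
        return
    user = os.environ.get("PAM_USER", "unknown")
    rhost = os.environ.get("PAM_RHOST", "unknown")
    msg = f"SSH login: {user} from {rhost}"
    req = urllib.request.Request(NOTIFY_URL, data=msg.encode(), method="POST")
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        # Never block the login just because the notification endpoint is down.
        pass


if __name__ == "__main__":
    main()
```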
Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for SSH, the one account that can be used over SSH has a non-obvious username that would also have to be guessed before an attacker could even try passwords, and fail2ban does a good job of banning IPs after a few failed attempts.
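That username pattern is easy to check on your own logs. A quick sketch that tallies the usernames bots try against sshd, assuming the Debian/Ubuntu-style /var/log/auth.log location and the standard OpenSSH "Invalid user" / "Failed password" message formats:

```python
#!/usr/bin/env python3
"""Rough tally of usernames attempted against sshd, from /var/log/auth.log."""
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # Debian/Ubuntu default; adjust for your distro

# Matches lines like:
#   "Invalid user admin from 203.0.113.7 port 51234"
#   "Failed password for root from 203.0.113.7 port 51234 ssh2"
PATTERNS = [
    re.compile(r"Invalid user (\S+) from"),
    re.compile(r"Failed password for (?:invalid user )?(\S+) from"),
]

counts = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        for pattern in PATTERNS:
            match = pattern.search(line)
            if match:
                counts[match.group(1)] += 1
                break

# Counts are per log line, so a single attempt with an invalid user can show up
# twice; that's fine for a rough ranking of what bots are guessing.
for user, hits in counts.most_common(20):
    print(f"{hits:6d}  {user}")
```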
If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.
I have a similar setup. Even with hard drives and slower SSDs in a NAS, 10G has been beneficial. 2.5G would probably be sufficient for most of what I do, but even a few years ago, when I bought my used Mellanox SFP+ cards on eBay, it was basically just as cheap to go full 10G (although 2.5G Ethernet ports are a bit more common to find built in these days, so depending on your hardware, that might be a cheaper place to start). And even from a network congestion standpoint, having my own private link to my NAS is really nice.