Each to their own. Immich devs themselves strongly recommend not relying on Immich as a backup solution.
I don’t, therefore I don’t consider it critical enough to worry about.
Lol - Immich is one of those stacks that I let Watchtower auto-upgrade. I don’t consider it mission critical if it breaks and it takes me a day or so to notice it (all my photos and videos are also backed up using Syncthing).
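For anyone wondering, the Watchtower side of that is just a label opt-in. A minimal sketch (service names and the interval are illustrative, not my exact stack):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # only touch containers that opt in via label; check daily, prune old images
    command: --label-enable --cleanup --interval 86400

  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    labels:
      - com.centurylinklabs.watchtower.enable=true
```

Anything without the label gets left alone, which is how I keep the genuinely critical stacks on manual upgrades.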
I’ve gotten used to just going to the repo if the error message for the container doesn’t immediately lead me to the fix.
Backblaze don’t have a POP in my country, unfortunately.
I use rclone, with encryption, to S3. I have close to 3TB of personal data backed up to S3 this way - photos, videos, paperless-ngx (files and database).
Only readable if you have the passwords configured on my singular backup host (a RasPi), or stored in Bitwarden.
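If anyone's curious, the rclone side is just an S3 remote wrapped in a crypt remote. A hedged sketch (remote names, bucket, and region are made up, and the passwords have to go through `rclone obscure` first):

```ini
# ~/.config/rclone/rclone.conf
[s3-backend]
type = s3
provider = AWS
region = ap-southeast-2

[s3-crypt]
type = crypt
remote = s3-backend:my-backup-bucket
filename_encryption = standard
directory_name_encryption = true
password = <output of rclone obscure>
password2 = <output of rclone obscure>
```

Then the backup itself is just `rclone sync /srv/photos s3-crypt:photos`. Everything lands in the bucket encrypted, filenames included.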
10 (11?). You shall put critical thinking before assumption; empathy before judgment.
Tossing in my vote for Proxmox. I’m running OPNsense as a VM without any issues. I did originally try pfSense, but didn’t like it for some reason (I genuinely can’t recall what it was).
Either way, Proxmox virtual networking has been relatively easy to learn.
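For anyone new to it, Proxmox networking is basically just standard Linux bridges. A sketch of what a VLAN-aware bridge looks like in /etc/network/interfaces (NIC name and addresses are examples):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With that in place, each VM or LXC just picks a VLAN tag on its virtual NIC, which is how the OPNsense VM sees all the VLANs on one trunk.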
OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.
But, for my self-hosted needs, Proxmox has been an absolute boon for me (I moved to it from a pure RasPi/Docker setup about a year ago).
I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it. The former requires investment, and the latter is pretty much a one-way decision (at least, not an easy one to roll back from).
Something I need to ponder…
I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.
I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.
No headaches here - running a two node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It’s been flawless for me.
Might be time to look into Proxmox. There’s a fun weekend project for you!
It’s about fitness for purpose, IMO.
I recently migrated most of my homelab to Proxmox running on a pair of x86 boxes. I did it because I was cutting the streaming cord, and wanted to build a beefy Plex capability for myself. I also wanted to virtualise my router/firewall with OPNsense.
Once I mastered Proxmox, and truly came to appreciate both the clean separation of services and the rapid prototyping capability it gave me, I migrated a lot of my homelab over.
But, I still use RasPis for a few purposes: Frigate server, second Pi-hole instance, backup Wireguard server. I even have one dedicated to hosting temperature sensors, reed switches, and webcams for our pet lizard’s enclosure.
Each has their place for me.
Ah, nice one. Still, a bit annoying that it’s opt out, rather than opt in.
Saved me the effort, thanks. Although, couldn’t you just block the container from talking outside your network? I can’t see why I’d need a memo app (server) to have access to the internet.
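You can. A sketch of the Docker-level way, assuming a reverse proxy in front (images and names are illustrative): put the app on an `internal` network shared with the proxy, and give only the proxy an outward-facing network.

```yaml
services:
  proxy:
    image: traefik:v3
    ports:
      - "443:443"
    networks:
      - frontend
      - backend

  memos:
    image: neosmemo/memos:stable
    networks:
      - backend   # reachable only via the proxy; no outbound internet

networks:
  frontend: {}
  backend:
    internal: true   # Docker won't NAT this network out to the world
```

The app still works fine on your LAN through the proxy, but it can't phone home.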
Ha! Nice one.
Forked, and mirrored to my Forgejo instance.
Ah - I only have the Chromecast GTVs. Good to know I don’t need to pay for an upgrade then!
Lol - not my first rodeo. I’m blocking dns.google as well, and I’m 99.999% certain Google won’t have coded Chromecasts to use anyone else’s DNS servers.
Really? I run several Chromecasts, and I block their access to all DNS services except my internal Pi-holes. They work just fine.
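The trick is a NAT rule on the router that quietly redirects any hard-coded DNS back to the Pi-hole. In iptables terms it looks roughly like this (interface and IPs are examples; OPNsense does the equivalent with a port-forward rule in the GUI):

```sh
# Redirect all DNS from the IoT VLAN to the internal Pi-hole
iptables -t nat -A PREROUTING -i eth0.20 -p udp --dport 53 \
  -j DNAT --to-destination 192.168.1.53
iptables -t nat -A PREROUTING -i eth0.20 -p tcp --dport 53 \
  -j DNAT --to-destination 192.168.1.53
```

The Chromecast thinks it's talking to 8.8.8.8; it's actually getting answers from the Pi-hole.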
Just the stuff that’s being accessed directly, so if anything’s only going to be accessed via your Traefik server from outside, leave them where they are. That way, any compromise of your Traefik server doesn’t let them move laterally within the same VLAN (your DMZ) to the real host.
If you’re starved for RAM, there’s nothing wrong with a shared instance, as long as you’re aware of the risk of that single instance bringing down multiple services.
I run a three node Proxmox cluster, and two nodes have 80GB RAM each, so my situation is very different to yours. So, I have four Postgres instances: