
  • RAID0 (striping both drives into one larger volume) is not really tiered storage. You would want RAID1 (each drive is a copy of the other drive), but even that isn’t a backup. How will you be monitoring the drives so that you know if one of them actually fails?

    I don’t think the RPi has a new enough kernel, but with bcachefs you can do tiered storage: combine the SSD and the hard drives into a single filesystem, make the SSD the read/write cache, and give the whole pool replicas=2, so that if one drive dies you still have a copy on the other. Do be aware this setup is still not a backup, however.
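
    Roughly, the format command would look something like this (a sketch only; the device names and mount point are placeholders, and it assumes a kernel new enough to ship bcachefs, 6.7+):

      # one SSD as the foreground/promote (cache) tier, two HDDs as the
      # background tier, with everything stored twice across the pool
      bcachefs format \
          --label=ssd.ssd1 /dev/sda \
          --label=hdd.hdd1 /dev/sdb \
          --label=hdd.hdd2 /dev/sdc \
          --foreground_target=ssd \
          --promote_target=ssd \
          --background_target=hdd \
          --replicas=2

      # mount the whole pool as a single filesystem
      mount -t bcachefs /dev/sda:/dev/sdb:/dev/sdc /mnt/pool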


  • It does make sense. Thank you. I appreciate the link!

    However, my cloud usage is purely as a proxy/load balancer; none of my cloud providers hold any actual data. They’re just routing traffic, and all data/processing is on premises. What I’m interested in is how to set up something like what you describe, but on premises as well. From a design standpoint, if I wanted to protect myself from a ransomware attack, my cloud backups would eventually be lost too, because they’re a mounted filesystem during a backup, so the ransomware would encrypt them along with everything else. I don’t know how to wrap my head around handling this from a storage-design perspective; the specific tools I can figure out. How does one create a recovery point and keep it safe from something like this? Just image the entire filesystem from a live-booted offline environment? Feels like a chicken-and-egg problem to me.
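
    For concreteness, the kind of thing I imagine might work is an append-only backup target: clients can push new snapshots but can’t delete or overwrite old ones, so ransomware on a client can’t destroy the history. A rough sketch, assuming restic and rest-server (hostnames and paths are placeholders, not my actual setup):

      # on a dedicated backup host: serve the repository append-only
      rest-server --path /srv/restic --append-only

      # on each client: push backups over the REST protocol
      restic -r rest:http://backuphost:8000/ init
      restic -r rest:http://backuphost:8000/ backup /data

      # pruning/retention runs only on the backup host itself, with
      # credentials the clients never hold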


  • I’ve thought about how I could handle disaster recovery for my homelab environment, but I haven’t come to any good solutions. For example, say my main concern was being hit by ransomware: I can’t just recover from a regular backup, since I’m not sure how to make a backup without that backup just being encrypted alongside everything else. I mainly back everything up to my file server, which is then synced to the cloud, so in that setup my cloud backups would be lost as well.

    Would you have some starting points on how others handle disaster recovery? I’d like to avoid manually making an offline backup, because inevitably I’d forget to do it, which would make it useless anyway.


  • Probably not the ‘recommended’ way, but I use a self-signed cert for each service I’m running, generated dynamically on each run, with nginx as a reverse proxy. Then I use HAproxy and DNS SRV records to connect to each of those services. HAproxy uses a wildcard cert (*.domain.tld) for the real domain and host mapping for each subdomain (service1.domain.tld).

    This way every service’s traffic is encrypted between HAproxy and the actual service, and the frontend traffic is encrypted with a browser-valid cert. I only need to actually manage one cert, the HAproxy one. It’s worked great for me for a couple of years now. A rough sketch of the HAproxy config is at the end of this comment.

    Edit: I’m running this setup for about 50 services, though they’re mostly accessed over LAN/VPN.
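
    Roughly what the HAproxy side looks like (a trimmed sketch, not my actual config: the cert path, backend address, and “service1” are placeholders, and I’m showing a static server line instead of the SRV-record lookup for brevity):

      frontend https_in
          bind *:443 ssl crt /etc/haproxy/certs/wildcard.domain.tld.pem
          # pick the backend from the requested hostname
          use_backend service1 if { hdr(host) -i service1.domain.tld }

      backend service1
          # re-encrypt to the self-signed nginx in front of the service;
          # its cert is self-signed, so skip verification
          server s1 192.168.1.10:8443 ssl verify none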


  • The main thing for this is cost. I don’t really know what performance specs I would need for a VPS to have reasonably good network performance with ~10 devices, though I’m guessing I’ll need something ≤10 Gbps. So maybe $25-$30/month, depending on who I buy a VPS through?

    Would EACH of your devices have their own dedicated gigabit connection to your server? Even so, are you the only user, or is this for some family members too? If it’s just you, nine times out of ten a basic $5-or-less gigabit VPS will do. You’d much more often be limited by your own outbound connection than by the VPS’s networking, by a considerable margin. Most things you connect to won’t saturate even a gigabit link (even ten devices each pulling a 50-100 Mbps stream add up to about 1 Gbps), so you’d be well under your bandwidth requirements.