Are you using CloudFront?
I switched from docker compose to pure Ansible for deploying my containers. Makes managing config and starting containers across multiple hosts super easy. I considered virtualizing too but decided it didn’t offer me enough advantages. If I ever have an issue with the host OS I just reinstall using a preseed file and then rerun my playbooks and it’s ready to go.
I started using Checkmk recently after it was mentioned here and I really like it. I’d used Zabbix a bit but was annoyed at how much work it took to get it to do what I want. Checkmk was a lot better right out of the box.
This is the right answer. A better backup strategy is an actual backup strategy. Snapshots, drive mirroring, rsync copies, etc aren’t really backups.
True. I did some rough math when I needed to right-size a UPS for my home server rack and estimated that running a Pi 4 for a year would cost me about $8 worth of electricity, versus about $40 for an x86 desktop. That’s not insignificant if you’re not going to use the extra performance an x86 PC can offer.
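The back-of-the-envelope version, assuming roughly 7 W average draw for the Pi 4, about 35 W for the desktop, and $0.13/kWh (plug in your own wattages and rate):
  Pi 4:     7 W x 8760 h ≈ 61 kWh/year x $0.13 ≈ $8/year
  Desktop: 35 W x 8760 h ≈ 307 kWh/year x $0.13 ≈ $40/year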
This exactly. If you already have Pis they’re still great. Back when they were $35 it was a pretty good value proposition, with none of the power or space requirements of a full-size x86 PC. But at $80-$100 it’s really only worth it if you actually need something small, or if you plan to actually use the GPIO pins for a project.
If you’re just hosting software, a several-year-old used desktop will outperform it significantly and cost about the same.
I really like Kopia. I back up my containers and workstations with it and replicate to S3 nightly. It’s great.
I’ve had a lot of good luck with Syncthing. If you’re just syncing files locally you can disable NAT traversal.
In my opinion trying to set up a highly available fault tolerant homelab adds a large amount of unnecessary complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup and restore process so that if anything goes wrong you can just restore from a backup or start containers on another node.
I configure and deploy all my applications with Ansible roles. It can programmatically create config files, pass secrets, build or start containers, cycle containers automatically after config changes, basically everything you could need.
Sure it would be neat if services could fail over automatically but things only ever tend to break when I’m making changes anyway.
Once I got to the point where I was running a ton of containers, I’d occasionally run into cases where a maintainer wouldn’t fix issues as fast as I’d like, so I started building more of the images myself, which was a lot easier than I’d anticipated.
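If you’re already deploying with Ansible, the build itself can be a single task. A minimal sketch using community.docker.docker_image; the image name and build path are just placeholders for wherever your Dockerfile lives:
  - name: Build my app image from a local Dockerfile
    community.docker.docker_image:
      name: my-app                       # placeholder image name
      tag: latest
      source: build
      build:
        path: /home/me/docker/my-app     # placeholder dir containing the Dockerfile
From there the container gets started with community.docker.docker_container, just like the samba example further down the thread.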
Letting people manage their own accounts is super nice too. Easy self-service password resets without me having to do it for them.
I’ll second this and add a bit more: I’d recommend DuckDNS’s dynamic DNS service. It’s free (donate if you can!) and fairly simple to set up. I run it on my router since it has built-in support, but it’s easy to run in a docker container too.
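If you go the container route, it’s just a small always-running updater. A rough sketch in the same Ansible style as elsewhere in this thread, using the linuxserver.io duckdns image; the subdomain and token are placeholders, and the env variable names are worth double-checking against the image’s docs:
  - name: Run duckdns updater container
    community.docker.docker_container:
      name: duckdns
      image: lscr.io/linuxserver/duckdns
      env:
        TZ: "America/Chicago"
        SUBDOMAINS: "myhomelab"          # placeholder DuckDNS subdomain
        TOKEN: "{{ duckdns_token }}"     # placeholder var, e.g. pulled from Ansible Vault
      restart_policy: unless-stopped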
Check how many SATA power connections are available. My Optiplex 3620 only has four, and I wasn’t confident that I could expand that without overwhelming the PSU. I bought a 24-to-8-pin adapter so I could use a normal, non-proprietary power supply.
Consider those costs if you’re thinking you might install a lot of drives.
You have to have one of their proprietary boxes hosting the Protect software.
This was the biggest bummer for me and what convinced me not to get into the UniFi ecosystem. I already have a robust storage solution at home; I just want to point the cameras at a docker container running on my host with all the storage.
Sure. Below is an example playbook that is fairly similar to how I’m deploying most of my containers.
This example creates a folder for samba data, creates a config file from a template and then runs the samba container. It even has a handler so that if I make changes to the config file template it will cycle the container for me after deploying the updated config file.
I usually structure everything as an Ansible role, which just splits this sort of playbook up into a folder structure instead (there’s a sketch of that layout after the playbook). ChatGPT did a great job of helping me figure out where to put files and generally sped up the process of creating tasks for common things like setting up a cron job, installing a package, or copying files around.
- name: Run samba
  hosts: servername
  vars:
    samba_data_directory: "/home/me/docker/samba"
  tasks:
    - name: Create samba data directory
      ansible.builtin.file:
        path: "{{ samba_data_directory }}"
        state: directory
        mode: '0755'

    - name: Create samba config from a jinja template file
      ansible.builtin.template:
        src: templates/smb.conf.j2
        dest: "{{ samba_data_directory }}/smb.conf"
        mode: '0644'
      notify: Restart samba container

    - name: Run samba container
      community.docker.docker_container:
        name: samba
        image: dperson/samba
        ports:
          - "445:445"
        volumes:
          - "{{ samba_data_directory }}:/etc/samba/"
          - "/home/me/samba_share:/samba_share"
        env:
          TZ: "America/Chicago"
          UID: '1000'
          GID: '1000'
          USER: "me;mysambapassword"
          WORKGROUP: "my-samba-workgroup"
        restart_policy: unless-stopped

  handlers:
    - name: Restart samba container
      community.docker.docker_container:
        name: samba
        restart: true
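For reference, as a role the same thing ends up split across the standard Ansible role directories, roughly like this:
  roles/
    samba/
      defaults/
        main.yml       # default vars like samba_data_directory
      tasks/
        main.yml       # the tasks from the playbook above
      handlers/
        main.yml       # the restart handler
      templates/
        smb.conf.j2    # the jinja config template
The playbook itself then shrinks to a hosts line plus a roles list.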
I moved from compose to using Ansible to deploy containers. The Ansible container config looks almost identical to a compose file, but I can also create folders and config files, set permissions, etc.
I’ve been slowly moving all my containers from compose to pure Ansible instead. It makes it easier to also manage things like creating config files, setting permissions, and cycling containers after updating files.
I still have a few things in compose though, and I use Ansible to copy updates to the target server. Secrets are encrypted with Ansible Vault.
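For those leftovers the pattern is basically: copy (or template) the compose project over, then bring the stack up. A rough sketch with placeholder paths, assuming the community.docker collection; secrets would come from a vaulted vars file and get rendered in with the template module instead of copy:
  - name: Copy compose project to the target server
    ansible.builtin.copy:
      src: files/myapp/                  # placeholder local project directory
      dest: /home/me/docker/myapp/

  - name: Bring the compose stack up
    community.docker.docker_compose_v2:
      project_src: /home/me/docker/myapp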
Standing up email might not be that hard… but it’s much harder to ensure that your mail will actually be delivered successfully. Plus it’s not a service you can typically afford to have go down. Any emails you miss during that downtime are gone forever, whereas even if my Vaultwarden credential vault goes down, I can at least access passwords from a device that has them cached while I fix things.
Plus the big providers just treat small mail servers with a lot more skepticism than they did 20 years ago.
Are you using S3 for storage or block storage? S3 is pretty cheap, but I’m wondering if CloudFront would still help with the load on the EC2 instance when federation traffic is slamming it.