Mental note: have to migrate my gitea instance over to forgejo.
The encryption I was talking about is the encryption of your DNS server.
You mean encryption between the client and your DNS server, on your local network?
Just wanted to chime in and say that with a pihole you can also have encryption if you point to a local resolver like cloudflared or unbound.
My pihole forwards everything to a cloudflared service running on 127.0.0.1:5353 to encrypt all my outgoing DNS queries. It was really easy to set up: https://docs.pi-hole.net/guides/dns/cloudflared/
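Roughly, the setup boils down to running cloudflared as a local DoH forwarder and pointing the pihole at it. A minimal sketch (the upstream resolver and port are whatever you pick, 5353 just matches the setup described above; see the linked guide for the full service setup):

    # run cloudflared as a local DNS-over-HTTPS forwarder on port 5353
    cloudflared proxy-dns --port 5353 --upstream https://1.1.1.1/dns-query

    # then in the pihole admin UI: Settings -> DNS -> Custom upstream server
    #   127.0.0.1#5353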
DNS-over-HTTPS
You can also do that by running cloudflared or unbound on your pihole.
For me gravity sync was too heavy and cumbersome. It consistently failed at copying over the gravity sqlite3 db file because of my slow rpi2 and SD card, apparently a known issue.
I wrote my own script to keep the most important things for me in sync: the DHCP leases, DHCP reservations, local DNS records and CNAMEs. It’s basically just rsync-ing a couple of files. As for the blocklists: I just manually keep them the same on both piholes, but that’s not a big deal because it’s mostly static information. My major concern was the pihole taking DHCP and DNS resolution down on my network if it should fail.
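A stripped-down sketch of that idea, assuming the usual Pi-hole v5 file locations (they may differ on your install) and a standby host called pihole2:

    #!/bin/sh
    # Push the state worth keeping in sync to the standby pihole:
    #   custom.list                      -> local DNS records
    #   05-pihole-custom-cname.conf      -> local CNAMEs
    #   04-pihole-static-dhcp.conf       -> DHCP reservations
    #   dhcp.leases                      -> current DHCP leases
    FILES="/etc/pihole/custom.list
           /etc/dnsmasq.d/05-pihole-custom-cname.conf
           /etc/dnsmasq.d/04-pihole-static-dhcp.conf
           /etc/pihole/dhcp.leases"

    for f in $FILES; do
        rsync -a "$f" root@pihole2:"$f"
    done

    # reload the standby's resolver so the new records are picked up
    ssh root@pihole2 'pihole restartdns reload'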
Now with keepalived and my sync script that I run hourly, I can just reboot or temporarily shut down pihole1 and then pihole2 automatically takes over DNS duties until pihole1 is back. DHCP failover still has to be done manually, but it’s just a matter of ticking the box to enable the DHCP server on pihole2, and all the leases and reservations will be carried over.
That’s what I do. I have a small VM linked to it in a keepalived cluster with a synchronized configuration, which can take over in case the rpi croaks or reboots, so that my network doesn’t completely die when the rpi is temporarily offline. A lot of services depend on proper DNS resolution being available.
You can use log2ram to mitigate that.
Alternatively, you can even boot a root filesystem residing on an NFS share, but in the case of an rpi hosting the network’s DNS and DHCP services, you could end up with a chicken-and-egg problem.
I think it’s a good tool to have on your toolbelt, so it can’t hurt to look into it.
Whether you will like it or not, and whether you should move your existing stuff to it is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late 90s myself).
Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.
Huh? Your docker container shouldn’t be calling pip for updates at runtime; you should consider a container immutable and ephemeral. Stop thinking about it as a mini VM. Build your container (presumably pip-ing in all the libraries you require) on the machine with full network access, then export or publish the container image and run it on the machine with limited access. If you want updates, you regularly rebuild the container image and repeat.
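As a sketch of that workflow (image name, tag and app layout are made up here):

    # Dockerfile -- built on the machine with full network access
    FROM python:3.12-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    CMD ["python", "main.py"]

    # build and export on the connected machine
    docker build -t myapp:2024-06 .
    docker save myapp:2024-06 | gzip > myapp-2024-06.tar.gz

    # import and run on the machine with limited network access
    gunzip -c myapp-2024-06.tar.gz | docker load
    docker run -d --name myapp myapp:2024-06

To update, you rebuild on the connected machine, bump the tag and ship the new tarball; nothing inside the running container ever has to reach out to PyPI.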
Alternatively, even at build time it’s fairly easy to use a proxy with docker, unless you have some weird proxy configuration. I use it here so that updates get pulled from a local caching proxy, reducing my internet traffic and making rebuilds quicker.
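If the proxy is only needed while building, passing the standard proxy variables as build args is usually enough (the proxy address here is an example):

    docker build \
      --build-arg HTTP_PROXY=http://proxy.lan:3128 \
      --build-arg HTTPS_PROXY=http://proxy.lan:3128 \
      -t myimage .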
postgres
I never use it for databases. I find I don’t gain much from containerizing them, because the interesting and difficult bits of customizing and tailoring a database to your needs are on the data file system or in kernel parameters, not in the database binaries themselves. On most distributions it’s trivial to install the binaries for postgres/mariadb or whatnot.
Databases are usually fairly resource intensive too, so you’d want a separate VM for it anyway.
what would I gain from docker or other containers?
Reproducibility.
Once you’ve built the Dockerfile or compose file for your container, it’s trivial to spin it up on another machine later. It’s no longer bound to the specific VM and OS configuration you’ve built your service on top of, and you can easily migrate containers or move them around.
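For a typical service the compose file plus whatever lives in the bind-mounted volumes is the whole machine-specific footprint, so migrating is mostly a matter of copying both. A minimal example (image, port and paths are placeholders):

    # docker-compose.yml
    services:
      myservice:
        image: myservice:1.4
        restart: unless-stopped
        ports:
          - "8080:8080"
        volumes:
          - ./data:/var/lib/myservice

On the new host: copy the directory over and run docker compose up -d.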
VM with a docker build environment.
As for “littering”, a simple docker system prune -f after a build gets rid of most of it.
Yes, that’s essentially what I did.
You’re welcome, cunt
Using double NAT here because my ISP won’t support/allow putting their box in bridge mode, and I don’t even have root access to it, just some limited functionality via their web GUI.
I haven’t had any issues with it.
FWIW I use an Elastic stack for that: filebeat and journalbeat to collect logs, Logstash to sort and parse them, Elasticsearch to store them. Not sure if it satisfies your FOSS requirement, as I don’t believe it’s entirely open source.
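The collection end of that pipeline is only a few lines of filebeat config; something in this direction (hostnames and paths are examples):

    # /etc/filebeat/filebeat.yml
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/*.log
    output.logstash:
      hosts: ["logstash.lan:5044"]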
No, that just creates timeouts and delays when either of them is offline.
The proper way is to have a standby pihole that takes over the IP address of the main pihole when it goes down. It’s quite easy to achieve this with keepalived.
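A minimal keepalived sketch of that (interface, virtual IP, router id and password are examples; the standby gets the same file with state BACKUP and a lower priority):

    # /etc/keepalived/keepalived.conf on the primary pihole
    vrrp_instance PIHOLE_DNS {
        state MASTER
        interface eth0
        virtual_router_id 53
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.168.1.2/24
        }
    }

Clients get the virtual IP as their DNS server, and whichever node currently holds MASTER answers the queries.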