I do it that way. Enable email notifications for new tagged releases; when something arrives, check the changelog, and if everything looks fine:
docker-compose pull; docker-compose down; docker-compose up -d
And we are done.
Could you not use the DNS challenge?
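For reference, certbot can do it manually, something like this (a rough sketch; example.com is a placeholder, and plugins like certbot-dns-cloudflare can automate the TXT record instead of the --manual flow):
certbot certonly --manual --preferred-challenges dns -d 'example.com' -d '*.example.com'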
I have no experience outside of blocky, but the configuration file is so damn simple and clean I have trouble even considering anything else.
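From memory, a minimal config looks something like this (key names have changed between blocky versions, so double-check against the docs):
upstream:
  default:
    - 1.1.1.1
blocking:
  blackLists:
    ads:
      - https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
  clientGroupsBlock:
    default:
      - ads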
Daily, usually keeping only the last week or so
If you keep the same filenames for the video files it should not redownload what it already has.
For automating it, I think it's honestly easier to just run the command in a cronjob every 5 mins, see the sketch below.
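Something like this in the crontab, assuming yt-dlp and paths of your own choosing (the archive file is an extra safety net against redownloads):
*/5 * * * * yt-dlp --download-archive /srv/media/archive.txt -o '/srv/media/%(title)s.%(ext)s' 'https://example.com/playlist' >> /var/log/ytdl.log 2>&1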
Just buy bigger disks 🫢
Isn't port 81 usually where the nginx proxy manager web UI is served? I think you should just forward the requests directly to ports 80 and 443 respectively.
you have been banned from /c/DataHoarder
Does it run multiple processes inside the container? Looks like the entrypoint only launches one.
This looks nice, even has a clean docker image.
Will check it out. Setting up postfix + dovecot with DMARC and postgres was a fun experience, but how I did it is starting to slip out of my memory and I don't want to go through it again.
Try the Pi for tinkering since it will be cheaper. If you end up seeing performance issues for your usage, you could start looking at used laptops or OptiPlexes.
I had some used components lying around, so I frankensteined a server out of used parts after buying some disks.
Pretty sure you configure everything in the entrypoint; for the services running on your home machine it should be transparent.
I remember having to enable forwarding of the initial packet when I used to forward a web server:
iptables -A FORWARD -i eth0 -o wg0 -p tcp --syn --dport 80 -m conntrack --ctstate NEW -j ACCEPT
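And if memory serves, the matching rule for the return traffic, something like:
iptables -A FORWARD -i wg0 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT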
My docker containers are all configured via docker compose, so I just tar the .yml files and the outside data volumes and back that up to an external drive.
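Roughly this, with the paths being whatever your layout looks like (and stopping the stack first if you want the data to be consistent):
tar czf /mnt/external/compose-backup-$(date +%F).tar.gz ~/compose/*.yml ~/compose/volumes/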
For configs living in /etc you can also back all of them up, but I guess it's harder to remember what you modified and where, which is why you document your setup step by step.
Something nice and easy I use for personal documentation is mdBook.
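Getting started is basically just:
mdbook init my-docs
mdbook serve my-docs
(my-docs being whatever you want to call the book; serve gives you a live preview on localhost.)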
Idk what counts as offsite to you, but if it's a place you control (like at a friend's or a family member's), you could simply bring your device over one day and do the first copy there.
Otherwise, maybe rsync a folder at a time.
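Something like this per folder (host and paths are placeholders; --partial lets you resume if the connection drops):
rsync -avz --partial --progress /data/photos/ user@offsite.example:/backup/photos/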
It has a UPS built in 😇
Jokes aside, I used to run a few Python bots inside Termux on my very old S3 Mini a few years ago. It did the job, at least.
Termux has nginx, postgres, python, and plenty of other stuff compiled for ARM, so I bet you can. You would have to be wary of non-standard ports unless you have root access, and make sure Android does not kill Termux or put it to sleep by adding exceptions for the app.
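For instance (pkg is Termux's apt wrapper; exact package names may vary by repo):
pkg update
pkg install nginx postgresql python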
I remember running a few low-traffic Mastodon bots on an S3 Mini years ago and it was decent.
Run docker ps
and check what port the container claims to be mapped to on the host.
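If the default output is too noisy, something like this trims it down to just the mappings:
docker ps --format '{{.Names}} -> {{.Ports}}'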
I guess that's fair for single-service composes, but I don't really trust composes with multiple services to gracefully handle recreating only one of the containers.
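Though for what it's worth, you can ask compose to recreate a single service and leave its dependencies alone, something like:
docker compose up -d --no-deps myservice
(myservice being whatever the service is called in your .yml; I'd still test how it behaves in a multi-service stack.)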