I’m Hunter Perrin. I’m a software engineer.

I wrote an email service: https://port87.com

I write free software: https://github.com/sciactive

  • 5 Posts
  • 50 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Docker Compose is basically designed to bring up a tech stack on one machine. So rather than having an Apache machine, a MySQL machine, and a Redis machine, you set up a Docker Compose file with all of those services. It’s easier than using individual Docker commands, too: it sets up a network so the services can all talk to each other, then opens only the ports you tell it to. Each Compose project’s network is isolated from the others, so stacks won’t interfere with each other. That lets you run a bunch of services, each with its own tech stack, all on the same machine. I’ve got my Jellyfin server running on the same machine as my Mastodon instance, thanks to Docker Compose.

    As long as Docker is configured to run automatically at boot (which it usually is when you install it), it will bring back up any containers that are set to be restarted. Use the “always” or “unless-stopped” value for the restart option, depending on your needs, and Docker will bring that container back up after a reboot.

    Docker Compose is also useful in this context, because you can define dependencies for services. So I can say that the Mastodon container depends on the Postgres container, and Docker Compose will always start the Postgres container first.
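    Putting the pieces above together, a Compose file with a restart policy and a dependency might look like this. This is just an illustrative sketch, not my actual config; the image tags, paths, and ports are hypothetical:

```yaml
# Illustrative sketch; image tags, paths, and ports are hypothetical.
services:
  db:
    image: postgres:16
    restart: unless-stopped      # comes back up after a reboot
    volumes:
      - ./postgres:/var/lib/postgresql/data   # bind mount, not a named volume
  web:
    image: ghcr.io/mastodon/mastodon:latest
    restart: unless-stopped
    depends_on:
      - db                       # Compose always starts db before web
    ports:
      - "3000:3000"              # only the ports you list are exposed
```

    Everything in one project file shares one isolated network, so the web service can reach the database by its service name (db) without exposing Postgres to the outside at all.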


  • One benefit that might be overlooked here is that as long as you don’t use any Docker volumes (and instead bind mount a local directory), and you’re using Docker Compose, you can migrate a whole service, tech stack and everything, to a new machine super easily. I just did this with a Minecraft server that outgrew the machine it was on. Just tar the whole directory, copy it to the new host, untar it, and run docker compose up -d.
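    The migration boils down to a handful of commands. The paths and hostname here are hypothetical, and this assumes all of the stack’s state lives in bind mounts under the one directory (no named Docker volumes):

```shell
# Hypothetical paths and hostname; assumes bind mounts only.
cd /srv
docker compose -f minecraft/docker-compose.yml down   # stop the stack cleanly first
tar -czf minecraft.tgz minecraft                      # archive the whole directory
scp minecraft.tgz newhost:/srv/

# Then on the new host:
#   cd /srv && tar -xzf minecraft.tgz
#   cd minecraft && docker compose up -d
```

    Because the Compose file and all the data travel together in one directory, nothing on the new host needs to be set up by hand first beyond Docker itself.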


  • All of my machines back up to my home server’s RAID over WebDAV with Nephele.

    Then every few days I’ll manually sync them to a server at my parents’ house with a single huge HDD using rsync. I do this manually so that if anything happens to my home server (like ransomware) it doesn’t mirror destroyed data.

    Since the Nephele share is just WebDAV, I can mount it locally and move things into it that I don’t want local anymore.

    I created Nephele, and I just finished writing an encryption plugin. I wrote it because I’m also going to write an S3 adapter. That way, you can store things in S3, but they’ll be encrypted, so Amazon can’t see them.
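    Since the share is plain WebDAV, the mount and the manual offsite sync are both one-liners. The hostnames and paths below are hypothetical (the mount assumes davfs2 is installed):

```shell
# Hypothetical hostnames and paths; requires davfs2 for the WebDAV mount.
sudo mount -t davfs https://dav.example.com/ /mnt/nephele

# Manual offsite sync; run by hand so a compromised home server
# isn't automatically mirrored to the backup.
rsync -aHv /mnt/raid/backups/ parents-server:/mnt/bighdd/backups/
```

    Deliberately not scheduling the rsync in cron is the point: ransomware on the source can’t propagate to the offsite copy until a human has had a chance to notice.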



  • hperrin@lemmy.world to Selfhosted@lemmy.world · Best OS for a NAS (edited, 9 months ago)

    I’m personally running Ubuntu Server (though I’d recommend Debian), and I manage my RAID with mdadm. That way I’ve got a generic server, and I can install whatever I want on it.
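    Setting up a software RAID with mdadm is only a few commands. This is a generic sketch, not my setup: the device names, RAID level, and mount point are all hypothetical examples:

```shell
# Hypothetical devices; a three-disk RAID 5 array as an example.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0

# Persist the array definition so it assembles at boot, then mount it.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo mount /dev/md0 /mnt/raid
```

    The upside of mdadm over an appliance NAS OS is exactly what’s described above: the array is just a block device on a generic server, so any filesystem or service can sit on top of it.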

    Then for my share, I use my own WebDAV server, Nephele:

    https://hub.docker.com/r/sciactive/nephele

    It’s nice because it’s got a browser client that works in basically any browser. I’ve got it running behind an Nginx Proxy Manager reverse proxy, so I can host a bunch of services on subdomains of the same server.

    Samba is faster than WebDAV when you’re dealing with a lot of files, but it doesn’t work in a browser, and browser access is more important to me.