I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • lemmyvore@feddit.nl

    Hi, also used to be a sysadmin and I like things that are simple and work. I like Docker.

    Besides what you already noticed (that most software can be found packaged for Docker) here are some other advantages:

    • It’s much lighter on resources and more efficient than virtual machines.
    • It provides a way to automate installs (docker compose) that’s (much) easier to get started with than things like Ansible (there’s a small sketch after this list).
    • It provides a clear separation between configuration, runtime, and persistent data and forces you to get organized.
    • You can group related services.
    • You can control interdependencies, privileges, shared access to resources etc.
    • You can define simple or complex virtual networking topologies between containers as you like.
    • It adds extra security (for whatever that’s worth to you).
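
    For example, a minimal compose file might look something like this (the service names, images, ports, and paths are just placeholders to show the idea, not a recommendation):

    ```yaml
    # docker-compose.yml -- hypothetical example stack
    services:
      web:
        image: nginx:stable                # any image from a registry
        restart: unless-stopped
        ports:
          - "8080:80"                      # host:container
        volumes:
          - ./config/nginx:/etc/nginx/conf.d:ro      # configuration lives on the host
          - /srv/appdata/web:/usr/share/nginx/html   # persistent data lives on the host
        depends_on:
          - db                             # interdependency: wait for the db container
        networks:
          - frontend
          - backend

      db:
        image: postgres:16
        restart: unless-stopped
        environment:
          POSTGRES_PASSWORD: changeme      # example only; use an env file in practice
        volumes:
          - /srv/appdata/postgres:/var/lib/postgresql/data
        networks:
          - backend                        # only reachable by services on this network

    networks:
      frontend:
      backend:
    ```

    One `docker compose up -d` in the directory holding that file brings the whole group up, on its own virtual networks, with all configuration and persistent data mapped to host paths.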

    A brief description of my own setup for ideas; feel free to ask questions:

    • Router running OpenWRT + server in a regular PC.
    • Server is 32 MB of RAM (bit overkill for now, black Friday upgrade, ran with 4 GB for years), Intel CPU with embedded GPU, OS on M.2 SSD, 8 HDD bays in Linux software RAID (MD).
    • OS is Debian stable barebones, only Docker, SSH and NFS are installed on the host directly. Tip: use whatever Linux distro you know and like best.
    • Docker is installed from their own repository, not from Debian’s.
    • Everything else runs from docker containers, including things like CUPS or Samba.
    • I define all containers with compose, and map all persistent data to host storage. This way if I lose a container or even the whole OS I just re-provision from compose definitions and pick up right where I left off. In fact destroying and recreating containers cleanly is common practice with docker.
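
    To make the “destroy and recreate” part concrete, the routine looks roughly like this (the path is hypothetical):

    ```sh
    cd /srv/compose/mystack   # directory holding the docker-compose.yml
    docker compose pull       # grab newer images, if any
    docker compose down       # stop and remove the containers
    docker compose up -d      # recreate them from the definitions
    docker compose logs -f    # tail the logs to check everything came back up
    ```

    Because the persistent data is bind-mounted from host directories, `down`/`up` doesn’t touch it; the containers themselves are disposable.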

    Learning Docker and Compose is not very hard, especially given your time on the job.

    If you have specific requirements, e.g. storage, exposing services over the internet, etc., please ask.

    Note: don’t start with Podman or rootless Docker; start with regular Docker. It will be 10x easier. You can transition to the others later if you want.
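
    Once regular Docker is installed (from Docker’s own repo, as above), a quick sanity check looks something like this:

    ```sh
    sudo docker run --rm hello-world   # confirms the daemon runs and image pulls work
    sudo usermod -aG docker $USER      # optional: run docker without sudo
                                       # (re-log for it to take effect; note this is
                                       # effectively root-equivalent access)
    sudo docker compose version        # confirms the compose plugin is present
    ```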

    • RubberElectrons@lemmy.world

      I’m basically the same here, used to be a sysadmin too. As a first try for me, Docker Compose is running a couple of complicated inter-dependent services at my job, and it’s been quite stable and clear about what’s happening within the containers.

      I really like how the Docker setup files also become a source of truth, documentation-wise, particularly when paired with git.
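
      For instance (the directory layout here is just an example), the whole “stack as documentation” idea can be a tiny repo:

      ```sh
      cd /srv/compose                          # wherever the compose files live
      git init
      printf '.env\nappdata/\n' > .gitignore   # keep secrets and persistent data out of git
      git add docker-compose.yml .gitignore
      git commit -m "initial compose definitions"
      ```

      A `git log` of that repo then doubles as a change history for the services themselves.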

      P.S. I know it’s a typo, but imagine a ‘black Friday upgrade’ for your server being a move from 4 GB of RAM to 32 MB. Return to monke 1998.

    • jodanlime@midwest.social

      As someone who just started their container adventure by setting up rootless Podman on Arch: it wasn’t terrible, but I think I agree. I’m going to go check out some vanilla-ass Docker until I can understand everything better.