I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home, even though I have a decent understanding of how it works: after I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first, and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • SpaceCadet@feddit.nl · 11 months ago

    Huh? Your docker container shouldn’t be calling pip for updates at runtime; you should consider a container immutable and ephemeral. Stop thinking about it as a mini VM. Build your container (presumably pip-ing in all the libraries you require) on the machine with full network access, then export or publish the container image and run it on the machine with limited access. If you want updates, you regularly rebuild the container image and repeat.
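
    For example, a rough sketch of that build-then-ship flow (the image name, requirements.txt and app.py are placeholders for whatever your project actually uses):

        # Dockerfile: all pip installs happen at build time, never at runtime
        FROM python:3.12-slim
        WORKDIR /app
        COPY requirements.txt .
        RUN pip install --no-cache-dir -r requirements.txt
        COPY . .
        CMD ["python", "app.py"]

        # On the machine with full network access:
        docker build -t myapp:latest .
        docker save myapp:latest -o myapp.tar

        # Copy myapp.tar to the restricted machine, then:
        docker load -i myapp.tar
        docker run -d myapp:latest

    When you want updates, you rebuild on the connected machine and ship a fresh tarball (or push/pull through a private registry instead of docker save/load).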

    Alternatively, even at build time it’s fairly easy to use a proxy with docker, unless you have some weird proxy configuration. I use one here so that updates get pulled from a local caching proxy, reducing my internet traffic and making rebuilds quicker.
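
    Something like this, assuming the caching proxy lives at proxy.lan:3128 (the address is just an example):

        # HTTP_PROXY/HTTPS_PROXY are predefined build args, so no Dockerfile changes are needed:
        docker build \
          --build-arg HTTP_PROXY=http://proxy.lan:3128 \
          --build-arg HTTPS_PROXY=http://proxy.lan:3128 \
          -t myapp:latest .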

    • DontNoodles@discuss.tchncs.de · 11 months ago

      I think I wasn’t aware of the exporting/publishing part, and that’s the cause of my woes. I get everything running on the machine with unrestricted access, move to the machine with restricted access, run “docker compose up”, and get stuck. I’ll read up on exporting/publishing, thank you.
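
      In case it helps anyone else who lands here, the compose version of that workflow looks roughly like this (the image name is a placeholder):

          # On the machine with unrestricted access: build and export
          docker compose build
          docker save myapp:latest -o myapp.tar

          # On the restricted machine: load the image, then start without rebuilding
          docker load -i myapp.tar
          docker compose up -d --no-build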