• 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • In response to your update: Try specifying the user that’s supposed to own the mapped directories in the docker compose file. Then make sure the UID and GID you use match an existing user on the new system you are testing the backup on.

    First you need to get the id of the user you want to run the container as. For a user called foo, run id foo. Note down the UID and GID.
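
    For example, for a hypothetical user called foo (the numbers below are placeholders; your output will differ):

      $ id foo
      uid=1000(foo) gid=1000(foo) groups=1000(foo),27(sudo)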

    Then in your compose file, modify the db_recipes service definition and set the UID and GID of the user that should own the mapped volumes:

      db_recipes:
        restart: always
        image: postgres:15-alpine
        user: "1000:1000" # Replace this with the corresponding UID and GID of your user
        volumes:
          - ./postgresql:/var/lib/postgresql/data
        env_file:
          - ./.env
    

    Recreate the container using docker compose up -d (don’t just restart it; the new config has to be loaded from the compose file). Then inspect the postgresql directory using ls -l to check whether it’s actually owned by the user with UID 1000 and the group with GID 1000. This should solve the issue you’re having with that backup program. It’s probably unable to copy that particular directory because it’s owned by root:root and you’re not running the backup as root (don’t do that; it would sidestep the real problem rather than help you address it).
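
    Concretely, the check might look like this (the sample output line is only an illustration; adding -n to ls prints numeric IDs, which makes comparing against the UID/GID easier):

      docker compose up -d db_recipes
      ls -lnd ./postgresql
      # drwx------ 19 1000 1000 4096 Jun 25 12:34 ./postgresql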

    Now, when it comes to copying this to another machine, as already mentioned you could use something that preserves permissions like rsync, but for learning purposes I’d do it manually, as you did before, even at the risk of messing things up again. On the new machine, repeat the process: first find the UID and GID of the current non-root user (or whatever user you want to run your containers as), then make sure that UID and GID are set in the compose files, then inspect the directories to confirm they have the correct ownership. If the compose file isn’t honoring the user setting, or the ownership doesn’t match the UID and GID you set for whatever reason, you can also use chown -R UID:GID ./postgresql to change ownership (replace UID:GID with the actual IDs), but that change might get overwritten if the user isn’t also properly specified in the compose file, so only do it for testing purposes.
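
    A minimal sketch of that manual fix, assuming the target user’s UID and GID are both 1000 (replace them with the real IDs on the new machine):

      # after copying the directory over, hand it to the user the container runs as
      sudo chown -R 1000:1000 ./postgresql
      ls -lnd ./postgresql   # verify the numeric owner and group now match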

    Edit: I also highly recommend using the CLI (terminal) instead of a GUI for this sort of thing. In my experience, GUIs don’t always surface all the information you need and can actually make things more difficult for you.


  • As others have already mentioned, you are probably correct that it’s a permission error. You could follow the already posted advice to use tools that maintain permissions, like rsync, but fixing this botched backup manually could help you learn how to deal with permissions, and that’s a rather fundamental concept that anyone self-hosting would benefit from understanding.

    If you decide to do this, I would recommend reading up on user and group permissions on Linux and on the commands that let you inspect ownership and permissions of directories and files, as well as the UID and GID of users. The next step would be to understand how Docker handles permissions for mapped directories. You can get a few pointers from this short explanation by LSIO: https://docs.linuxserver.io/general/understanding-puid-and-pgid. Bear in mind that PUID/PGID is not a Docker standard, but something specific to LSIO Docker images. See also https://docs.docker.com/compose/compose-file/05-services/#long-syntax. The user a container runs as can also be set with the --user flag when using docker run.
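
    A few commands worth getting comfortable with for that (the user name and path are only examples):

      id foo                                      # UID, GID and groups of user foo
      ls -ln ./postgresql                         # directory listing with numeric owner/group IDs
      stat -c '%U (%u) : %G (%g)' ./postgresql    # owner and group of a single path
      docker run --user "$(id -u):$(id -g)" <image>   # run a container as your current user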

    Logs can also help pinpoint the cause of the issue. The default docker compose setup in Tandoor’s docs sets up several containers, one of which acts as a database (db_recipes based on postgres:15-alpine). Inspect that in real time using docker logs -f db_recipes to see the exact errors.
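
    If the container on your system isn’t literally named db_recipes, you can also address it by service name through compose (run this from the directory containing the compose file):

      docker compose logs -f --tail 50 db_recipes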




  • That log entry is unrelated to whatever issues you’re having. That’s what the default docker-compose.yaml uses for health checks:

      healthcheck:
          test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
          interval: 30s
          timeout: 5s
          retries: 2
    

    The fact that it returns a 200 probably means that Invidious is properly up and running. Could you elaborate further on what you mean by “setup isn’t completing”? How are you trying to connect to the web UI? Sharing your docker-compose.yaml might help us debug as well.

    Edit: I just noticed that the default compose file has the port bound to localhost:

        ports:
          - "127.0.0.1:3000:3000"
    

    which means you won’t be able to access it from other machines, whether inside or outside your network. You’d have to change that to - "3000:3000" to make it reachable from other hosts.
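
    After changing the mapping and recreating the container, a quick reachability check from another machine on your network might look like this (the IP is just an example; use your server’s address):

      curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.50:3000/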


  • I’ve never heard of Nextcloud Cookbook before. Looking at its GitHub page, it says it’s “mostly for testers” and unstable, so there’s no point in even considering it for regular use at this point in time. Besides, I’m assuming you’d need your own instance of Nextcloud up and running to use it, and I don’t use Nextcloud.

    As for Grocy and other more mature alternatives (Tandoor also comes to mind), I think I initially went with Mealie because it had the most pleasant UI of them all. I liked it and found that it satisfied all of my requirements, so I just kept using it.