• 1 Post
  • 23 Comments
Joined 11 months ago
Cake day: August 10th, 2023

  • Docker’s manipulation of nftables is pretty well defined in their documentation

    Documentation people don’t read. People expect that, like most other services, docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, notably including podman.

    As to the usage of the docker socket, that is widely advised against unless you really know what you’re doing.

    Too bad people don’t read that advice. They just deploy the webtop docker compose without understanding what any of it is. I like (hate?) linuxserver’s webtop, because it’s an example of two of the worst footguns in docker in one.

    To include the rest of my comment that I linked to:

    Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

    No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

    On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.

    You originally stated:

    I think from the dev’s point of view (not that it is right or wrong), this is intended behavior simply because if docker didn’t do this, they would get 1,000 issues opened per day of people saying containers don’t work when they forgot to add a firewall rule for a new container.

    And I’m trying to say that even if that were true, it would still be better than a footgun where people expose stuff that’s not supposed to be exposed.

    But that isn’t the case for podman. A quick look through the github issues for podman doesn’t show it inundated with newbies asking “how do I expose services?” because they assumed, probably, that a firewall port needed to be opened. Instead, there are bug reports in the opposite direction, like this one, where services are being exposed despite the firewall being up.

    (I don’t have anything against you, I just really hate the way docker does things.)



  • Yes, it is a security risk, but if you don’t have all ports forwarded, someone would still have to breach your internal network IIRC, so you would have many, many more problems than docker.

    I think from the dev’s point of view (not that it is right or wrong), this is intended behavior simply because if docker didn’t do this, they would get 1,000 issues opened per day of people saying containers don’t work when they forgot to add a firewall rule for a new container.

    My problem with this is that, when running a public-facing server, it ends up with people exposing containers that really, really shouldn’t be exposed.

    Excerpt from another comment of mine:

    It’s only docker where you have to deal with something like this:

    ---
    services:
      webtop:
        image: lscr.io/linuxserver/webtop:latest
        container_name: webtop
        security_opt:
          - seccomp:unconfined #optional
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
          - SUBFOLDER=/ #optional
          - TITLE=Webtop #optional
        volumes:
          - /path/to/data:/config
          - /var/run/docker.sock:/var/run/docker.sock #optional
        ports:
          - 3000:3000
          - 3001:3001
        restart: unless-stopped
    

    Originally from here, edited for brevity.

    Resulting in exposed services. Feel free to look at shodan or zoomeye, search engines for internet-connected devices, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.






  • Don’t do unattended upgrades. Neither host nor containers. Do blind or automated updates if you want but check up on them and be ready to roll back if something is wrong.

    Those issues are only common on rolling releases. On stable distros, they put tape over the breaking changes (backported fixes), test that tape, and then roll out updates.

    Debian, and many other distros, support it officially: https://wiki.debian.org/UnattendedUpgrades. It’s not just a cronjob running “apt install”, but an actual process, including automated checks. You can configure it to not upgrade specific packages, or to stick to security updates only.

    As for containers, it is trivial to roll back versions, which is why unattended upgrades are OK. Although, if data or configuration is corrupted by a bug, then you would probably have to restore from backup (probably something I should have suggested in my initial reply).

    It should be noted that unattended upgrades don’t always mean “upgrade to the latest version”. For docker/podman containers, you can pin them to a stable release tag, and then unattended upgrades only happen within that release, preventing any major breaking changes.
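
    As a rough illustration of that kind of pinning (the postgres image here is just an arbitrary example of a major-version tag, not something from the discussion above):

    services:
      db:
        # Pinned to a major-version tag: re-pulling this tag only ever brings in
        # minor/patch releases published under "16", never the next major version.
        image: docker.io/library/postgres:16
        restart: unless-stopped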

    Similarly, on many distros, you can configure them to only do the minimum security updates, while leaving other packages untouched.

    People should use what distro they know best. A rolling distro they know how to handle is much better than a non-rolling one they don’t.

    I don’t really feel like reinstalling the bootloader over ssh to a machine that doesn’t have a monitor, but you do you. There are real, significant differences between stable and rolling release distros that make a stable release more suited for a server, especially one you don’t want to baby remotely.

    I use arch. But the only reason I can afford to baby a rolling release distro is because I have two laptops (both running arch). I can feel confident that if one breaks, I can use the other. All my data is replicated to each laptop, and backed up to a remote server running syncthing, so I can even reinstall and not lose anything. But I still panicked when I saw that message suggesting that I should reinstall grub.

    That remote server? Ubuntu with unattended upgrades, by the way. Most VPS providers will give you a linux distro image with unattended security upgrades enabled because it removes a footgun from the customer. On Contabo with Rocky 9, it even seems to do automatic reboots. This ensures that their customers don’t have insecure, outdated binaries or libraries.

    Docker doesn’t “bypass” the firewall. It manages rules so the ports that you pass to host will work. Because there’s no point in mapping blocked ports. You want to add and remove firewall rules by hand every time a container starts or stops, and look up container interfaces yourself? Be my guest.

    Docker is a way for me to run services on my server. Literally every other service application respects the firewall. Sometimes I want services to be exposed on my home network, but not on a public wifi; that’s something docker isn’t capable of doing, but the firewall is. Sometimes I may want to configure a service while keeping it running. Or maybe I want to test it locally. Or maybe I want to use it only locally.

    It’s only docker where you have to deal with something like this:

    ---
    services:
      webtop:
        image: lscr.io/linuxserver/webtop:latest
        container_name: webtop
        security_opt:
          - seccomp:unconfined #optional
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
          - SUBFOLDER=/ #optional
          - TITLE=Webtop #optional
        volumes:
          - /path/to/data:/config
          - /var/run/docker.sock:/var/run/docker.sock #optional
        ports:
          - 3000:3000
          - 3001:3001
        restart: unless-stopped
    

    Originally from here, edited for brevity.

    Resulting in exposed services. Feel free to look at shodan or zoomeye, search engines for internet-connected devices, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.

    Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

    No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

    On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.
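
    One way to keep a published port off public interfaces, even while docker keeps writing its own iptables/nftables rules, is to bind it to loopback in the compose file. A minimal sketch, adapted from the webtop example above:

    services:
      webtop:
        image: lscr.io/linuxserver/webtop:latest
        ports:
          # Binding to 127.0.0.1 means only local clients (or a reverse proxy on
          # the same host) can reach these ports, even though docker still manages
          # the underlying firewall rules itself.
          - "127.0.0.1:3000:3000"
          - "127.0.0.1:3001:3001"
        restart: unless-stopped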


  • A tip I have is to move away from manjaro.

    When you use a rolling release, you lose one of the main features of stable release distros: automatic, unattended upgrades. AFAIK, every stable release distro has those, and none of the rolling releases do (except maybe openSUSE’s new Slowroll and centos rolling, but I wouldn’t recommend or use them).

    Manjaro has other issues too, but that’s the big one.

    Although I use arch on my laptop, I run debian on my server because I don’t want to have to baby it, especially since I primarily access it remotely. Automatic upgrades are one less complication, letting me focus on the server itself.

    As for application deployment itself, I recommend using application containers, either via docker or podman. There are many premade containers for those platforms, for apps like jellyfin, or the various music streaming apps people use to replace spotify (I can’t remember any off the top of my head, but I know you have lots of options).

    However, there are two caveats to docker (not podman) people should know:

    • Docker containers don’t auto-update, although you can use something like watchtower to update them automatically (see the compose sketch after this list). As for podman, it has an auto-update command you can probably configure to run regularly.
    • Docker bypasses your firewall. If you publish port 80, docker will go around the firewall and expose it. The reason for this is that most linux firewalls work by using iptables or nftables under the hood, and docker also edits those directly. This has security implications: I’ve seen many container services on the public internet that people clearly never intended to put there.
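
    Regarding the auto-update point above, here is a minimal sketch of what a watchtower service might look like in a compose file (the schedule and cleanup variables are my best recollection of watchtower’s options, so double-check its docs):

    services:
      watchtower:
        image: containrrr/watchtower:latest
        volumes:
          # watchtower talks to the docker API through the socket; it is mounted
          # locally here, not published over the network.
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_CLEANUP=true         # remove superseded images after updating
          - WATCHTOWER_SCHEDULE=0 0 4 * * * # check for updates daily at 04:00
        restart: unless-stopped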

    Podman, however, respects your firewall rules. Podman isn’t perfect though; there are some apps that won’t run in podman containers, although my use case is a little more niche (the greenbone vulnerability scanner).

    As for where to start, projects like linuxserver provide podman/docker containers, which you can use to deploy many apps fairly easily once you learn how to launch apps with a compose file. Check out this dockerized nextcloud they provide. Nextcloud is a google drive alternative, although sometimes people complain about it being slow… I don’t know about the quality of linuxserver’s nextcloud, so you’d have to do some research for that, and find a good docker container.
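
    For a sense of what that looks like, here is a rough compose sketch in the style linuxserver usually uses (the paths and the loopback port binding are placeholders of mine; check linuxserver’s nextcloud page for the current recommended parameters):

    services:
      nextcloud:
        image: lscr.io/linuxserver/nextcloud:latest
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=Etc/UTC
        volumes:
          - /path/to/appdata:/config
          - /path/to/data:/data
        ports:
          # bound to loopback so only a local reverse proxy (or ssh tunnel) reaches it
          - "127.0.0.1:443:443"
        restart: unless-stopped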




  • If I run two mysql containers, it won’t necessarily take twice the resources of a single mysql container

    It’s complicated, but essentially, no.

    Docker images are built in layers, and each layer is a step in the build process. Layers that are identical are shared between images, to the point that running the same layer in several containers only takes up the RAM of running it once.

    Although, it should be noted that docker doesn’t load the whole container into memory; like on a normal linux OS, unused stuff just sits on your disk. Rather, binaries or libraries loaded by two docker containers will only use up the RAM of one instance, similar to how shared libraries reduce RAM usage.

    Docker only has this deduplication if you are using the overlayfs or aufs storage drivers, but I think overlayfs is the default.

    https://moonpiedumplings.github.io/projects/setting-up-kasm/#turns-out-memory-deduplication-is-on-by-default-for-docker-containers
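
    As a hypothetical illustration (mariadb:11 is just a stand-in image): two services pointed at the same image share that image’s layers on disk, and the identical read-only files they load are only held in memory once via the page cache.

    services:
      db-app1:
        # Both services point at the same image, so its layers exist on disk only once.
        image: mariadb:11
        volumes:
          - /path/to/app1-db:/var/lib/mysql
      db-app2:
        # The identical read-only files (the mysqld binary, shared libraries) are
        # served from the same page cache, so they are not duplicated in RAM either.
        image: mariadb:11
        volumes:
          - /path/to/app2-db:/var/lib/mysql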

    Should you run more than one database container? Well, I dunno how mysql scales. If there is a performance benefit from having only one mysqld instance, then it’s probably worth it. If mysql uses up that much RAM regardless of what databases you have loaded, in a way that can’t be deduplicated, then you’d definitely see a benefit from a single container.

    What if your services need different database versions, or even software? Then different database containers is probably better.






  • Somewhat related, there is a site I follow called royalroad. Royalroad is a site for web serials, which are basically books uploaded to the internet chapter by chapter.

    Although royalroad used to only have google ads, at some point they started accepting user-submitted ads. (Also, ads on that site have always been unobtrusive.)

    I like these ads much better because they are more privacy respecting (literally just an image and a link).

    Also, they are really funny. Users with no art skills will make memes or doodle stick figures, and I clicked on one of them anyway, and the story was soooo good.


  • You want webtop: https://docs.linuxserver.io/images/docker-webtop

    But just like with kasm, not all software will work, although I think most will.

    About kasm:

    Not really. I don’t think the default kasm images come with sudo or a root password, so you can’t “sudo apt” or the like.

    If you do create an image with sudo, then yes, but only for a single session, if you keep it long-running. Every time you destroy the session completely, it will be reset.

    Although, if you need extra software in your images, it’s better to just build your own docker images for use with kasm that have everything you want.