• 3 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 24th, 2023






  • Good suggestions at the bottom.

    There are several indicators that could be used to discover the attack from day 1:

    All issued SSL/TLS certificates are subject to Certificate Transparency. It is worth configuring Certificate Transparency monitoring, such as Cert Spotter (source on GitHub), which will notify you by email of new certificates issued for your domain names (a sample log query is sketched after this list).

    Limit validation methods and pin the exact account identifier that may issue new certificates using Certification Authority Authorization (CAA) Record Extensions for Account URI and Automatic Certificate Management Environment (ACME) Method Binding (RFC 8657). This prevents certificate issuance for your domain through other certificate authorities, ACME accounts, or validation methods (sample record below).
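
    For illustration, here is roughly what both might look like. The domain, account URI, and CA are placeholder values; the crt.sh JSON endpoint is a common way to query Certificate Transparency logs by hand, as an alternative to a hosted monitor like Cert Spotter:

    # List certificate names logged for a domain (hypothetical example.com)
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u

    ; Hypothetical CAA record pinning issuance to one CA, one ACME account,
    ; and one validation method, per RFC 8657
    example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/12345; validationmethods=dns-01"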





  • I’ll try to answer the specific question here about importing data and sandboxing. You wouldn’t have to sandbox, but it’s a good idea. If we think of a Docker container as an “encapsulated version of the host”, then let’s say you have:

    • Service A running on your cloud
      • Requires apt-get install -y this that and the other to run
      • Uses data in /data/my-stuff
    • Service B running on your cloud
      • Requires apt-get install -y other stuff to run
      • Uses data in /data/my-other-stuff

    In the cloud, Service A’s data can be accessed by Service B, increasing the attack surface for a leak. In Docker, you could move all your data from the cloud to your server:

    # On cloud
    cd /
    tar cvfz /tmp/data.tgz data
    # Copy /tmp/data.tgz to the local server, e.g. with scp
    # On local server
    mkdir -p /local/server
    cd /local/server
    tar xvfz /tmp/data.tgz
    # Now you have /local/server/data as a copy
    

    Your Dockerfile for Service A would be something like:

    FROM ubuntu
    RUN apt-get update && apt-get install -y this that and the other
    RUN whatever to install Service A
    CMD whatever to run
    

    Your Dockerfile for Service B would be something like:

    FROM ubuntu
    RUN apt-get update && apt-get install -y other stuff
    RUN whatever to install Service B
    CMD whatever to run
    

    This makes two unique “systems”. Now, in your docker-compose.yml, you could have:

    version: '3.8'
    
    services:
      
      service-a:
        image: service-a
        volumes:
          - /local/server/data:/data
    
      service-b:
        image: service-b
        volumes:
          - /local/server/data:/data
    

    This would make everything look just like the cloud since /local/server/data would be bind mounted to /data in both containers (services). The proper way would be to isolate:

    version: '3.8'
    
    services:
      
      service-a:
        image: service-a
        volumes:
          - /local/server/data/my-stuff:/data/my-stuff
    
      service-b:
        image: service-b
        volumes:
          - /local/server/data/my-other-stuff:/data/my-other-stuff
    

    This way each service only has access to the data it needs.
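
    If a service only reads its data, you can tighten the sandbox one step further with Docker’s standard read-only flag on the bind mount (whether that fits depends on your services, so treat this as a sketch):

    volumes:
      - /local/server/data/my-stuff:/data/my-stuff:ro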

    I hand-typed this, so forgive any errors, but I hope it helps.











  • I prefer query builders like slonik, or just raw SQL. Prisma does crazy stuff with joins, turning what should be a single query into 300 queries. It’s a well-documented problem in their issue tracker. I’ve not worked on a single repo that didn’t eventually move away from it as it grew, including in a professional capacity. On top of that, once you adopt an ORM everyone ends up targeting the same database anyway, so you lose out on database-specific optimizations.
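
    For contrast, a minimal sketch of the query-builder style with slonik (connection string and table names are made up, and depending on your slonik version you may want a typed sql.type tag instead of sql.unsafe):

    // slonik sketch -- all identifiers here are hypothetical
    import { createPool, sql } from 'slonik';

    const main = async () => {
      const pool = await createPool('postgres://localhost/mydb');

      // One explicit join, one round trip: you see exactly the SQL that runs,
      // rather than an ORM fanning it out into hundreds of separate queries.
      const result = await pool.query(sql.unsafe`
        SELECT u.id, u.name, count(o.id) AS order_count
        FROM users u
        LEFT JOIN orders o ON o.user_id = u.id
        GROUP BY u.id, u.name
      `);

      console.log(result.rows);

      await pool.end();
    };

    void main();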