• 0 Posts
  • 44 Comments
Joined 1 year ago
cake
Cake day: June 30th, 2023




  • I use proxmox to run debian VMs to run docker compose “stacks”.
Some VMs are dedicated to a single service's docker compose stack.
    Some VMs are for a docker compose of a bunch of different services.
    Some services are run across multiple nodes with HA VIPs and all that jazz for “guaranteed” uptime.
    I see the guest VM as a collection, but there is only ever 1 compose file per host.
This has a bit of overhead, but it makes things really easy to reason about, with separate VLANs, firewall rules, etc.



  • Generally, UPS (lead acid) batteries are not designed for long-cycle deep discharge.
They are designed to hold their rated load for a minute or so until the power is restored (generators start, the outage ends) or the servers have a chance to shut down.
But maybe that's dated information, and modern UPSs are designed to run from batteries for a few hours.





  • Oh, just saw this:

    Could I instead have told Sonarr qBit is at 172.18…:port(dockers network address)

TL;DR:
No, the host has no idea what happens inside a docker network.
The exception is if the containers are on the same host and joined to the SAME docker network (docker compose does this automatically).


It seems like your home network is on 192.168.something. You've omitted the details of which subnet it is within the entire 192.168.0.0/16 block that is dedicated to local network addresses (RFC 1918), but that doesn't matter. And docker uses a different dedicated block: 172.16.0.0/12.
    Regardless!

Your host has an IP of 192.168.1.4. A client on 192.168.1.5 knows exactly how to communicate with 192.168.1.4 (provided they are in the same subnet… which is likely on a standard home DHCP-served network. I'm glossing over this).
Google's DNS server is 8.8.8.8, which is outside of your home network's subnet (192.168.1.0/24 in CIDR notation). So client 192.168.1.5 has no idea how to contact 8.8.8.8, and it sends the connection to its default gateway (likely 192.168.1.1) as it is an unknown route. Your router then sends it to the internet appropriately (doing NAT as described elsewhere).

What I'm saying is that clients within the 192.168.1.0/24 network know how to talk to each other. If they don't know how to talk to an IP, they send it to the gateway.

Now, docker uses its own internal network: 172.16.0.0/12. To a client on 192.168.1.5/24, an IP inside 172.16.0.0/12 is as strange as 8.8.8.8/32. It has no idea where to send it, so it goes to the default gateway. Which isn't helpful if that network is actually INSIDE the host at 192.168.1.4/24.
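That routing decision can be sketched with Python's `ipaddress` module (addresses are the illustrative ones from this thread, not anyone's real network):

```python
import ipaddress

HOME_LAN = ipaddress.ip_network("192.168.1.0/24")
DOCKER_BLOCK = ipaddress.ip_network("172.16.0.0/12")

def next_hop(dst: str) -> str:
    """A client on the home LAN talks directly to on-subnet peers
    and hands everything else to its default gateway."""
    addr = ipaddress.ip_address(dst)
    return "direct (same subnet)" if addr in HOME_LAN else "default gateway"

print(next_hop("192.168.1.4"))  # the docker host: reachable directly
print(next_hop("8.8.8.8"))      # Google DNS: off-subnet, goes to the gateway
print(next_hop("172.18.0.5"))   # inside docker's block: also off-subnet, so the
                                # gateway gets it, and it goes nowhere useful
print(ipaddress.ip_address("172.18.0.5") in DOCKER_BLOCK)  # True
```

The point: 172.18.0.5 is just as unroutable to the LAN client as 8.8.8.8, even though it physically lives inside the host next door.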

What am I getting at? Docker runs its own NAT.
It takes the host's IP address. When you expose a container's port, you are telling docker to bind a host port and forward it to the specific port of the specific container.
So outside of the host, the network has no idea what 172.16.0.0/12 means, but it does know what 192.168.1.4/24 means.
Inside the docker network, a container has no idea what 192.168.0.0/16 means, but does know what 172.16.0.0/12 means. Equally, a docker container will send packets to its default gateway inside that 172.16.0.0/12… which will then respond appropriately to the 192.168.1.0/24 client.
Which means a docker container's host firewall is going to have no idea what's happening inside a docker network. All it knows is that docker wants to receive information on port 443, and that the local network is 192.168.1.0/24. … Ish, there are other configurations.
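Docker's published-port NAT can be pictured as a simple translation table. This is a toy sketch only (real docker programs this via iptables/nftables rules); the addresses and ports are made-up examples:

```python
# Toy model of docker port publishing (the "-p 443:8443" style mapping).
# All values here are hypothetical examples, not a real docker config.
port_map = {
    ("192.168.1.4", 443): ("172.18.0.5", 8443),  # host:443 -> container:8443
}

def translate(dst_ip: str, dst_port: int):
    """What the host does with an inbound packet: rewrite the destination
    to the container if that port is published, otherwise return None
    and let the normal host firewall rules decide."""
    return port_map.get((dst_ip, dst_port))

print(translate("192.168.1.4", 443))   # forwarded into the docker network
print(translate("192.168.1.4", 8080))  # None: no published port
```

From the LAN's point of view, only 192.168.1.4:443 exists; the 172.18.0.5 side of the mapping is invisible.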


  • Basically, what they are getting at is:
    Have you allowed internet access TO arr?

A default-config ISP router will take the public IP address and drop all incoming connections. It will then NAT internal IP addresses to the public IP address.
So when you go to Google, Google responds to the established connection coming from the router's public IP address. Your router then knows to forward that response to the local client that started the connection.
    If Google just randomly decided to connect to your public IP address, your router is configured to drop that traffic.

    If you set up port forwarding on your router, you are telling it “if you get a new connection on port 443, forward it to this local client”. This is exposing that client to the internet and allowing strangers to connect to it. If Google then tried to connect to your public ip:443, it would get the response from that local client.
If you set up a “DMZ” client, the router will forward ALL unknown incoming connections to that client. There is no need to do this. The only exception is for research or as a honeypot/tarpit.

All other traffic will be on the local network, and won't even touch the router's firewall. A connection from 192.168.0.12 to 192.168.0.200 will go through layer 2 (ie, switches) instead of layer 3 (ie, routing) of the OSI network layers.

So, if you trust your internal home network and you have not exposed anything to the internet (port forwarding on the router, or a DMZ client) then you don't really need internal firewalls: the chance of a malicious device even being able to connect to an arr service is vanishingly small - like, your arr service will be the least of your concerns.
When you expose arr to the internet (I wouldn't do it directly; use a VPN or similar as a secure hole through your home firewall) THEN you need to address internal firewalls.

    If you feel you do need them, then go about it for learning purposes and take your time. Do things, break things, learn things, fix things.
    In an ideal scenario, security would be in many layers, connections would all be TLS with client certificate trust, etc etc.
    But for a server on your home network serving only local clients… Why bother worrying about it until you want to learn it properly!


5? Holy heck, that's amazing. I remember helping people that had built streaming rigs to use during the pandemic, and wondering why their production was stuttering and having issues with a bunch of remote callers. Some of that work ended up being CPU bound.
Although, it looks like that patch is for Linux? Not much use if you're running vMix or some other Windows-only software.
In OP's case, however, that's not a problem.


  • If you are doing high bandwidth GPU work, then PCIe lanes of consumer CPUs are going to be the bottleneck, as they generally only support 16 lanes.
Then there are the Threadrippers, Xeons and all the server/professional-class CPUs that will do 40+ lanes of PCIe.

A lane of PCIe 3.0 is about 1 GBps (gigabyte, not gigabit).
    So, if you know your workload and bandwidth requirements, then you can work from that.
If you don't need the full 16 lanes per GPU, then a motherboard that supports bifurcation will allow you to run 4 GPUs with 4 lanes each from a CPU that has 16 lanes of PCIe. That's 4 GBps per GPU, or 32 Gbps.
If it's just for transcoding, and you are running into the limitations of consumer GPUs (which I think are limited to 3 simultaneous streams), you could get a pro/server GPU like the Nvidia Quadros, which have a certain amount of resources but are unlimited in the number of streams they can process (so, one might be able to do 300 FPS of 1080p. If your content is 1080p 30fps, that's 10 streams). From that, you can work out bandwidth requirements, and see if you need more than 4 lanes per GPU.
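The arithmetic above, as a quick sanity check (the ~1 GBps per lane and the 300 FPS budget are the round numbers used in this comment, not measured figures):

```python
GB_PER_LANE_PCIE3 = 1.0  # ~1 GB/s per PCIe 3.0 lane (rough round number)

def gpu_bandwidth_gbps(lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a GPU on `lanes` PCIe 3.0 lanes."""
    return lanes * GB_PER_LANE_PCIE3

# 16 CPU lanes bifurcated 4x4x4x4 -> four GPUs at 4 GB/s (32 Gbps) each
print(gpu_bandwidth_gbps(4))  # 4.0

def max_streams(gpu_fps_budget: int, content_fps: int) -> int:
    """How many simultaneous transcodes fit in a GPU's total FPS budget."""
    return gpu_fps_budget // content_fps

# A hypothetical 300 FPS 1080p budget serving 1080p30 content
print(max_streams(300, 30))  # 10
```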

    I’m not sure what’s required for AI. I feel like it is similar to crypto mining, massive compute but relatively small amounts of data.

    Ultimately, if you think your workload can consume more than 4 lanes per GPU, then you have to think about where that data is coming from. If it’s coming from disk, then you are going to need raid0 NVMe storage which will take up additional PCIe lanes.



  • It opens users to timing attacks.
If there are 10,000 notifications per second, and across 100 incidents user A does something that causes a notification and user B receives a notification within a network-latency time period of it, it is likely user A is talking to user B.
Whilst that seems like arbitrarily useless data, at the giga/peta scale the US government is processing it, you can quickly build a map of users “talking” to users.
Now, this requires the help of other parties. You need to know that user A is using WhatsApp at the time. And yeah, you don't know what the message is, but you know that they are hitting WhatsApp's servers. And you know that within 5 minutes of user B receiving a notification, they are also then contacting WhatsApp's servers.
So now you know that user A is likely talking to user B via WhatsApp.
And also users G, I, X and M are involved in this conversation.
And you bust user G on some random charge. And suddenly warrants are issued for more detailed examination of users A, B, I, X and M.
    Maybe they have nothing to hide and are just old college friends. Or maybe they are a drug ring, or whatever.
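The correlation attack described above boils down to: collect timestamped "sent" and "received" events, and count how often a candidate pair lines up within the latency window. A purely illustrative sketch with made-up event data (real traffic analysis is far more sophisticated):

```python
# Illustrative timing-correlation sketch; all event data below is invented.
LATENCY_WINDOW = 0.5  # seconds: max plausible send -> notification delay

# (timestamp, user) observations: users contacting the service, and
# users receiving push notifications
sends = [(100.00, "A"), (205.30, "A"), (310.70, "A")]
receives = [(100.20, "B"), (205.45, "B"), (310.90, "B"), (400.00, "B")]

def correlated_hits(sender: str, receiver: str) -> int:
    """Count the sender's events that are followed by one of the
    receiver's notifications within the latency window."""
    return sum(
        1
        for ts, su in sends if su == sender
        for tr, ru in receives
        if ru == receiver and 0 <= tr - ts <= LATENCY_WINDOW
    )

# 3 of A's sends line up with B's notifications: across enough incidents,
# that makes "A talks to B" a strong inference from metadata alone.
print(correlated_hits("A", "B"))  # 3
```

One coincidence means nothing; the same pair lining up across 100 incidents is what makes the inference statistically damning.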

It's all the “I have nothing to hide”, phones being tied to a person, privacy and all that.
We can't really comprehend the data warehouse/lake/ocean level of scale required to realise how all the little pieces of metadata and tracking information add up to “User A is actually this person right here right now, and they bought a latte at Starbucks and got 5 loyalty points” level of tracking.

    Is it likely this bad?
    Probably.
There's the “Target knew I was pregnant before I told anyone” story:
    https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/

That's over a decade ago. It hasn't let up. And you can bet that governments are operating at a level a few years beyond private industry.

So yeah, every bit of metadata counts.