So, I’m trying to roll out a K3s cluster, basically to learn new stuff (yup, I definitely don’t need it, but I’d like to have one).

I imagined it to be composed of a control plane (virtualized on a server in my house) and a worker node (on a Hetzner VPS). For the moment, the worker node should only serve a reverse proxy to route connections to my home server (which is going to host Synapse, Mastodon and perhaps a few other things), and later on maybe SSO and other thingies.

This is my problem: I guess the best way to route traffic between the worker node and the control plane would be a WireGuard tunnel, which, I suppose, I could manage directly via kubectl. You see where this is going: until I have a working tunnel I cannot manage the remote worker node, but at the same time I need a tunnel to be able to connect to said worker node.

So, I guess my idea has some flaws. How could I make it work?

  • iluminae@lemmy.world · 4 months ago

    K8s has a mild solution to chicken-and-egg situations for nodes: the kubelet supports ‘static manifests’, which can define pods it knows how to bring up before ever connecting to the API server. So you could bring up your WireGuard peer this way. The downside is that while those static pods show up in the k8s API, they aren’t fully manageable, since they are defined by files on disk.
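
    For example, a minimal sketch of a WireGuard static pod (the image, paths and values here are assumptions, not a tested setup; on k3s you’d also have to point the kubelet at a static manifest directory first, e.g. via `--kubelet-arg=pod-manifest-path=...`, since it doesn’t enable one by default):

    ```yaml
    # wireguard-peer.yaml -- dropped into the kubelet's static manifest
    # directory, so it starts before the node can reach the API server.
    apiVersion: v1
    kind: Pod
    metadata:
      name: wireguard-peer
      namespace: kube-system
    spec:
      hostNetwork: true                  # the tunnel must live in the host's network namespace
      containers:
        - name: wireguard
          image: linuxserver/wireguard   # assumed image; any wg-quick wrapper works
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]         # required to create the wg interface
          volumeMounts:
            - name: wg-config
              mountPath: /config
      volumes:
        - name: wg-config
          hostPath:
            path: /etc/wireguard         # wg0.conf with keys and the peer endpoint lives here
    ```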

  • johntash@eviltoast.org · 4 months ago

    Make sure you read up on the latency requirements for k8s too. It’s definitely doable, but etcd, for example, needs really low latency between nodes to avoid causing issues with the default settings.

    If you only have one master like you described, it’d probably be fine.
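
    (For what it’s worth, a single-server k3s setup defaults to SQLite rather than etcd, so the tight etcd timings mostly matter once you add more servers.) If WAN latency does become a problem with embedded etcd, its heartbeat and election timeouts can be relaxed. A minimal sketch using k3s’s etcd argument passthrough; the values are illustrative, not tuned recommendations:

    ```yaml
    # /etc/rancher/k3s/config.yaml on a k3s server node (hypothetical values)
    etcd-arg:
      - "heartbeat-interval=500"   # etcd default is 100 ms; raise for high-RTT links
      - "election-timeout=5000"    # etcd default is 1000 ms; keep roughly 10x the heartbeat
    ```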

    • RegalPotoo@lemmy.world · 4 months ago

      I’d considered doing something similar at some point, but couldn’t quite figure out what the likely behaviour would be if the workers lost their connection back to the control plane. I guess containers keep running, but does the kubelet restart failed containers without a controller to tell it to do so? Obviously connections to pods on other machines will fail if there is no connectivity between machines, but I’m also guessing connections between pods on the same machine will be an issue if the machine can’t reach CoreDNS?

      • Big P@feddit.uk · 4 months ago

        I’ve done it with Docker Swarm and it was awful; the connection latency would break the cluster constantly.

  • redcalcium@lemmy.institute · 4 months ago

    If you use Tailscale or ZeroTier, you can use the private IP address from those services as the node’s internal IP in the k3s/k8s configuration, and the nodes will connect to the control plane over the Tailscale/ZeroTier network.
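
    A minimal sketch of what that could look like on the Hetzner worker, assuming k3s with Tailscale (the 100.64.x.x addresses are placeholders for whatever Tailscale actually assigns):

    ```yaml
    # /etc/rancher/k3s/config.yaml on the Hetzner agent node
    server: https://100.64.0.1:6443   # control plane's Tailscale IP
    token: <node-token>
    node-ip: 100.64.0.2               # advertise the tunnel IP as the node's internal IP
    flannel-iface: tailscale0         # send pod-to-pod traffic over the tunnel
    ```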