  • I’d considered doing something similar at some point but couldn’t quite figure out what the likely behaviour was if the workers lost connection back to the control plane. I guess containers keep running, but does kubelet restart failed containers without a controller to tell it to do so? Obviously connections to pods on other machines will fail if there is no connectivity between machines, but I’m also guessing connections between pods on the same machine will be an issue if the machine can’t reach coredns?



  • I’ve started a similar process to yours and am moving domains as they come up for renewal, with a slightly different technical approach:

    • I’m using AWS Route 53 as my registrar. They aren’t the cheapest, but they still work out at about half the price of Gandi, and one of my key requirements was being able to use Terraform to configure DS records for DNSSEC and NS records in the parent zone (see the sketch at the end of this comment)
    • I run an authoritative nameserver on an OCI free tier VM using PowerDNS, and replicate the zones to https://ns-global.zone/ for redundancy. I’m investigating setting up another authoritative server on a different cloud provider in case OCI yank the free tier or something
    • I use https://migadu.com/ for email

    I have one .nz domain which I’ll need to find a different registrar for, because for some reason Route 53 doesn’t support .nz domains, but otherwise the move is going pretty smoothly. It’s kinda sad where Gandi has gone - I opened a support ticket to ask how they can justify being twice the price of their competitors and got a non-answer
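
    The actual tooling here is Terraform, but as a rough sketch of the same idea in Python with boto3 (the domain, nameserver names, and DNSSEC key material are placeholders, and the DS call assumes the Route 53 Domains AssociateDelegationSignerToDomain API is available for the TLD):

    ```python
    # Sketch: delegate a Route 53-registered domain to external authoritative
    # servers and publish a DS record in the parent zone. The setup above does
    # this with Terraform; this is just the equivalent idea via boto3.
    # Domain, nameserver names and key material are placeholders.
    import boto3

    # The Route 53 Domains API lives in us-east-1
    domains = boto3.client("route53domains", region_name="us-east-1")

    # Point the registry's NS records at my own authoritative servers
    # (e.g. the PowerDNS box plus a ns-global.zone secondary)
    domains.update_domain_nameservers(
        DomainName="example.org",
        Nameservers=[
            {"Name": "ns1.example.net"},
            {"Name": "ns2.example.net"},
        ],
    )

    # Publish the DS record at the registry so DNSSEC validation can chain
    # down from the parent zone (assumes this API is supported for the TLD)
    domains.associate_delegation_signer_to_domain(
        DomainName="example.org",
        SigningAttributes={
            "Algorithm": 13,   # ECDSAP256SHA256
            "Flags": 257,      # key-signing key
            "PublicKey": "<base64-encoded DNSKEY public key>",
        },
    )
    ```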




    • An HP ML350p with 2x hyper-threaded 8-core Xeons (I forget the model number) and 256GB DDR3, running Ubuntu and K3s as the primary application host
    • A pair of Raspberry Pis (one 3, one 4) as anycast DNS resolvers
    • A random mini PC I got for free from work, running VyOS as my border router
    • A Brocade ICX 6610-48p as the core switch

    The hardware is total overkill. Software-wise, everything runs in containers, deployed into Kubernetes using helmfile, Jenkins and Gitea




    • There have been some technical decisions over the last few years that I don’t think fit my needs terribly well; chief of these is the push for Snaps - they are a proprietary distribution format that adds significant overhead without any real benefit, and Canonical has been pushing more and more functionality into Snaps
    • I previously chose Ubuntu over Debian because I needed more up-to-date versions of things like Python and PHP. With Docker this isn’t really a concern any more, so the slower, more conservative approach Debian takes isn’t as big of an issue


  • From the previous issue it sounds like the developer has proper legal representation, but in his place I wouldn’t even begin talking with Haier until they formally revoke the C&D, and provide enforceable assurances that they won’t sue in the future.

    Also, I don’t know what their margins are like, but even if this cost them an extra $1000 in AWS fees on top of what their official app would have cost them (I seriously doubt it would be that much unless their infrastructure is absolute bananas), it would probably only take a single-digit number of lost sales for them to come out worse off from this.





  • Infrastructure as code/config as code.

    The configuration of all the actual machines is managed by Puppet, with all its configs in a git repo. All the actual applications are deployed on top of Kubernetes, with their configurations managed by helmfile and also tracked in git. I don’t set anything up - I describe how I want things configured, and the tools do the actual work.

    There is a “cold start” issue in my scheme - Puppet requires a server component that runs on Kubernetes, but I can’t deploy onto Kubernetes until the host machines have had their Puppet manifests applied. At that point, though, I can just read the code and do enough of the config by hand to bootstrap everything up from scratch if I have to
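
    Helmfile and Puppet do the real work in this setup, but as a loose Python illustration of the same declarative idea (the desired state lives in a git-tracked file and a tool makes the cluster match it), here is a sketch using the official kubernetes client; the manifest path is made up:

    ```python
    # Loose illustration of "describe the desired state, let the tooling do
    # the work": create the resources described in a git-tracked manifest
    # with the kubernetes Python client. In the setup above this job is
    # actually done by helmfile/Puppet; the manifest path is a made-up example.
    from kubernetes import client, config, utils

    config.load_kube_config()   # use the current kubeconfig context
    api = client.ApiClient()

    # The desired state is described in git; this just pushes it to the cluster.
    utils.create_from_yaml(api, "manifests/whoami-deployment.yaml", namespace="default")
    ```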


  • So some fun facts:

    • If you buy enough licenses from Microsoft, instead of giving you a bunch of unique license keys to keep track of, they will give you a license that you install on a server of your own, and a special “volume license key” that you use on every machine - then, instead of talking to Microsoft to activate, the machines connect to your server, which ensures that only as many machines are activated as you have licenses for
    • These volume license keys are public knowledge to the point that Microsoft publish them on their site because they are useless unless you have a server to validate the activations
    • The server protocol is not complex, has been reverse engineered, and there are open source server implementations that skip the whole “ensure you have the right number of licenses” part (see the toy sketch below)
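
    This isn’t the real KMS wire protocol (that’s a binary RPC protocol, typically on TCP 1688), but as a toy Python sketch of the counting idea described above - hand out activations only while licensed seats remain - with every name and number made up for illustration:

    ```python
    # Toy sketch of the counting idea described above: a tiny activation
    # server that only hands out activations while licensed seats remain.
    # This is NOT the real KMS protocol; names and numbers are made up.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LICENSED_SEATS = 50
    activated_machines: set[str] = set()

    class ActivationHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Clients identify themselves by machine ID in the request path,
            # e.g. POST /activate/<machine-id>
            machine_id = self.path.rsplit("/", 1)[-1]

            # Re-activations are free; new machines only get a seat if any remain
            if machine_id in activated_machines or len(activated_machines) < LICENSED_SEATS:
                activated_machines.add(machine_id)
                self.send_response(200)
                self.end_headers()
                self.wfile.write(b"activated\n")
            else:
                self.send_response(403)
                self.end_headers()
                self.wfile.write(b"no licenses left\n")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 1688), ActivationHandler).serve_forever()
    ```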



  • What do you mean by “increase security”? Security isn’t a thing where you get +5 points for every antivirus you have installed - it’s about risks, and how you mitigate them. A perfect antivirus isn’t going to protect you if you have a crappy password on something you forgot about, or if you are running software with a serious security vulnerability.