• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 12th, 2023





  • Declarative configuration of services and the rest of the entire system, and everything that comes with that.

    • Want to test some new service, or make changes to an existing one, but don’t know if you want to keep it? Sure, just temporarily switch to the new configuration; you can always switch back to the old one and everything will be back as it was.
    • Have multiple servers and want to share configuration between them? Absolutely, just import the same file from both. I have a git repo storing configurations for 10 machines and a huge part of it is shared configuration.
    • Want to use one service’s endpoint (such as a socket path) in another? Sure, just use the socket path configuration option for the first service in the configuration for the second, such as here. This works since everything is a single tree of options which all the service configuration files are then generated from, so interpolate stuff as you wish.
    • Checks for configuration correctness during the system build (NixOS options are type-checked during evaluation, and then the actual system build runs more checks; e.g. the nginx config has to pass nginx -t, otherwise the build fails and you can’t switch to it).
    • Want to spin up a VM to test changes before putting it on the actual target? There’s a builtin command (nixos-rebuild build-vm) that makes a script that starts a QEMU VM with your configuration running in it. It’s as fast as building the real system, so a couple seconds if you’re making small changes.
    • Setting up services is also often as easy as putting services.foo.enable = true; in your configuration (see the sketch below). And if you remove that line, the service is gone, so you’re never left with “the random package or file you installed once to test something and then forgot about”. That’s the biggest thing it has over any kind of imperative solution IMO.
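
    To give a concrete idea, here’s a stripped-down sketch of what one machine’s configuration can look like (not my actual config; the hostname, service and vhost are just examples):

      # configuration.nix for one machine
      { config, pkgs, ... }:

      {
        imports = [
          ./hardware-configuration.nix
          ./common.nix  # configuration shared between all my machines
        ];

        networking.hostName = "web1";

        # one line to get a whole service; delete the line and it is gone again
        services.nginx.enable = true;

        # everything is one tree of options, so any option can be referenced
        # from anywhere else (here just the hostname, for illustration)
        services.nginx.virtualHosts."example.org".locations."/".extraConfig = ''
          add_header X-Served-By ${config.networking.hostName};
        '';
      }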

    I feel like even if I want to distro hop again and end up putting something else on my desktop, NixOS is going to stay on my servers indefinitely. It’s pretty much a perfect fit for servers.





  • Since you mention nginx, I assume you’re talking about proxying HTTP and not SMTP/IMAP… For that, there’s the X-Forwarded-For header, which exists exactly for this: retaining the real source IP through a reverse proxy.

    You should be able to add proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; to your location block.
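
    For what it’s worth, on NixOS (which is what I use) that would look roughly like this; the vhost name and upstream address are placeholders, point them at however mailu’s web frontend is actually exposed:

      services.nginx.virtualHosts."mail.example.org".locations."/" = {
        # upstream address is made up; use wherever the web frontend listens
        proxyPass = "http://127.0.0.1:8080";
        extraConfig = ''
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        '';
        # (I think services.nginx.recommendedProxySettings = true sets this
        # and a few related headers for you as well)
      };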

    Alternatively, it looks like there’s also a Forwarded header (RFC 7239, from 2014) which I’ve never seen before but seems cool: https://www.nginx.com/resources/wiki/start/topics/examples/forwarded/

    I guess it comes down to what mailu supports; I’ve never used it.

    If you are talking about SMTP and IMAP, I don’t think there’s a standard way to do this. You’d have to set up port forwarding on the VPS for the SMTP ports and IMAP port, and set up your home server to accept connections from any IP over the wireguard interface.

    That’s exceedingly horrible though, and there’s a better option for SMTP at least: set up an MTA (e.g. Postfix) on the VPS and have it forward mail to the real destination server. Outgoing mail never has to touch your home server at all (except for your client copying it into the Sent folder over IMAP); just send it out via the VPS directly. Or, if you’re using some built-in web client, I guess do set up the MTA on your home server to send outgoing mail through the VPS’s MTA.
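
    Very rough sketch of the VPS side of that, in NixOS terms since that’s what I’d reach for (the domain and wireguard IP are placeholders, and double-check the option names against the postfix module, I’m writing them from memory):

      services.postfix = {
        enable = true;
        # accept mail for the domain, but relay it onward instead of
        # delivering it locally
        relayDomains = [ "example.org" ];
        # route the domain to the home server over the wireguard tunnel
        transport = ''
          example.org smtp:[10.100.0.2]:25
        '';
      };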


  • 2xsaiko@discuss.tchncs.de to Selfhosted@lemmy.world · Should I move to Docker? · 7 months ago

    No. (Of course, if you want to use it, use it.) I used it for everything on my server starting out because that’s what everyone was pushing. Did the whole thing, used images from docker hub, used/modified dockerfiles, wrote my own, first used Portainer and then docker-compose to tie everything together. That was until around 3 years ago, when I ditched it and installed everything normally, I think after a series of weird internal network problems. Honestly, the only positive thing I can say about it is that it means you don’t have to manually allocate ports for those services that can’t listen on unix sockets, which always feels a bit yucky.

    1. A lot of images come from some random guy you have to trust to keep their images updated with security patches. Guess what, a lot don’t.
    2. Want to change a dockerfile and rebuild it? If it’s old and uses something like “ubuntu:latest” as a base and downloads similar “latest” binaries from somewhere, good luck getting it to build or work because “ubuntu:latest” certainly isn’t the same as it was 3 years ago.
    3. Very Linux- and x86_64-centric. Linux is of course not really a problem (unless you’re on a Mac/Windows developer machine, where docker runs a Linux VM in the background even if the actual software you’re working on is cross-platform. Lmao.), but I’ve had people complain that Oracle Free Tier aarch64 VMs, which are actually pretty great for a free VPS, won’t run a lot of their docker containers because people only publish x86_64 builds (or worse, write dockerfiles that only work on x86_64 because they download binaries).
    4. If you’re using it for the isolation, most if not all of its security/isolation features can be used in plain systemd services (see the sketch after this list). Run systemd-analyze security UNIT.
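
    For example, hardening an existing service on NixOS looks roughly like this (the unit name is made up; the keys are plain [Service] directives you could equally drop into an override on any distro, and which of them a given service tolerates varies):

      systemd.services.myservice.serviceConfig = {
        DynamicUser = true;          # run as a throwaway unprivileged user
        ProtectSystem = "strict";    # mount almost the whole filesystem read-only
        ProtectHome = true;
        PrivateTmp = true;
        PrivateDevices = true;
        NoNewPrivileges = true;
        CapabilityBoundingSet = "";  # drop all capabilities
      };

    systemd-analyze security myservice should then report a noticeably better exposure score.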

    I could probably list more. Unless you really need to do something like dynamically spin up services with something like Kubernetes, which is probably way beyond what you need if you’re hosting a few services, I don’t think it’s something you need.

    If I can recommend something instead, if you’re open to looking at something new, it would be NixOS. I originally got into it because of the declarative system configuration, but it does everything people here would usually use Docker for and more. I’ve seen it described as “docker + ansible on steroids”, but it uses a more typical central package repository, so you do get security updates for everything you have installed, and your entire system as a whole is reproducible from a set of config files (you can still build Nix packages from the 2013 version of the repository I think, though they won’t necessarily run on modern kernels because of kernel ABI changes since then). However, be warned, you need to learn the Nix language and NixOS configuration, which has quite a learning curve tbh. But on the other hand, setting up a lot of services is as easy as adding one line to the configuration to enable the service, as shown below.
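
    To make that last point concrete, the “one line” really is just something like this (service picked arbitrarily):

      services.jellyfin.enable = true;

    The package, the user, the systemd unit and so on all come from that, and go away again (minus the data) if you remove the line.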


  • > I’m eying servercheap.com and it says in description “1 IPv4”, but then it offers “Add’l Ipv4 Addresses” for 9$. I’m bit lost here and I’m not even sure do I need IPv4 address. Maybe I can run duckdns or ddclient to avoid additional cost?

    You should have an IPv4 address unless you’re sure everyone who needs to access it has working IPv6, or you don’t mind setting up 6to4/6in4 at the locations that don’t (or complaining to ISPs until they fix it). The one included address should be fine.





  • 2xsaiko@discuss.tchncs.de to Selfhosted@lemmy.world · Routers · 11 months ago

    I have a Turris Omnia (https://turris.com). It comes with their custom OpenWrt out of the box, so it can do everything that can, with some extra features. The hardware is pretty good: two wifi cards, one of which can do 802.11ax, 6 Gbit Ethernet ports, 1 SFP port, 2 GB RAM, 8 GB eMMC flash, and support for adding a PCIe SSD. You can also pretty easily install your own OS on it if you want to; personally I have it booting off a PCIe SSD with NixOS on it.