• 0 Posts
  • 72 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • I think a VPS and moving to self-hosted NetBird would be the simplest solution for you. $5 per month gives you a range of options, and you can go even lower with things like yearly subscriptions. That way you get around the subdomain issue, you get a proper tunnel, and you can proxy whatever traffic you want into your home.

    As for the control scheme for your home automation, you’ll need to come up with something that fits you, but I strongly advise against letting users into Home Assistant itself. You could build a simple web interface that talks to HA via its API (see the sketch just below); Node-RED is a super simple way to do it if building against the API directly seems daunting.
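
    A minimal sketch of what that API interaction could look like, in Python with the requests library. The URL, token and entity ID below are placeholders; the endpoint itself is Home Assistant’s documented REST API (you create the long-lived access token under your HA user profile):

    ```python
    import requests

    # Placeholders - swap in your HA host, a long-lived access token
    # and a real entity from your setup.
    HA_URL = "http://homeassistant.local:8123"
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

    def turn_on_light(entity_id: str) -> None:
        """Ask Home Assistant's REST API to turn on a light entity."""
        resp = requests.post(
            f"{HA_URL}/api/services/light/turn_on",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"entity_id": entity_id},
            timeout=10,
        )
        resp.raise_for_status()

    turn_on_light("light.living_room")
    ```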

    If an RPi 4 is what you’ve got and that’s it, then I guess you’re kinda stuck for the time being. Home Assistant is quite lightweight if you’re not doing anything crazy, so it runs well on even an RPi 3; the same goes for NAS software for home use, which also works fine on a 3. If SBCs are your style, my recommendation is to set up an alert on whatever second-hand sites operate in your area and pick up a cheap one, to let you separate things and make the setup simpler.


  • That’s one part of it, but the other is that there’s no proper way to ensure you won’t cause issues down the line, and it makes the configuration unclean and harder to maintain.

    It also makes your setup dependent on seemingly unrelated things, like the certificate for the domain, which is a completely different application’s problem but will break your Home Assistant setup all the same. That dependency issue can be a nightmare to troubleshoot in some instances, especially when it comes to stuff like authentication. Try doing SSO towards two different applications running on different subpaths of the same domain…
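
    To make that concrete: most web frameworks scope their session cookie to Path=/ by default, so two apps on /app1 and /app2 of the same domain will clobber each other’s sessions unless you remember to rename and re-scope the cookie on both. A minimal Flask sketch of that workaround (the cookie name and path are hypothetical):

    ```python
    from flask import Flask

    app = Flask(__name__)

    # Without these overrides the session cookie is named "session" and
    # scoped to Path=/, so a second app on the same domain would
    # overwrite it on every login.
    app.config.update(
        SESSION_COOKIE_NAME="app1_session",  # hypothetical name
        SESSION_COOKIE_PATH="/app1",         # scope to this subpath only
    )
    ```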


  • ninjan@lemmy.mildgrim.com to Selfhosted@lemmy.world · I love Home Assistant, but... · 8 months ago

    I can’t quite grasp your use case, I feel; pretty much all your complaints seem… odd. To me at least.

    First, subdomains. I think HA is completely right that proxying on a subpath is basically an anti-pattern; it just makes things worse for you and is almost always a bad idea (with very few exceptions).

    As for your tunnel, I don’t know how you’ve set it up and I haven’t used Tailscale, but them only allowing one domain sounds like a very arbitrary limit. Is it something that costs money to add? I use NetBird, which I self-host on my VPS and from there tunnel into my much beefier home setup.

    Then Docker in HAOS. The proper way, I feel, of running HA is for sure HAOS, and also running it in its own VM or on dedicated hardware. This is because you will likely need to attach additional hardware, like a stick providing support for more protocols such as Zigbee or Matter. It really isn’t a good solution for running all your self-hosted stuff, and was never intended to be. Running Plex in HA, for instance, is just a plain bad idea, even if it can be done. As such the need for an external drive seems strange as well; if you need to interact with storage you should set up a NAS and share it over SMB/Samba. All this to say: HA should be one VM/device, and your Docker environment another VM.

    As for authentication, there are 10k-plus contributors to Home Assistant yearly, but very few bother to make authentication more streamlined. I would’ve loved native OpenID/OAuth2 support, but there are ways to add it with custom components, and in the end I quite strongly feel that if the end users of your smart-home setup (i.e. the wife and kids) need to log in to Home Assistant, then you’ve probably got more work to do. Remote controls that interact with HA handle the vast majority of manual interaction, and I’ve dabbled with self-hosted voice interfaces for the more complex operations.

    Sorry if this came across as a rap on the nose; that’s not my intention. I just suspect you’re making things harder for yourself and maybe have a strange idea of how to self-host in general?


  • Well, as someone also self-hosting email, I agree with his solutions, but the picture he paints of how bad it is feels a bit exaggerated to me. But then again I host for myself and my family; I suspect it gets a bit different when you have many users and send hundreds of mails per day.

    The only one I’ve had trouble with is Microsoft; they’re the strictest and you need to get some support from them to make it work reliably. Google has an automated service.
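
    For anyone attempting it: the baseline every big provider checks is SPF, DKIM and DMARC records on your sending domain. A quick sketch using the dnspython library to verify they resolve; the domain and the DKIM selector are placeholders (the selector is whatever your mail server was configured with):

    ```python
    import dns.resolver  # pip install dnspython

    DOMAIN = "example.com"  # your sending domain (placeholder)

    def txt_records(name: str) -> list[str]:
        try:
            return [r.to_text() for r in dns.resolver.resolve(name, "TXT")]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []

    print("SPF:  ", [r for r in txt_records(DOMAIN) if "v=spf1" in r])
    print("DMARC:", txt_records(f"_dmarc.{DOMAIN}"))
    # DKIM lives at <selector>._domainkey.<domain>; "default" is a guess.
    print("DKIM: ", txt_records(f"default._domainkey.{DOMAIN}"))
    ```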




  • ninjan@lemmy.mildgrim.com to Selfhosted@lemmy.world · multimedia manager by series · 8 months ago

    Doesn’t sound like its own “product”, to be honest. I’d probably look at an alternative presentation layer that can present what’s in Jellyfin and also act as the presentation layer for the top solutions for books, comics etc. If nothing like that exists, I think there are people who would be interested in a unified media presenter. It doesn’t even need to actually play the media, just link to it.
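
    For a sense of how feasible the “just link to it” part is: Jellyfin exposes its library over a REST API. A rough sketch of pulling names and deep links; the server URL and API key are placeholders, and the endpoint, token header and link format are my understanding of the Jellyfin API, so verify against your server’s swagger docs:

    ```python
    import requests

    JELLYFIN_URL = "http://jellyfin.local:8096"  # placeholder
    API_KEY = "YOUR_API_KEY"  # created in the Jellyfin admin dashboard

    resp = requests.get(
        f"{JELLYFIN_URL}/Items",
        headers={"X-Emby-Token": API_KEY},
        params={"Recursive": "true", "IncludeItemTypes": "Movie"},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json().get("Items", []):
        # A unified presenter would only need the name plus a deep link.
        print(item["Name"], f"{JELLYFIN_URL}/web/#/details?id={item['Id']}")
    ```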





  • If you can fool the Internet into thinking that traffic coming from the VPS has the source IP of your home machine, what stops you from assuming any other IP to bypass an IP whitelist?

    Also, if you expect return communication, that would have to reach your VPS even though it is presenting the IP of your home machine; replies are routed to whoever really holds that IP. If you could pull that off, the technique would be very powerful for man-in-the-middle attacks, i.e. intercepting traffic intended for someone else and manipulating it without leaving a trace.

    IP, by virtue of how the protocol works, needs to be a unique identifier for a machine. There are techniques, like CGNAT, that allow multiple machines to share an IP, but really it works (in simplified terms) like a proxy, and thus breaks the direct connection and limits you to specific ports. It’s also layered on top of the IP protocol and requires specific support, and either way it’s the endpoint, in your case the VPS, that will be the presenting IP.
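
    To illustrate the asymmetry: with raw packets (scapy here, needs root) anyone can write any source IP into a packet, but the reply is routed to whoever actually holds that IP, so no handshake ever completes. A sketch for illustration only, using documentation-range placeholder addresses (many ISPs also filter spoofed egress outright):

    ```python
    from scapy.all import IP, TCP, send  # pip install scapy; run as root

    # Craft a SYN with a spoofed source address. Sending it is easy...
    spoofed = IP(src="203.0.113.10", dst="198.51.100.20") / TCP(dport=443, flags="S")
    send(spoofed)

    # ...but the SYN-ACK is routed to 203.0.113.10, not to us, so the
    # handshake never completes and no data can flow. That asymmetry is
    # why "preserving" a source IP through a VPS doesn't just work.
    ```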


  • Preserve the source IP, you say. Why?

    The thing is that if you could do so (without circumventing the standards), that would imply that IP isn’t actually a unique identifier, which it needs to be. It would also mean circumventing whitelists/blacklists would be trivial (it’s not hard by any means, but it has some specific requirements).

    The correct way to do this, even if there might be some hack you could use to get the actual source IP through, is to put the source in an ‘X-Forwarded-For’ header (see the sketch below).
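
    On the receiving end that header is trivial to consume. A minimal Flask sketch; just remember the header is client-settable, so only trust it when the request really came through your own proxy:

    ```python
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def whoami():
        # The proxy (your VPS) appends the real client IP to
        # X-Forwarded-For; request.remote_addr is the proxy itself.
        forwarded = request.headers.get("X-Forwarded-For", "")
        client_ip = forwarded.split(",")[0].strip() or request.remote_addr
        return f"proxy={request.remote_addr} client={client_ip}\n"
    ```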

    As for ready-made solutions, I use NetBird, which has open-source clients for Windows, Linux and Android that I use without issues; it’s perfectly self-hostable and easy to integrate with your own IdP.


  • No, the scenario a VM protects against is that the T110’s motherboard/CPU/PSU/etc. craps out, and instead of having to restore from off-site I can move the drives into another enclosure, map them the same way to the VM, and start it up. Instead of having to wait for new hardware I can have the fileserver up and running again in 30 minutes, and it’s just as easy to move it onto the new server once I’ve sourced one.

    And in this scenario we’re only running the fileserver on the T110, but we still virtualized it with Proxmox, because then we can easily move it to new hardware without having to rebuild or migrate anything. As long as we don’t fuck up the drive order or anything like that we’re fine; if we do, we’re royally fucked.
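
    The way to not fuck up the drive order is to map the drives by stable identifier instead of /dev/sdX name. A small Linux-only sketch for recording those IDs before a move; the names printed are whatever your drives actually enumerate as:

    ```python
    import os

    # /dev/disk/by-id names are built from model + serial number, so they
    # survive a move to new hardware, unlike /dev/sdX names, which depend
    # on detection order.
    BY_ID = "/dev/disk/by-id"

    for name in sorted(os.listdir(BY_ID)):
        target = os.path.realpath(os.path.join(BY_ID, name))
        print(f"{name} -> {target}")
    ```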


  • Yes, but in the post they also stated what they were working with in terms of hardware. I really dislike giving “buy more stuff” advice, because not everyone can afford it; selfhosting often comes from a frugal place.

    Still, you’re absolutely not wrong, and I see value in both our opinions being featured here; this discussion we’re having is a good thing.

    Circling back to the VM thing though: even with dedicated hardware, if I were using an old server for a NAS I still would have virtualized it with Proxmox, if for no other reason than that it gives me mobility and an easier path to restoration if hardware like the motherboard breaks.

    Still, your advice to buy a used server is good and absolutely what the OP should do if they want a proper setup and have the funds.


  • Sure, I’m not saying it’s optimal; optimal will always be dedicated hardware and redundancy in every layer. But my point is that you gain very little for quite the investment by breaking out the fileserver to dedicated hardware. It’s not just CPU and RAM you need, it’s also SATA headers and an enclosure. Most people selfhosting have one or more SBCs, and if you have more than one SBC then yeah, the fileserver should be dedicated. The other common setup is an old gaming/office PC converted to server use, and in that case putting Proxmox on the whole server and running the NAS as a VM makes the most sense, instead of buying more hardware for very little gain.


  • There are absolutely no issues whatsoever with passing hardware directly through to a VM. And virtualizing is good because we don’t want to “waste” a whole machine on just a file server. Sure, dedicated NAS hardware has some upsides in terms of ease of use, but you also pay an, imo, ridiculous premium for that ease. I run my OMV NAS as a VM on 2 cores and 8 GB of RAM (with four hard drives), but you can make do perfectly fine on 1 core and 2 GB of RAM if you want, as long as you don’t have too many devices attached or run too many IOPS-intensive tasks.


  • Well, the good part there is that you can build everything for internal use and then add external access and security later. While VLAN segmentation and an overall secure / zero-trust architecture is of course great, it’s overkill for a selfhosted environment unless there’s an additional purpose, like learning for work, or you find it fun. The important thing really is the shell protection: that nothing gets in. All the other stuff is there to limit potential damage if someone does get in (and in the corporate world it’s not “if”, it’s “when”, because with hundreds of users you always have people being sloppy with their passwords, MFA, devices etc.). That’s where secure architecture is important, not in the homelab.


  • My best advice is to take advantage of the fact that your old setup hasn’t died yet. I.e. start now and set up Proxmox, because it’s vastly superior to TrueNAS for the more general type of hardware you have, and then run a more focused NAS project like Openmediavault in a Proxmox VM.

    My recommendation, from experience, would be to set up a VM for anything touching hardware directly, like a NAS or Jellyfin (if you want GPU-assisted transcoding), and I personally find it smoothest to run all my Docker containers from one dedicated Docker VM. LXCs are popular with some, but I strongly dislike how you set hardware allocations for them, and running all Docker containers in one LXC is just worse than doing it in a VM. My future approach will be to move to a more dedicated container setup, as opposed to VM-focused Proxmox, but that’s another topic.

    I also strongly recommend using Portainer or similar to get a good overview of your containers and centralize configuration management (see the sketch below).
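
    As an illustration, assuming Portainer CE specifically: this is roughly its documented docker run install expressed through the Docker SDK for Python (image, port and volumes are from Portainer’s install docs):

    ```python
    import docker  # pip install docker

    client = docker.from_env()

    # Persistent data in a named volume; the Docker socket is mounted so
    # Portainer can manage the host's containers. UI ends up on :9443.
    client.containers.run(
        "portainer/portainer-ce:latest",
        name="portainer",
        detach=True,
        restart_policy={"Name": "always"},
        ports={"9443/tcp": 9443},
        volumes={
            "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
            "portainer_data": {"bind": "/data", "mode": "rw"},
        },
    )
    ```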

    As for external access, all I can say is: do be careful. Direct internet exposure is likely a really bad idea unless you know what you’re doing and trust the project you expose. Hiding access behind a VPN is fairly easy if your router has a VPN server built in, and WireGuard-based solutions (which NetBird, Tailscale etc. use), or something like Cloudflare Tunnel, are great if it doesn’t.

    As for authentication, it’s pretty tricky but well worth it, and imo needed if you want to expose stuff to friends/family. I recommend Authentik over the other alternatives.



  • A lot of stuff runs great on SBCs; it’s just that they’re not as smooth to manage as a Proxmox server running containers or VMs. You also need several SBCs to reach the scale of what many here on selfhosted run, and once you hit 4+ SBCs the old x86 server starts looking cost-effective all of a sudden. The biggest benefits though are no noise and very low power consumption, which is great for stuff that will be powered on 24/7/365.

    Really a mix is ideal, so you get the cheap running costs of SBCs plus the power and versatility of x86 for the tasks that require it.
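
    Back-of-envelope numbers for that running-cost point, with assumed (hypothetical) wattages and electricity price:

    ```python
    # Assumed figures - adjust for your own hardware and tariff.
    SBC_WATTS = 5         # e.g. a Raspberry Pi class board at light load
    X86_WATTS = 60        # e.g. an older x86 tower at light load
    PRICE_PER_KWH = 0.30  # local currency per kWh

    HOURS_PER_YEAR = 24 * 365

    for label, watts in [("SBC", SBC_WATTS), ("x86", X86_WATTS)]:
        kwh = watts * HOURS_PER_YEAR / 1000
        print(f"{label}: {kwh:.0f} kWh/year, about {kwh * PRICE_PER_KWH:.0f} per year")
    ```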