Aussie living in the San Francisco Bay Area.
Coding since 1998.
.NET Foundation member. C# fan
https://d.sb/
Mastodon: @dan@d.sb

  • 5 Posts
  • 249 Comments
Joined 1 year ago
Cake day: June 14th, 2023




  • Energy consumption is essentially the same, as it’s using the same radios.

    For what it’s worth, I have several SSIDs, each on a separate VLAN:

    • My main one
    • Guest: has internet access but is otherwise isolated. Guest devices can't communicate with other guest devices or with any other VLANs.
    • IoT Internet: IoT and home automation devices that need internet access, like the Ecobee thermostat, Google speakers, etc.
    • IoT No Internet: home automation gear that doesn't need internet access: security cameras, Zigbee PoE coordinator (SLZB-06), garage door opener, ESPHome devices, etc.

    (to remotely access home automation stuff, I use Home Assistant via a Tailscale VPN)

    Most of these have both 2.4 GHz and 5 GHz enabled, with band steering to (hopefully) convince devices to use 5 GHz when possible.

    This is on a TP-Link Omada setup with 2 x EAP670 ceiling-mounted access points. I think you can create up to 16 SSIDs.







  • At home - Networking

    • 10Gbps internet via Sonic, a local ISP in the San Francisco Bay Area. It’s only $40/month.
    • TP-Link Omada ER8411 10Gbps router
    • MikroTik CRS312-4C+8XG-RM 12-port 10Gbps switch
    • 2 x TP-Link Omada EAP670 access points with 2.5Gbps PoE injectors
    • TP-Link TL-SG1218MPE 16-port 1Gbps PoE switch for security cameras (3 x Dahua outdoor cams and 2 x Amcrest indoor cams). All cameras are on a separate VLAN that has no internet access.
    • SLZB-06 PoE Zigbee coordinator for home automation - all my light switches are Inovelli Blue Zigbee smart switches, plus I have a bunch of smart plugs. Aqara temperature sensors, buttons, door/window sensors, etc.

    Home server:

    • Intel Core i5-13500
    • Asus PRO WS W680M-ACE SE mATX motherboard
    • 64GB server DDR5 ECC RAM
    • 2 x 2TB Solidigm P44 Pro NVMe SSDs in ZFS mirror
    • 2 x 20TB Seagate Exos X20 in ZFS mirror for data storage
    • 14TB WD Purple Pro for security camera footage. Alerts SFTP’d to offsite server for secondary storage
    • Running Unraid, a bunch of Docker containers, a Windows Server 2022 VM for Blue Iris, and an LXC container for a Borg backup server.

    For things that need 100% reliability, like email, web hosting, and DNS hosting, I have a few VPSes “in the cloud”. The one for my email is an AMD EPYC system with 16GB RAM, 100GB of NVMe space, and a 10Gbps connection for $60/year at GreenCloudVPS in San Jose, and I have similar ones at HostHatch (but with 40Gbps instead of 10Gbps) in Los Angeles.

    I’ve got a bunch of other VPSes, mostly for https://dnstools.ws/, which is an open-source project I run. It lets you perform DNS lookups, pings, traceroutes, etc. from nearly 30 locations around the world. Many of those servers are sponsored, meaning the company provides them for cheap or free in exchange for a backlink.
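    The core of a DNS lookup like the ones dnstools.ws performs can be sketched with Python's standard library (this is just an illustration, not the project's actual code, which is open source):

```python
# Minimal DNS lookup sketch, similar in spirit to what each dnstools.ws
# location does. Standard library only; not the project's real implementation.
import socket

def lookup(hostname: str) -> list[str]:
    """Return the unique IP addresses that `hostname` resolves to."""
    results = socket.getaddrinfo(hostname, None)
    # Each result is (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({addr[4][0] for addr in results})

if __name__ == "__main__":
    print(lookup("localhost"))
```

    Running the same lookup from many vantage points is what makes the service useful for debugging things like geo-DNS and propagation.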

    This Lemmy server is on another GreenCloudVPS system - their ninth birthday special which has 9GB RAM and 99GB NVMe disk space for $99 every three years ($33/year).



  • I’d recommend building your own server rather than buying an off-the-shelf NAS. The NAS will have limited upgrade options - usually, if you want to make it more powerful in the future, you’ll have to buy a new one. If you build your own, you can freely upgrade it in the future - add more memory (RAM), make it faster by replacing the CPU with a better one, etc.

    If you want a small one, the Asus Prime AP201 is a pretty nice (and affordable!) case.


  • dan@upvote.auto · Selfhosted@lemmy.world · Recommendation for NAS (edited 9 months ago)

    Modern clients support most of the modern codecs, so codec support isn’t as bad as in the old days when we had to use sketchy codec packs.

    I mentioned the location because the primary reason to transcode is that you don’t have enough bandwidth to stream the original file. That’s not an issue over a LAN.


  • I personally prefer Docker over LXC since the containers are essentially immutable. You can completely delete and recreate a container without causing issues. All your data is stored outside the container in a Docker volume, so deleting the container doesn’t delete your data. Your docker-compose file describes the exact state of the containers (as long as you pin version numbers rather than using tags like latest).
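    For example, a compose file with a pinned image version and a named volume might look like this (service and volume names here are hypothetical):

```yaml
services:
  db:
    image: postgres:16.2          # pinned version, not "latest"
    volumes:
      - dbdata:/var/lib/postgresql/data   # data lives in the volume, not the container

volumes:
  dbdata:
```

    With this setup, removing and recreating the container leaves the dbdata volume, and the data in it, untouched.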

    Good Docker images are “distroless”, meaning they contain only the app and the bare minimum dependencies it needs to run, without any extraneous OS packages. LXC containers aren’t as light since, as far as I know, they always contain a full OS.



  • I like Unraid… It has a UI for VMs and LXC containers like Proxmox, but it also has a pretty good Docker UI. I’ve got most things running on Docker on my home server, but I’ve also got one VM (Windows Server 2022 for Blue Iris) and two LXC containers. (LXC support is a plugin; it doesn’t come out-of-the-box)

    Docker with Proxmox is a bit weird, since Proxmox doesn’t natively support Docker and you have to run it inside an LXC container or VM.



  • What’s your actual end goal? What are you trying to protect against? Do you only want certain systems on your network to be able to access your apps? There’s not much point in a firewall if you’re just going to open the ports to the whole network.

    If you want it to be more secure then I’d close all the ports except for 22 (SSH) and 443 (HTTPS), stick a reverse proxy in front of everything (like Nginx, Caddy, Traefik, etc.), and use Authentik for authentication, with two-factor authentication enabled. Get a TLS certificate using Let’s Encrypt and a DNS challenge. You have to use a real domain name for your server, but the server does not have to be publicly accessible - Let’s Encrypt works for local servers too.

    The LinuxServer project has a Docker image called “SWAG” that bundles Nginx with reverse proxy configs for many common apps; it might be a decent way to go. The reverse proxy should be on the same Docker network as the other containers, so that it can reach them directly even though you won’t be exposing their ports any more.
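    As a rough illustration, a single reverse-proxied app in Nginx looks something like this (hostnames, container name, and certificate paths are all hypothetical):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # Certificate obtained from Let's Encrypt via a DNS challenge
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # "myapp" is the container name, reachable because the proxy
        # shares a Docker network with it; no published port needed.
        proxy_pass http://myapp:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

    SWAG ships pre-made configs in roughly this shape for a long list of self-hosted apps.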

    Authentik will give you access controls (e.g. to only allow particular users to access particular apps), access logs for whenever someone logs in to an app, and two-factor auth for everything. It uses OIDC/OAuth2 or SAML, or its own reverse proxy for apps that don’t support proper auth.


  • dan@upvote.auto · Selfhosted@lemmy.world · Second hand disks? (edited 9 months ago)

    If you keep an eye out for sales, you can get new drives for not much more than used. I got two Seagate Exos X20 20TB drives for around US$240 each on sale. One from Newegg and one from ServerPartDeals.

    Regardless of whether you buy new or used, buy the drives from multiple suppliers so they’re likely to come from different batches. You don’t want an array where every drive came from the same batch, since that increases risk: if there was a manufacturing issue with that batch, it’s possible all the drives will fail in the same way.


  • I used to use mdadm, but ZFS mirrors (equivalent to RAID1) are quite nice. ZFS automatically stores checksums. If some data is corrupted on one drive (meaning the checksum doesn’t match), it automatically fixes it for you by getting the data off the mirror drive and overwriting the corrupted data. The read will only fail if the data is corrupted on both drives. This helps with bitrot.

    ZFS also has raidz1 and raidz2, which use one or two disks for parity and have the same self-healing advantages. I’ve only got two 20TB drives in my NAS though, so a mirror is fine.
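    For reference, creating a two-disk mirror and checking it for corruption looks roughly like this (pool and device names here are just examples):

```sh
# Create a mirrored pool from two disks
zpool create tank mirror /dev/sda /dev/sdb

# Read all data and verify checksums; blocks that are corrupted on one
# disk are repaired automatically from the healthy copy on the other
zpool scrub tank

# Show pool health and any checksum errors found/repaired
zpool status tank
```

    Scheduling a scrub regularly (e.g. monthly) is what actually catches bitrot before both copies of a block can go bad.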