• 2 Posts
  • 44 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • This is standard, but often unwanted, behavior of docker.

    Docker creates a bunch of chain rules but, IIRC, doesn’t modify the actual incoming rules (at least it doesn’t for me); it just makes a chain rule for every internal Docker network so that all of the services can contact each other.

    Yes, it is a security risk, but if you don’t have all ports forwarded, someone would still have to breach your internal network first IIRC, so you would have many, many more problems than Docker.

    I think from the devs’ point of view (not that it is right or wrong), this is intended behavior, simply because if Docker didn’t do this, they would get 1,000 issues opened per day from people saying containers don’t work when they forgot to add a firewall rule for a new container.

    An option to disable this behavior would be 100x better than the current default, but what do I know lol
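
    For reference, Docker’s `/etc/docker/daemon.json` does expose a couple of relevant knobs; a minimal sketch (both keys are documented daemon options, but note that `"iptables": false` breaks container networking entirely unless you write your own rules):

```json
{
  "iptables": false,
  "ip": "127.0.0.1"
}
```

    `"ip"` changes the default bind address for published ports, so a plain `-p 8080:80` only listens on loopback unless you explicitly publish on a public address.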


  • I have. I use it for all of my home projects.

    Kanban, Gantt charts, milestones, idea collections, file uploading, retrospectives, time tracking, documentation, etc… all supported with the selfhosted version.

    These are the “premium” features:

    • Custom fields
    • Pomodoro timer
    • Whiteboard
    • Program plans (I really don’t understand what is different about this than goals + milestones + documentation + tasks)
    • Strategies (pretty much just collecting and categorizing goals it seems)

    https://i.imgur.com/T6bSIhK.png

    I hope they don’t remove features and make people pay for them. It has plenty of features to make it useful now, but if they start removing them, then I think I will have to find another solution.


  • The problem is that most self-hosters would be away or unavailable to do a graceful shutdown even if they had a UPS, unless they work fully from home with zero meetings. If they are sleeping or at work (>70% of the day for many or most), then a UPS is useless without graceful shutdown scripts.

    I just don’t worry about it and go through the 10-minute startup and verification process if anything happens. It’s easier to run an uptime monitor like Uptime Kuma and a log viewer like Dozzle for all of your services, available locally and remotely, and see if anything failed to come back up.
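
    If anyone wants the lazy version of that, a docker-compose sketch for both tools (images and ports are the upstream defaults; the host paths are made up):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    ports:
      - "3001:3001"
    volumes:
      - ./uptime-kuma-data:/app/data
    restart: unless-stopped   # also brings it back up after a power cut

  dozzle:
    image: amir20/dozzle:latest
    ports:
      - "8888:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
```

    `restart: unless-stopped` is also what brings everything back after the outage in the first place.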




  • Intel Arc GPU. Had to enable a few modules, reboot, debug, follow the Jellyfin docs for writing to some configs, reboot, and it didn’t work. Followed the error messages, which are pretty much useless, and got pointed to stuff that isn’t relevant. Finally someone on a forum had a good reply: they told me I had to download the entire Linux proprietary firmware repository, extract the i915 folder from it, plop it in my firmware folder, and reboot. Then everything loaded and hardware acceleration worked.
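
    For anyone hitting the same wall, the fix boiled down to roughly this (a sketch, not gospel: the firmware directory is /lib/firmware on most distros, and the initramfs command depends on yours):

```shell
# Grab the upstream linux-firmware tree and copy only the i915 blobs.
git clone --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
sudo cp -r linux-firmware/i915 /lib/firmware/
sudo update-initramfs -u    # Debian/Ubuntu; Fedora uses dracut -f instead
sudo reboot
# After the reboot, check that the firmware actually loaded:
sudo dmesg | grep -i 'i915.*firmware'
```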



    • Ryzen 2700X on a gigabyte B450i

    • Arc A380

    • 2 mirrored 4TB HDDs and 1 12TB HDD, LUKS encrypted and on 2 zpools (I have an “unsafe” mount path for data that lives on a single drive, like media)

    • removable flash drive with boot partition and main SSD keyfile

    • Z-Wave dongle

    That’s it.

    I can run everything I need to on it, and my home internet is still only 100/30 because I don’t live in a city, so 2.5-gigabit networking isn’t worth the cost. The A380 does all of the hardware transcoding I need at fairly low power. It isn’t as good as just getting a newer NUC, but it was cheaper and a fun project.

    Also doing a full renovation, so KNX will be connected for home assistant to control my lights and things and my smart home stuff will probably balloon.


  • For sure, but the point is that it isn’t integrated into Home Assistant.

    Many people want to do everything from Home Assistant. You can always have kludged-together solutions: I edit my configs with Vim and back up to my central backup location via an automation. However, this is doing things outside of Home Assistant, which many people find inconvenient.
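
    For what it’s worth, the backup half can live inside Home Assistant itself; a minimal automation sketch using the built-in backup service (the alias and time are made up):

```yaml
automation:
  - alias: "Nightly backup"        # hypothetical name
    trigger:
      - platform: time
        at: "03:00:00"
    action:
      - service: backup.create     # Home Assistant's built-in backup service
```

    That still leaves shipping the backup off-box to something outside Home Assistant, which is exactly the kind of kludge I mean.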





  • I use a similar setup, but use a USB drive for my boot partition that also holds the LUKS encryption keyfile for the main SSD. I find it much handier since my computer is not near my server: I can boot, then walk upstairs and it is ready, and remove the USB later.

    Then there is no way to brute-force the decryption or get a password out of me. Also, when the USB is removed and put in a safe place, there is no way to modify the boot partition or UEFI either.
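
    To sketch what that looks like in practice, the unlock can be wired up in /etc/crypttab with the keyfile path pointing at the mounted USB (the UUID and paths here are placeholders; find yours with blkid):

```
# <name>    <device>                                     <keyfile>            <options>
cryptroot   UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee    /boot/keys/root.key  luks
```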

    Then my data hard drives are encrypted with a password I don’t know myself, but which is in my password manager.

    The thing about being paranoid about this stuff is that I probably focused on the wrong threat. A smash-and-grab is completely protected against, but that is like a 0.1% chance anyway, and there’s maybe a 0.1% chance on top of that 0.1% chance that the thieves would be targeted enough to even try to decrypt it.

    Full disk encryption is really only useful at all for an unpowered system. Network hardening will probably take care of 99.99% of attack attempts, while encryption covers the remaining 0.01%.

    Even for a laptop stolen in public, it is still running, so someone who really wants to hack it can extract the keys or break into the running system. They probably wouldn’t even try to reboot and break the disk encryption…

    Too much info, but I guess I am just rambling about how dumb my approach probably is 😅






  • That is not feasible for many/most people.

    The average person’s upload speed makes general internet use while connected to a home VPN much worse. For example, my mobile network is at least 10x faster than my home upload speed if I am in a place with 5G. I’d much rather connect to my paid VPN provider, where the speed difference is barely noticeable.

    Not to mention that even if people are using a VPS, it might be very far away and severely impact speeds.


  • Everything you want is definitely possible for the budget.

    I used an old i5 laptop with 4GB of RAM for a year or two. If you need a lot of storage, an old HDD will usually be fine. A Raspberry Pi 4 or 5 would be slower but would still work; however, if Norway prices are anything like Belgium’s, an old i7 laptop sips power and will save money in electricity costs.

    A few tips:

    • Run Nextcloud All-in-One, or spend some time optimizing Nextcloud. It will help performance a lot

    • Unless you are a serious photographer, use Immich, 100%. Immich is a Google Photos replacement with a bunch of good user features, like accounts, good security, and sharing, that PhotoPrism just doesn’t have. PhotoPrism is really geared toward professional photographers.

    • Transmission + a WireGuard container for a VPN is the way to go …

    • Radarr/Sonarr/Lidarr and Prowlarr are good to use with Transmission
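
    The Transmission + WireGuard pairing usually looks something like this in docker-compose (linuxserver.io images; your VPN provider’s wg0.conf goes in ./wireguard, and the paths are made up):

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    volumes:
      - ./wireguard:/config
    ports:
      - "9091:9091"    # Transmission's web UI, published via the VPN container
    restart: unless-stopped

  transmission:
    image: lscr.io/linuxserver/transmission:latest
    network_mode: "service:wireguard"    # share the VPN container's network namespace
    volumes:
      - ./downloads:/downloads
    restart: unless-stopped
```

    With `network_mode: "service:wireguard"`, Transmission has no network stack of its own, which is why the web UI port is published on the wireguard service instead.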



  • I started my home server with a laptop. I ran Nextcloud, Paperless, Jellyfin + the *arr services, PhotoPrism, and a few others.

    Not having control over your network is the biggest hurdle, because you pretty much need a fixed IP address to access it.

    However, there are some services that broadcast your hostname to the local network (e.g. so you can log in with serveruser@myserver over SSH).

    You may be able to use that to access your containers from the network; just keep in mind that other users on the local network can also access your server.
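
    If the server runs Linux, the usual way to get that hostname broadcast is Avahi/mDNS; a sketch for Debian/Ubuntu (the hostname resolves as whatever your machine’s name is set to):

```shell
sudo apt install avahi-daemon
sudo systemctl enable --now avahi-daemon
# From another machine on the same LAN:
ssh serveruser@myserver.local
```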