• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • You can try installing Avahi on the RPi (it may come preinstalled on the default image). It will advertise the Pi’s .local hostname over mDNS/DNS-SD. I believe Avahi will still advertise on a link-local address even if there is no default route to the internet.

    Your system may automatically resolve that hostname if it’s able to pick up the mDNS records, letting you SSH in. It’s been a couple of years since I’ve done it, so I could be forgetting a nuanced detail, but I vaguely remember it being ‘plug and play’ as long as the RPi didn’t need internet access.
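    A rough sketch of what that looks like, assuming the Pi still has the default raspberrypi hostname and pi user (both placeholders, adjust to your image):

    ```
    # On the Pi (Raspberry Pi OS usually ships avahi-daemon already):
    sudo apt install avahi-daemon
    sudo systemctl enable --now avahi-daemon

    # From another machine on the same LAN, the name should resolve via mDNS
    # without touching your router or DNS config:
    ping raspberrypi.local
    ssh pi@raspberrypi.local
    ```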


  • My NAS is an mATX mobo with an i5, 64 GB RAM, 8 disk drives, 3 NVMe drives, and an Intel Arc GPU for video transcoding.

    Disk drives are all mirrored. One NVMe runs NixOS, which is easy enough to redeploy if the drive dies. One NVMe is a cache on top of the disk drives. The last NVMe I use for temporary fast storage like Jellyfin transcoding.

    It’s more of a combo NAS/server, as I run most of my self-hosted apps on it (Tor node, Monero node, Jellyfin, *arr stack, etc.).
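    For illustration only, a mirrored-pool-plus-cache layout along those lines could look something like this. ZFS, the pool name, and the device names are all assumptions here, not my actual config (mine is declared in NixOS):

    ```
    # Hypothetical sketch: mirrored HDD pairs in one pool...
    zpool create tank \
      mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 \
      mirror /dev/disk/by-id/ata-HDD3 /dev/disk/by-id/ata-HDD4

    # ...one NVMe as a read cache in front of the mirrors...
    zpool add tank cache /dev/disk/by-id/nvme-CACHE1

    # ...and the scratch NVMe as a plain filesystem for transcode temp files.
    mkfs.ext4 /dev/disk/by-id/nvme-SCRATCH1
    ```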




  • Jellyfin recommends not using SBCs. I was in the same boat as you a month ago. I started on an RPi. It works fine for raw playback (no transcoding), but performance is poor if you do any scrubbing or try to watch something while new content is processing. I got a mini PC; it was better, but it’s basically a laptop chipset, so still not the best experience. I had other things I wanted to do with my self-hosted setup, so I decided to bite the bullet and make a proper build: 12th gen i5, Intel Arc GPU, 4+8 SATA ports with a PCIe card, 3x NVMe, and a 10-bay HDD/SSD case. Can’t speak to the performance yet. I’m learning Ansible to automate managing it, including installing the OS.

    I would stay away from NAS appliances like QNAP or Synology. They tend to not be much better than an SBC.

    Given the budget constraints, I would just echo getting the cheapest desktop-class PC you can get your hands on in a suitable form factor.

    https://jellyfin.org/docs/general/administration/hardware-acceleration/#hardware-acceleration-on-docker-linux

    While hardware acceleration is supported on Raspberry Pi hardware, it is recommended that Jellyfin NOT be hosted on Raspberry Pis or other SBCs. Many hardware acceleration features are not supported and will fallback to software. In addition, they are generally too slow to provide a good experience when transcoding is needed. Please consider getting a more powerful system to host Jellyfin.
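    For reference, the core of what that page describes for Docker is just passing the GPU’s render device through to the container. A minimal sketch (the image is the official jellyfin/jellyfin one; the host paths are placeholders, and this isn’t my exact deployment):

    ```
    # Pass the Intel GPU render node (/dev/dri) into the container so
    # VAAPI/QSV hardware transcoding is possible; see the linked docs
    # for driver packages and the remaining Jellyfin-side settings.
    docker run -d \
      --name jellyfin \
      --device /dev/dri:/dev/dri \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media \
      -p 8096:8096 \
      jellyfin/jellyfin
    ```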




  • Not sure what your environment is. I can tell you what I do on Linux/Android.

    I use Backblaze B2 for my cloud storage.

    I use rclone to create two encrypted “remotes”: one on my local file system and one on B2. rclone supports a bunch of cloud providers, so you don’t have to use B2.
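    Roughly, that setup looks like the following. I actually went through the interactive rclone config wizard; the remote names, bucket, paths, and passphrase here are placeholders:

    ```
    # Plain B2 remote using a key ID / application key from Backblaze:
    rclone config create b2raw b2 account=KEY_ID key=APP_KEY

    # Crypt remote that encrypts everything it stores under a B2 bucket path:
    rclone config create b2crypt crypt remote=b2raw:my-bucket/enc password=PASSPHRASE --obscure

    # Crypt remote that encrypts into a local directory:
    rclone config create localcrypt crypt remote=/home/me/vault/enc password=PASSPHRASE --obscure
    ```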

    I mount the encrypted local file system and use whatever app (e.g., paperless) to access the files as if it were any other directory.

    When I’m done, I unmount it and sync it with the B2 encrypted remote.
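    In shell terms, the day-to-day loop is roughly this (using the placeholder remote names from above):

    ```
    # Mount the local crypt remote so apps see plain files:
    mkdir -p ~/vault/plain
    rclone mount localcrypt: ~/vault/plain --vfs-cache-mode writes --daemon

    # ...point paperless (or whatever) at ~/vault/plain...

    # When done, unmount and push the changes to the encrypted B2 remote:
    fusermount -u ~/vault/plain
    rclone sync localcrypt: b2crypt: -P
    ```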

    I use Round Sync on Android, which is rclone with a mobile GUI, to access the same files. It also works great for backing up my phone.

    For Docker access to the mount point, either run the Docker daemon as your current user, enable root access to rclone’s FUSE mounts, or (my preferred option) remount, with root access, a scoped directory for that Docker container using something like bindfs.
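    The bindfs route looks something like this (the paths and uid/gid are made up, and the rclone mount itself needs --allow-other for root to be able to read through it):

    ```
    # Re-expose only the subdirectory this container needs, owned by the
    # uid/gid the container runs as (1000:1000 is just an example):
    sudo mkdir -p /srv/paperless-data
    sudo bindfs -u 1000 -g 1000 /home/me/vault/plain/paperless /srv/paperless-data

    # Then bind-mount /srv/paperless-data into the container as usual.
    ```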

    Just be aware that if you use the VFS cache (needed for seek or append), the cache is stored decrypted under your home folder by default. I’ve been meaning to look into locking it down with AppArmor or something.
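    At minimum you can point the cache somewhere deliberate with rclone’s --cache-dir flag (the path here is just an example):

    ```
    # Keep the decrypted VFS cache in a directory you control and can lock down:
    rclone mount localcrypt: ~/vault/plain \
      --vfs-cache-mode writes \
      --cache-dir /srv/rclone-cache \
      --daemon
    ```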


  • Round Sync, with whatever remote (backend provider) you want that’s supported by the underlying rclone library. There are self-hosted remote options like SFTP or FTP. If you want off-site storage and privacy, create a crypt remote pointed at your off-site backend remote; it acts as a wrapper that does end-to-end encryption.

    Round Sync isn’t in any app store, so I’d set a watcher for releases on GitHub. You can set up scheduled backups or restores in Round Sync to keep things moving between your phone and backend remote as a relatively set-it-and-forget-it solution for one-way syncing.

    It has some nuances with bidirectional syncing that can result in data loss if you’re not careful. With your workflow, I’d recommend something like a daily copy job from your phone to the server; copy never deletes files. When you curate your photos via a tool on the backend remote (only do this when all the photos on your phone are already on the backup server), you can then do a manual sync from the backend remote to your phone to make the two match exactly.

    Copy just copies files, or updates them to newer versions, from target 1 to target 2.

    Sync does the same but also deletes any files on target 2 that are not on target 1. It’s very easy to lose files if you’re not careful planning out your workflow. Test first.
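    Concretely, the difference looks like this, and rclone’s --dry-run flag is the easiest way to test either before letting it loose (remote names are placeholders):

    ```
    # Copy: add or update files on target 2, never delete anything there.
    rclone copy remote1:photos remote2:photos --dry-run

    # Sync: make target 2 exactly match target 1, deleting extras on target 2.
    rclone sync remote1:photos remote2:photos --dry-run
    ```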

    I use Backblaze B2 for my off-site remote, which with the default settings only hides files rather than deleting them. You can manage those with the rclone CLI on a desktop to later clean up hidden files, or set them to be deleted after X days via the B2 lifecycle settings on the Backblaze website.

    It’s no Google Photos or Dropbox, but it works well enough for me without giving up privacy. It also decouples syncing from curating photos, giving some additional freedom for a custom workflow.

    I personally just have a daily copy job on my phone from the phone to a crypt B2 remote, and a cron job on my self-hosted server to copy from B2 to the server. Once a year I might clean things up on my server, do a manual sync back to B2, then another to my phone. Sometime later I’ll go clean up the hidden (deleted) files in B2.
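    The server side of that is just a scheduled rclone copy; a sketch as a cron entry (user, remote name, paths, and log file are placeholders):

    ```
    # /etc/cron.d/photo-pull
    # Pull new photos from the crypt B2 remote to local storage nightly at 03:00.
    0 3 * * * me rclone copy b2crypt:photos /srv/photos --log-file /var/log/rclone-photos.log
    ```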

    That said, I care more about backups than bidirectional syncing, so your mileage may vary with this solution for your use case.