Home Assistant
I think you basically need to do this. I see it mentions the wifi interface becoming WAN too.
Do you even need relayd? I think relayd is for extending the existing NAT, i.e. a wireless bridge operation. At least that’s how I utilized it in a previous setup. If you want to have your own NAT, I think it’s enough to just connect to the upstream wireless network as a client. Not sure if you have to designate the wireless interface as WAN or not.
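If you go the plain client route, the gist on OpenWrt is a wifi-iface in sta mode bound to a new interface that you put in the wan firewall zone. A rough sketch via uci, with made-up SSID/key/radio names and assuming the default firewall zone order:

```sh
# join the upstream network as a client (names/credentials are placeholders)
uci set wireless.wwan=wifi-iface
uci set wireless.wwan.device='radio0'
uci set wireless.wwan.mode='sta'
uci set wireless.wwan.network='wwan'
uci set wireless.wwan.ssid='UpstreamSSID'
uci set wireless.wwan.encryption='psk2'
uci set wireless.wwan.key='changeme'

# give it an interface that pulls an address from the upstream router
uci set network.wwan=interface
uci set network.wwan.proto='dhcp'

# put it in the wan zone so you get your own NAT behind the upstream network
# (on a default config the wan zone is usually @zone[1]; check with `uci show firewall`)
uci add_list firewall.@zone[1].network='wwan'

uci commit
wifi reload
service firewall restart
```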
Can’t say. Personally, I’m running vanilla Ubuntu LTS and rolling my own ZFS, NFS, containers, desktop and so on, but “I know what I’m doing.” I hardly see a reason to use TrueNAS outside of the UI. With that said, I would highly recommend making sure your data sits on ZFS, because it protects you from silent data corruption. If I had to choose between Proxmox and TrueNAS and only one of them put my data on ZFS, I’d choose that one, and then think about the other use cases.
Their use cases are a bit different, no? Proxmox is a general hypervisor; you can run whatever you want on it, and a NAS is one workload that could run on top of it. TrueNAS is a NAS-first solution and a hypervisor second, and that’s where it overlaps with Proxmox. So you could start by thinking about what your core use case is.
I love how they literally ripped off Google Photos’ interface, including using the same Material icons. I could navigate it via muscle memory. 😅
Get out of the anti-container mindset. Getting started with Docker takes half an hour, and you need to learn 3-4 commands to run other people’s services. Everything is easier than RPMs after that.
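For reference, this is roughly the whole vocabulary I mean (the image and names here are just examples):

```sh
docker pull nginx                            # fetch someone else's image
docker run -d --name web -p 8080:80 nginx    # start it in the background
docker ps                                    # see what's running
docker logs web                              # check what it's saying
docker stop web && docker rm web             # tear it down again
```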
What are you talking about… Containers make it way easier to set up and operate services, especially multi-component services like Immich. I just tried Immich and it took me several minutes to get it running. If I wanted to give it permanent storage, I’d have to spend several more: make a directory, add a line in a file, and restart it. I’ve been setting up services since before Linux containers became a thing, and after. I’d never go back to the pre-container days if I had the choice.
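To give a sense of scale, the whole thing was roughly the following (file names as per Immich’s install docs at the time; double-check their current guide, and the storage path is just an example):

```sh
mkdir immich && cd immich
# grab the compose file and env template Immich publishes
wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env

# the "make a directory, add a line" part: create the storage dir
mkdir -p /srv/immich/library
# then point UPLOAD_LOCATION in .env at /srv/immich/library

docker compose up -d                           # first start
docker compose down && docker compose up -d    # "restart it" after editing .env
```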
You don’t migrate the data off the existing z1. It keeps running and stays in use. You add another z1 or z2 vdev to the pool alongside it.
If the vdevs aren’t all the same redundancy level, am I right that there’s no guarantee which level of redundancy any particular file is getting?
Right, this is the catch: you don’t know which file ends up on which vdev. If you only use mirror vdevs, you can remove vdevs you no longer want to use and ZFS will transfer the data from them to the remaining vdevs, assuming there’s space. As far as I know you can’t remove vdevs from pools that have RAIDz vdevs; you can only add vdevs. So if you want guaranteed 2-drive failure tolerance for every file, then yes, you’d have to create a new pool with RAIDz2 and move the data to it. Then you could add your existing drives to it as another RAIDz2 vdev.
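For illustration, the two routes look roughly like this (pool and disk names are made up):

```sh
# mirror-only pools: a whole vdev can be evacuated and removed
zpool remove tank mirror-1

# RAIDz route: build a new RAIDz2 pool on the new disks and replicate everything over
zpool create tank2 raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F tank2
# once that's verified, the old disks can be re-added to tank2 as a second RAIDz2 vdev
```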
Removing RAIDz vdevs might become possible in the future. There’s already a feature that allows expanding existing RAIDz vdevs, but it’s fairly new, so I’m personally not counting on it in my expansion plans.
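For completeness, if you do want to play with it: on OpenZFS 2.3+ the expansion is, as I understand it, a zpool attach aimed at the RAIDz vdev itself rather than at a disk (names made up):

```sh
zpool attach tank raidz1-0 /dev/sde   # widen the existing RAIDz1 vdev by one disk
zpool status tank                     # shows expansion progress while it reflows data
```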
What you lose in space, you gain in redundancy. As long as you’re not looking for the absolute least redundant setup, it’s not a bad tradeoff. Typically running a large stripe array with a single redundancy disk isn’t a great idea. And if you’re running mirrors anyway, you don’t lose any additional space to redundancy.
Adding new disks to an existing ZFS pool is as easy as figuring out what redundancy scheme you want for them, then adding them to the pool with that scheme. E.g. you have an existing pool with a RAIDz1 vdev of 3x 4TB disks. You found some cheap recertified disks and want to expand with more redundancy to mitigate the risk, so you buy 4x 16TB disks, create a RAIDz2 vdev out of them and add that to the existing pool. The pool grows by whatever usable space the new vdev provides.

Critically, pools are JBODs of vdevs. You can add any number or type of vdevs to a pool, and redundancy is handled at the vdev level, so you can have a pool with a mix of any RAIDzN and/or mirrors. You don’t create a new pool and transition to it; you add another vdev with whatever redundancy topology you want to the existing pool and keep writing data to it. You don’t even have to take it offline. If you add a second RAIDz1 to an existing RAIDz1, you’d get similar redundancy to moving from RAIDz1 to RAIDz2.
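In commands, the example above boils down to something like this (device names made up):

```sh
# the existing pool: one RAIDz1 vdev of 3x 4TB
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# later: add the 4x 16TB disks as a second, RAIDz2 vdev
# (zpool warns about the mismatched redundancy levels; -f acknowledges it)
zpool add -f tank raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg

zpool status tank   # now lists both vdevs; new writes stripe across them
```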
Finally, if you have even stranger hardware lying around, you can combine it into appropriately sized volumes via LVM and give those to ZFS, as someone already suggested. I used to have a mirror with one real 8TB disk and one 8TB LVM volume made up of a 1TB, a 3TB and a 4TB disk. Worked like a charm.
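In case it’s useful, that setup was roughly the following (device names are made up):

```sh
# lump the odd-sized disks into one 8TB logical volume
pvcreate /dev/sdc /dev/sdd /dev/sde
vgcreate leftovers /dev/sdc /dev/sdd /dev/sde
lvcreate -l 100%FREE -n fake8tb leftovers

# mirror the real 8TB disk against the LVM volume
zpool create tank mirror /dev/sdb /dev/leftovers/fake8tb
```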
A wiki sounds like the right thing, since you want to be able to see the current and previous versions of things. It’s also a bit easier to edit than straight Markdown in git, which is the other option I’d consider.

Ticketing systems like OpenProject are more useful for tracking many different pieces of work simultaneously, including future work. The process of changing your current networking setup from A to B would be tracked in OpenProject: new equipment to buy, cabling to do, software to install, describing it in your wiki, and the progress on each of those. Your wiki would be in state A before you begin the ticket, in state B once you finish it, and somewhere in between while it’s in progress. You could of course use just the wiki, but it’s nice to have a place to keep track of everything else, including leaving comments that provide context so you can resume at a later point in time.

At several workplaces the standard setup that always gets entrenched is a ticketing system, a wiki, and version control. Version control is only needed for tasks that include code, so the absolute core is the other two. If I had to reduce it to a single solution, I’d choose a wiki, since I could use separate wiki pages to track my progress as I go from A to B.
I had a 2-disk mirror hooked to the USB 3 ports. I think it did >200MB/s per disk prior to mirroring, and the mirror speeds were similar. It only really started dragging when I put disk encryption on top; I think it did 80-90MB/s then. Exposed it via NFS and ran it as a NAS for an active Plex server for a couple of years. The Pi 4 is still alive, now on another duty. 🫠
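The NFS part was nothing fancy, something along these lines (path and subnet made up):

```sh
echo '/srv/tank 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
```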
For an SBC, yes. I don’t think anyone’s come close to its software support. I’m using quite a few in different applications, some 24/7, and I’ve yet to experience a hardware or software failure. I’m using official/quality PSUs and SanDisk Extreme Pro / Samsung Evo Plus SD cards.
Docker has native compute performance. The processes essentially run on the host kernel with a different set of libs, so the only notable overhead is storing and loading those libs, which takes a bit more disk and RAM. That’s true for any container solution, and for VMs as well, but VMs have a lot of additional overhead on top. At a cursory glance, Incus seems to provide an interface for running Linux containers or VMs; I wouldn’t expect performance differences between containers run through it and containers run through Docker.
Seems like the 3v3 regulator is what goes out on these
Wow, they’ve really cut to the bone on cost saving with this one, if a fucking voltage regulator is the straw that broke the camel’s back.
Can’t be Gamble since it’s trying to reduce losses, not incur them. 🤭
I’ve only just bootstrapped it once for testing. I used the docker setup and it was trivial.
It’s why I only buy their ZigBee/Z-Wave devices. Safer than any WiFi-connected alternative.