This was really neat, kinda boils down to “you don’t want to deal with the complexity and it’s horrifically slow.”
“As easy as buying four same-sized disks all at once” is kinda missing the point.
How do I migrate data from the existing z1 to the z2? And then how can I reuse the disks that were in the z1 after I've moved the data? Buy yet another disk and add a second z2 vdev with what would then be four disks, I guess. Unless it's possible to wipe them and add them to the new z2?
If the vdevs are not all the same redundancy level, am I right that there's no guarantee which level of redundancy any particular file is getting?
Neat! Thank you
I mean, yeah, I'd prefer ZFS but, unless I am missing something, it is a massive pain to add disks to an existing pool. You have to buy a whole new set of disks and create a new pool to transition from raidz1 to raidz2. That's basically the only reason it fails my criteria. I think I'd also prefer erasure coding over z2, but it seems like regular scrub operations could keep it reliable.
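If I'm understanding the migration right, it'd be something like this (pool and disk names are made up):

```sh
# Build the new raidz2 pool on the newly bought disks.
zpool create tank2 raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Snapshot the old pool recursively and replicate everything over.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -F tank2

# Once the copy is verified, destroying the old pool frees its disks.
zpool destroy tank
```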
BTRFS sounds like it has too many footguns for me, and its raid5/6 equivalents are “not for production at this time.”
They will do power conditioning? My modem is such a sensitive baby I cannot plug anything else in next to it or it starts dropping packets. Would a UPS help with that? Unfortunately I cannot replace the modem, that’s the only one the ISP will give me.
This is great, thank you! My next drive is going to be fast and durable.
I thought you meant 1 TB as a sort of sweet spot (better than 2+ TB) in this area. From the description, it's more like 1 TB is the minimum you want for durability, and larger drives are even better?
Why does 1TB help with the wear leveling?
What I was looking at was the All-in-One, yes. I didn't realize there was a separately maintained image, thank you! I'd much rather have a single image without access to the socket at all; I'll give that a shot sometime.
I was looking into setting up Nextcloud recently, and the default directions suggest exposing the socket. That's crazy. I checked again just now: it is still possible to set it up without socket access, but that set of instructions isn't as prominent.
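For what it's worth, the no-socket route is just the plain image with nothing special mounted; a minimal sketch (the port and volume name are my own choices):

```sh
# Plain Nextcloud image; note there is no Docker socket mount anywhere.
docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud:stable
```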
I brought up Docker specifically because if Nextcloud has access to the socket and attackers find some automated exploit, they can easily escalate out of the Docker container. It sounds like you have it more correctly isolated.
I cannot get the app to connect to my HA with the current setup. I have Cloudflare doing email verification, and the app doesn’t understand how to collect the cookies to make that possible.
Doesn’t Nextcloud running in Docker want the socket exposed?
I googled around for an example: https://book.hacktricks.xyz/linux-hardening/privilege-escalation/docker-security/docker-breakout-privilege-escalation.
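The gist of that page, if I'm reading it right, is that anything able to talk to the socket can ask the daemon for a privileged container with the host filesystem mounted, which is effectively root on the host:

```sh
# From any process that can reach /var/run/docker.sock:
docker run -it -v /:/host --privileged alpine chroot /host sh
# That shell is root on the host, not just in the container.
```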
Ignore me if you’ve already hardened the containers.
Yeah, same, except I tunneled HA out via the Cloudflare daemon (cloudflared). Kinda janky because the app can't do location tracking through it, but I can check in on the pets from anywhere.
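For reference, the tunnel side was roughly this (the hostname is made up):

```sh
# One-time setup: authenticate and create a named tunnel.
cloudflared tunnel login
cloudflared tunnel create ha
cloudflared tunnel route dns ha ha.example.com

# Run it, pointing at Home Assistant's default port.
cloudflared tunnel run --url http://localhost:8123 ha
```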
I’m planning to set up a legit VPN sometime soon.
That’s amazing. I would love to see the algorithm for that. Hopefully I’ll find a nice explainer if I search around.
There’s a tale from long ago where someone set up a CD drive tray so that opening it would tap the reset button on a server.
Guardrails are absolutely not a reason why people prefer the CLI. We want the guardrails off so we can go faster.
Yes, I do see that. I'm definitely getting answers to a question I didn't intend to ask. I was hoping for something rsync-like, but that also provides viewing and incremental backups to an offsite location. I don't know how to phrase that, and perhaps for what I want it makes more sense to use rsync/rclone to copy files around and something else for viewing.
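The classic rsync --link-dest snapshot trick might cover the incremental half; a rough sketch with made-up paths:

```sh
# Each run produces a full-looking tree, but unchanged files are
# hardlinks into the previous snapshot, so only deltas cost space.
today=$(date +%F)
rsync -a --delete \
  --link-dest=/backups/latest \
  /data/ "/backups/$today/"
ln -sfn "/backups/$today" /backups/latest
```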
How was it setting up and running Nextcloud? I’m very curious about their office software, looks fun.
This is really cool. I ended up trying something similar: serving from a ZFS pool with SeaweedFS. TBD if that’s going to work for me long term.
I could definitely sync the raw SeaweedFS files to another location with rsync, but from what I can tell you need their software to make sense of the structure. I might be able to mount it and sync that way; hopefully the performance isn't too bad.
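If the mount route works, it'd be something like this (the filer address and paths are guesses):

```sh
# weed mount exposes the filer as a FUSE filesystem,
# after which plain rsync sees ordinary files.
weed mount -filer=localhost:8888 -dir=/mnt/seaweed &
rsync -a /mnt/seaweed/ /backup/seaweed/
```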
Syncing like that and having more control over where the files are placed on the RAID is very cool.
Oh, neat, I’ll have to look into that more. It’s able to have some redundancy and does some sort of rebalancing on disk failures?