• 3 Posts
  • 48 Comments
Joined 10 months ago
Cake day: December 18th, 2023



  • “As easy as buying four same-sized disks all at once” is kinda missing the point.

    How do I migrate data from the existing z1 to the z2? And then how can I re-add the disks that were in the z1 once I have moved the data? Buy yet another disk and add a second z2 vdev with my now four disks, I guess (rough sketch after this comment), unless it is possible to format them and add them to the new z2?

    If the vdevs are not all the same redundancy level, am I right that there's no guarantee which level of redundancy any particular file is getting?
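
    Re the migration question, the usual approach I've seen is snapshot-and-send into the new pool, then reuse the old disks as a second vdev. A rough, untested sketch with made-up pool names and example device paths (double-check against the docs before pointing it at real data):

    ```sh
    # Create the new RAID-Z2 pool from the newly bought disks (example devices).
    zpool create newpool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Snapshot everything on the old pool and replicate it to the new one.
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs receive -F newpool

    # After verifying the copy, free the old disks...
    zpool destroy oldpool

    # ...and, together with one extra disk, add them as a second RAID-Z2 vdev.
    zpool add newpool raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
    ```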



  • I mean, yeah, I’d prefer ZFS, but unless I am missing something, it is a massive pain to add disks to an existing pool. You have to buy a new set of disks and create a new pool to transition from RAID-Z1 to RAID-Z2. That’s basically the only reason it fails the criteria I have. I think I’d also prefer erasure coding instead of RAID-Z2, but it seems like regular scrub operations could keep it reliable (simple scheduling sketch after this comment).

    BTRFS sounds like it has too many footguns for me, and its raid5/6 equivalents are “not for production at this time.”
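
    On the scrub point, scheduling regular scrubs is straightforward; many distros already ship a cron job or systemd timer for it. A minimal sketch, assuming a hypothetical pool named “tank”:

    ```sh
    # Start a scrub of the (hypothetical) pool "tank" and check on it later.
    zpool scrub tank
    zpool status tank

    # Example crontab entry: scrub at 03:00 on the 1st of every month.
    # 0 3 1 * * /sbin/zpool scrub tank
    ```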



  • This is really cool. I ended up trying something similar: serving from a ZFS pool with SeaweedFS. TBD if that’s going to work for me long term.

    I could definitely sync the SeaweedFS files to another location manually with rsync, but from what I can see it requires their software to make sense of any structure. I might be able to mount it and sync that way (rough sketch below); hopefully performance for that is not too bad.

    Syncing like that and having more control over where the files are placed on the RAID is very cool.
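
    For the record, this is roughly what I mean by mounting and syncing: expose the filer over FUSE with SeaweedFS’s mount command and rsync the mounted tree. A sketch only, with example addresses and paths, and I haven’t measured how it performs:

    ```sh
    # Mount the SeaweedFS filer locally over FUSE (example filer address and mount point).
    weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs

    # Sync the mounted tree to another location with plain rsync.
    rsync -a --delete /mnt/seaweedfs/ /backup/seaweedfs/
    ```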