I really want to run Ceph because it fits a number of criteria I have: gradually adding storage, mismatched disks, fault tolerance, erasure coding, encryption, and out-of-the-box support from other software (like Incus).

But then I look at the hardware suggestions, and they imply an up-front investment and an ongoing cost to keep at least three machines evenly matched on RAM and physical storage. I also want something closer to a single-box NAS.

Would it be idiotic to put a Ceph setup all on one machine? I could run three mons on it, each backed by a separate physical device, so a single disk failure wouldn’t take out all of them. I’m not too concerned about speed or network partitioning; this would be lukewarm storage for me.
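
For what it’s worth, this is roughly what I have in mind; the IP, rule name, and EC profile below are just placeholders, and I haven’t actually tried it yet:

    # bootstrap a one-machine cluster; cephadm's --single-host-defaults
    # relaxes the settings that otherwise assume multiple hosts
    cephadm bootstrap --mon-ip 192.168.1.10 --single-host-defaults

    # make the CRUSH failure domain the OSD (disk) instead of the host,
    # so replicas / erasure-coded chunks land on different disks
    ceph osd crush rule create-replicated single-box default osd
    ceph osd erasure-code-profile set ec-single k=2 m=1 crush-failure-domain=osd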

  • jkrtn@lemmy.ml (OP) · 8 months ago

    “As easy as buying four same-sized disks all at once” is kinda missing the point.

    How do I migrate data from the existing z1 to the z2? And then how can I re-add the disks that were in z1 after I have moved the data? Buy yet another disk and add a z2 vdev with my now 4 disks, I guess. Unless it is possible to format and add them to the new z2?

    If the vdevs are not all the same redundancy level, am I right that there’s no guarantee which level of redundancy any particular file is getting?

    • Avid Amoeba@lemmy.ca · 8 months ago

      You don’t migrate the data from the existing z1. It keeps running and stays in use. You add another z1 or z2 to the pool.
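
      Something like this, assuming the pool is called tank and the four new disks are sde, sdf, sdg, and sdh (all placeholder names):

          # add a second top-level vdev (4-disk RAIDz2) next to the existing RAIDz1;
          # zpool may ask for -f because the redundancy levels don't match
          zpool add tank raidz2 sde sdf sdg sdh
          zpool status tank   # new writes now stripe across both vdevs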

      If the vdevs are not all the same redundancy level, am I right that there’s no guarantee which level of redundancy any particular file is getting?

      This is a problem. You don’t know which file ends up on which vdev. If you only use mirror vdevs, you could remove the vdevs you no longer want, and ZFS will transfer their data to the remaining vdevs, assuming there’s space. As far as I know you can’t remove vdevs from pools that have RAIDz vdevs; you can only add vdevs. So if you want guaranteed 2-drive failure tolerance for every file, then yes, you’d have to create a new pool with a RAIDz2 vdev and move the data to it. Then you could add your existing drives to it as another RAIDz2 vdev.
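
      A rough sketch of that path, with made-up pool, dataset, and disk names, and assuming both sets of disks can be online at the same time:

          # build the new double-parity pool from the new disks
          zpool create tank2 raidz2 sde sdf sdg sdh

          # copy everything over with a recursive snapshot and send/receive
          zfs snapshot -r tank@migrate
          zfs send -R tank@migrate | zfs recv tank2/old-tank

          # once the copy is verified, retire the old pool and reuse its
          # disks as a second RAIDz2 vdev in the new pool
          zpool destroy tank
          zpool add tank2 raidz2 sda sdb sdc sdd

          # (mirror-only pools are the exception: a top-level mirror can be
          # evacuated and dropped with `zpool remove <pool> mirror-1`)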

      Removing RAIDz vdevs might become possible in the future. There’s already a feature that allows expanding existing RAIDz vdevs, but it’s fairly new, so I’m personally not considering it in my expansion plans.
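
      If I understand that new feature correctly (RAIDz expansion in recent OpenZFS), growing an existing RAIDz vdev by one disk looks roughly like this, again with placeholder names:

          # attach one more disk to the existing raidz1 vdev
          zpool attach tank raidz1-0 sdh
          zpool status tank   # shows the expansion/reflow progress

          # note: this adds capacity but keeps the vdev's parity level;
          # it doesn't turn a RAIDz1 into a RAIDz2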