• 5 Posts
  • 130 Comments
Joined 1 year ago
Cake day: July 5th, 2023





  • Avid Amoeba@lemmy.ca to Selfhosted@lemmy.world · Proxmox vs. TrueNAS Scale · 4 months ago

    Can’t say. Personally, I’m running vanilla Ubuntu LTS and rolling my own ZFS, NFS, containers, desktop and so on, but “I know what I’m doing.” I hardly see a reason to use TrueNAS outside of the UI. With that said, I would highly recommend ensuring your data sits on ZFS, because it protects it from silent data corruption. If I had to choose between Proxmox and TrueNAS and one of them ensured my data is on ZFS, I’d choose that one and then think about the other use cases.
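
    For a sense of what that protection looks like in practice (the pool name “tank” is just a placeholder), a periodic scrub is how ZFS surfaces and repairs silent corruption:

    ```bash
    # ZFS checksums every block; a scrub re-reads the whole pool and repairs
    # any block whose checksum doesn't match, using the redundant copy/parity.
    zpool scrub tank
    zpool status -v tank   # the CKSUM column shows detected/repaired errors
    ```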


  • Avid Amoeba@lemmy.ca to Selfhosted@lemmy.world · Proxmox vs. TrueNAS Scale · 4 months ago

    Their use cases are a bit different, no? Proxmox is a general hypervisor; you can run whatever you want on it, and a NAS is just one workload that could run on top of it. TrueNAS is a NAS-first solution, hypervisor second, and that’s where it overlaps with Proxmox. Think about your core use case:

    • Do you want mainly a NAS that can run a few services too?
      • Yes
        • Perhaps use TrueNAS
      • No
        • Perhaps use Proxmox and roll your own NAS on top



  • What are you talking about… Containers make it way easier to set up and operate services, especially multi-component services like Immich. I just tried Immich and it took me several minutes to get it running. If I wanted to give it permanent storage, I’d have to spend several more minutes making a directory, adding a line to a file and restarting it. I’ve been setting up services since before Linux containers became a thing, and after. I’d never go back to the pre-container times if I had the choice.
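
    For a rough idea of what those “several minutes” look like, here’s a sketch of the usual compose-based setup (the URLs and the upload path are illustrative, check the Immich docs for the current ones):

    ```bash
    mkdir immich && cd immich
    # Grab the stock compose file and env template published by the Immich project
    wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
    wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env

    # The "add a line in a file" part: point uploads at permanent storage
    mkdir -p /srv/immich-uploads   # hypothetical directory
    sed -i 's|^UPLOAD_LOCATION=.*|UPLOAD_LOCATION=/srv/immich-uploads|' .env

    # Bring up the whole multi-component stack (server, ML, Postgres, Redis);
    # the same command restarts it after config changes
    docker compose up -d
    ```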


  • You don’t migrate the data from the existing z1. It keeps running and stays in use. You add another z1 or z2 to the pool.

    > If the vdevs are not all the same redundancy level, am I right that there’s no guarantee which level of redundancy any particular file is getting?

    This is a problem. You don’t know which file ends up on which vdev. If you only use mirror vdevs, then you could remove vdevs you no longer want to use and ZFS will transfer the data from them to the remaining vdevs, assuming there’s space. As far as I know you can’t remove vdevs from pools that have RAIDz vdevs; you can only add vdevs. So if you want guaranteed 2-drive failure tolerance for every file, then yes, you’d have to create a new pool with RAIDz2 and move the data to it. Then you could add your existing drives to it as another RAIDz2 vdev.

    Removing RAIDz vdevs might become possible in the future. There’s already a feature that allows expanding existing RAIDz vdevs but it’s fairly new so I’m personally not considering it in my expansion plans.
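
    Roughly, on an all-mirror pool it looks like this (pool and device names are made up):

    ```bash
    # Works only when every top-level vdev is a mirror (or a plain disk):
    # ZFS evacuates the vdev's data onto the remaining vdevs, then drops it.
    zpool remove tank mirror-1
    zpool status tank              # shows the evacuation progress

    # On a pool that contains RAIDz vdevs this is refused; growth is add-only:
    zpool add tank raidz2 sdi sdj sdk sdl
    ```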



  • Adding new disks to an existing ZFS pool is as easy as figuring out what new redundancy scheme you want, then adding them with that scheme to the pool. E.g. you have an existing pool with a RAIDz1 vdev of 3 4TB disks. You found some cheap recertified disks and want to expand with more redundancy to mitigate the risk. You buy 4 16TB disks, create a RAIDz2 vdev and add it to the existing pool. The pool grows by whatever usable space the new vdev provides.

    Critically, pools are JBODs of vdevs. You can add any number or type of vdevs to a pool; the redundancy is done at the vdev level. Thus you can have a pool with a mix of any RAIDzN and/or mirrors. You don’t create a new pool and transition to it. You add another vdev with whatever redundancy topology you want to the existing pool and keep writing data to it. You don’t even have to offline it. If you add a second RAIDz1 to an existing RAIDz1, you’d get similar redundancy to moving from RAIDz1 to RAIDz2.
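
    As a sketch of the example above (device names are placeholders; use /dev/disk/by-id paths in practice):

    ```bash
    # Existing pool "tank" with one 3x4TB RAIDz1 vdev
    zpool status tank

    # Add the four 16TB disks as a second top-level vdev, this time RAIDz2.
    # zpool warns about mixing redundancy levels in one pool, hence -f.
    zpool add -f tank raidz2 sde sdf sdg sdh

    # Capacity grows by the new vdev's usable space; no downtime, no migration.
    zpool list tank
    ```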

    Finally, if you have some even stranger hardware lying around, you can combine it into appropriately sized volumes via LVM and give those to ZFS, as someone already suggested. I used to have a mirror with one real 8TB disk and one 8TB LVM volume consisting of a 1TB, a 3TB and a 4TB disk. Worked like a charm.
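
    For the curious, that LVM trick looks roughly like this (device and volume group names are made up):

    ```bash
    # Glue a 1TB + 3TB + 4TB disk into one ~8TB logical volume
    pvcreate /dev/sdb /dev/sdc /dev/sdd
    vgcreate frankendisk /dev/sdb /dev/sdc /dev/sdd
    lvcreate -l 100%FREE -n vdisk8tb frankendisk

    # Mirror it against a real 8TB disk in ZFS
    # (losing any of the three small disks takes out that whole side of the mirror)
    zpool create tank mirror /dev/sde /dev/frankendisk/vdisk8tb
    ```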


  • Avid Amoeba@lemmy.ca to Selfhosted@lemmy.world · Change tracking ideas · 4 months ago

    A wiki sounds like the right thing, since you want to be able to see the current and previous versions of things. It’s a bit easier to edit than straight Markdown in git, which is the other option I’d consider. Ticketing systems like OpenProject are more useful for tracking many different pieces of work simultaneously, including future work. The process of changing your current networking setup from A to B would be tracked in OpenProject: new equipment to buy, cabling to do, software to install, describing it in your wiki, and the progress on each of those. Your wiki would be in state A before you begin this ticket. Once you finish it, your wiki will be in state B. While in progress, the wiki would be somewhere between A and B.

    You could of course use just the wiki, but it’s nice to have a place where you can keep track of all the other things, including being able to leave comments that provide context so you can resume at a later point in time. At several workplaces the standard setup that always gets entrenched is a ticketing system, a wiki and version control. Version control is only needed for tasks that include code, so the absolute core is the other two. If I had to reduce to a single solution, I’d choose a wiki, since I could use separate wiki pages to track my progress as I go from A to B.


  • I had a 2-disk mirror hooked to the USB 3 ports. I think it did >200MB/s per disk prior to mirroring, and the mirror speeds were similar. It only really started dragging when I put disk encryption on top; I think it did 80-90MB/s then. I exposed it via NFS and ran it as a NAS for an active Plex server for a couple of years. The Pi 4 is still alive, now on another duty. 🫠



  • Docker has native compute performance. The processes essentially run on the host kernel with a different set of libs. The only notable overhead is in storing and loading those libs, which takes a bit more disk and RAM. This will be true for any container solution; VMs, on the other hand, have a lot of additional overhead. At a cursory glance, Incus seems to provide an interface to run Linux containers or VMs. I wouldn’t expect performance differences between containers run through it and containers run through Docker.
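
    A quick way to see the “same kernel, different libs” point, assuming Docker is installed:

    ```bash
    uname -r                                     # host kernel version
    docker run --rm alpine uname -r              # same kernel from inside the container
    docker run --rm alpine cat /etc/os-release   # but Alpine's own userspace/libs
    ```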