• 6 Posts
  • 260 Comments
Joined 1 year ago
Cake day: June 15th, 2023




  • lemmyvore@feddit.nl to Selfhosted@lemmy.world • Backup solutions
    4 months ago

    I second this. But keep in mind the difference between a sync tool like rsync or Syncthing and a dedicated backup tool like Borg.

    A sync tool is basically a fancy copy. It copies what is there now. It’s a bit smarter than a copy in that it can avoid copying unmodified files, can optionally delete files that are no longer there, and has include/exclude patterns.

    But a sync tool doesn’t preserve older versions, and doesn’t do deduplication, compression, encryption and so on. A backup tool does.

    Both can be useful, as long as you use them properly. For example I gave my dad a Syncthing dir on his laptop that syncs whatever happens in that dir, over Tailscale, to my NAS. But the dir on the NAS gets backed up with Borg once a day.

    Syncthing protects against problems like the laptop dying, getting dropped or stolen. The Borg backup protects against deleted and modified files. Neither of them is of any use if the user didn’t put something in the dir to begin with.
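
    As a rough illustration, the daily Borg job can be as simple as the sketch below (the repo path and Syncthing dir are made up, adjust to your layout):

    # one-time repository setup
    borg init --encryption=repokey /backups/dad-laptop
    # daily archive of the synced dir, named after the current date
    borg create --stats --compression zstd \
        /backups/dad-laptop::'{now:%Y-%m-%d}' \
        /srv/syncthing/dad
    # keep a rolling window of old versions
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /backups/dad-laptop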


  • Yeah, it’s not exactly an obvious feature. I don’t even remember how I stumbled onto it; I think I was looking at the /data dirs and noticed the default one.

    I haven’t tried using it for more than one site, but I think that if you add multiple domain names to the same proxy host they end up on the same server instance, and you might be able to tweak the “Advanced” config to serve all of them as virtual hosts.

    It’s not necessarily a bad thing to have a separate nginx host either. For example, I have a PHP app that has its own nginx container because I want to keep all of its containers in one place and not mix them up with NPM.
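
    For reference, a dedicated static-file container is a one-liner; a rough sketch with made-up names and paths:

    # the official nginx image serves /usr/share/nginx/html by default
    docker run -d --name static-site \
        -p 8081:80 \
        -v /srv/static:/usr/share/nginx/html:ro \
        nginx:alpine

    Then you’d add a proxy host in NPM pointing at that container.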



  • Absolutely, but it has a built-in webserver that can serve static files, too (I constantly use that in my dev environment).

    How about Python? You can get an HTTP server going with just python3 -m http.server from the dir where the files are. Worth remembering because Python is super common and probably already installed in many places (be it on the host or in containers).
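
    For example, to serve a dir on a specific port and only on localhost (both arguments are optional):

    cd /path/to/the/files
    python3 -m http.server 8080 --bind 127.0.0.1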


  • I see from your other comments that you’re already running nginx in other containers. The simplest solution would be to make use of one of them. Zero overhead since you’re not adding any new container. 🙂

    You mentioned you’re using NPM; NPM already has a built-in nginx host that you can reach by creating a proxy host pointed at http://127.0.0.1:80 and adding the following to the “Advanced” tab:

    location / {
      root /data/nginx/local_static;
      index index.html;
    }
    

    Replace the root path with whatever dir you want, use a volume option on the NPM container to map that dir to the host, put your files in there, and that’s it.
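
    If you start NPM with docker run, the extra volume could look something like this (the host path /srv/static is just an example; the other volumes are the usual NPM ones):

    docker run -d --name npm \
        -p 80:80 -p 81:81 -p 443:443 \
        -v /opt/npm/data:/data \
        -v /opt/npm/letsencrypt:/etc/letsencrypt \
        -v /srv/static:/data/nginx/local_static:ro \
        jc21/nginx-proxy-manager:latest

    Since /data is already mapped, you could also just drop the files into the existing data dir on the host instead of adding a new volume.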



  • lemmyvore@feddit.nl to Selfhosted@lemmy.world • Self Hosting Fail
    4 months ago

    IMHO you’re optimizing for the wrong thing. 100% availability is not something that’s attainable for a self-hoster without driving yourself crazy.

    Like the other comment suggested, I’d rather invest time into having machines and services come back up smoothly after reboots.

    That being said, a UPS may be relevant to your setup in other ways. For example it can allow a parity RAID array to shut down cleanly and reduce the risk of write holes. But that’s just one example, and a UPS is just one solution for that (others being ZFS, non-parity RAID, or SAS/SATA controller cards with a built-in battery and/or hardware RAID support, etc.)



  • Their CDN has two tiers: a super-cheap one ($0.005/GB) with only 10 nodes and a more expensive one ($0.01/GB) with 100+ nodes. The CDN and the storage services are distinct. The storage service is priced by the quantity of data stored and the replication zones; the CDN is priced by data served and geo-redundancy. You use FTP to manage the storage, not an API. A CDN can pull either from a storage zone or from a live website. Each CDN gets a b-cdn.net subdomain and you can either CNAME your own [sub]domain(s) to it or use it strictly for your static assets.

    You load money into your account (minimum of $10 per top-up) and at the end of every month they deduct what you’ve consumed (minimum of $1 per month).

    In my case I only have a few hundred MB in total, so I generate the websites locally, upload the static snapshots to their storage, and serve from there with the main website domain CNAME’d to the CDN domain. But they have tutorials for using the CDN as a static cache in front of WordPress or other CMSes, for example.
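
    The upload step is plain FTP; a rough sketch with lftp (the hostname, storage zone name and password are placeholders for whatever your storage zone shows you):

    # mirror the locally generated site into the storage zone
    lftp -u my-storage-zone,MY_FTP_PASSWORD ftp://storage.bunnycdn.com \
        -e "mirror --reverse --delete ./public /; quit"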

    The CDNs have lots of useful settings like redirect/block rules; you can assign a free SSL cert, set CORS headers, enable hotlink protection and custom error pages, control cache timeouts and concurrent requests, apply all kinds of limits, whitelist/blacklist countries, control regional routing and so on.



  • Another neat trick is to generate your website on your own PC and only publish the static version.

    You can publish static files to a CDN service, which costs very little compared to traditional hosting that includes a dynamic language and database.

    A CDN is also usually distributed across the world and has cool features like built-in scalability and redundancy, which means very little chance of outages, the ability to absorb traffic spikes, and fast responses no matter where the visitors are from.
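
    As a concrete illustration (Hugo is just an example here, any static site generator works the same way):

    hugo new site mysite   # scaffold a new site
    cd mysite
    hugo                   # renders the whole site into ./public
    # ./public is the static snapshot you upload to the CDN / static host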


  • Is there any particular reason you’re interested in using ZFS?

    How do you intend to move over the data on the 2x6 array if you create a new pool with all the drives?

    mdadm RAID1 is easy to handle, fairly safe from write holes and easy to upgrade.

    If it were me I’d upgrade to a 2x12 array (mdadm RAID1 or ZFS mirror, whichever you want), stored internally, and use the old 6 TB drives as cold-storage external backups with Borg Backup. Not necessarily for the media files, but you must have some important data that you don’t want to lose (passwords, 2FA codes, emails, phone photos etc.)
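
    For the mdadm route, the mirror setup is roughly this (device names are hypothetical, check yours with lsblk first):

    # create the mirror from the two new 12 TB drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # put a filesystem on it and persist the array config (config path varies by distro)
    mkfs.ext4 /dev/md0
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf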

    I wouldn’t trust USB-connected arrays much. Most USB enclosures and adapters aren’t designed for 24/7 connectivity, and arrays (especially ZFS) are sensitive to the slightest error. Mixing USB drives with any ZFS pool is a recipe for headache IMO.

    I could accept using the 2x6 as a RAID1 or mirror by themselves but that’s it. Don’t mix them with the internal drives.

    Not that there’s much you could do with that drive setup anyway, since the sizes are mismatched. You could try Unraid or snapraid+mergerfs, which can do parity with mismatched drives, but it’s meh.

    Oh, and never use RAID0 as the bottom layer of anything; when one drive breaks you lose everything.





  • Get an Nvidia GPU for AI, period.

    Read the manual for the motherboard you want and make sure that the M.2 slot supports NVMe rather than SATA. (Also, learn to tell NVMe from SATA drives.) M.2 slots that are SATA usually share a SATA lane with the SATA connectors, and if you populate the M.2 slot you might lose a connector.

    Another thing to check is whether populating a particular M.2 slot reduces the speed of one of the PCIe slots. Same reason (shared lanes), but with PCIe instead of SATA. These things should be spelled out next to the M.2 connectors in the manual.

    NVMe drives in Linux get /dev/nvme* device names, not /dev/sd*.
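
    A quick way to see what you actually got once the system is up:

    lsblk -o NAME,TRAN,SIZE,MODEL
    # NVMe drives show up as nvme0n1, nvme1n1, ... with TRAN "nvme",
    # SATA drives as sda, sdb, ... with TRAN "sata"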


  • Unraid uses a very simplistic scheme where you throw together a bunch of drives and one parity drive. The parity drive needs to be as big as the largest normal drive and holds recovery checksums for all the others.

    Basically you can lose any one disk and still be able to recover the data, and the disks don’t have to be the same size.

    You can achieve the same with snapraid + mergerfs if you want.
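
    A rough sketch of that setup (mount points and disk layout are made up):

    # minimal /etc/snapraid.conf:
    #   parity /mnt/parity1/snapraid.parity
    #   content /var/snapraid.content
    #   content /mnt/disk1/snapraid.content
    #   data d1 /mnt/disk1
    #   data d2 /mnt/disk2

    # pool the data disks into a single mount point with mergerfs
    mergerfs -o allow_other /mnt/disk1:/mnt/disk2 /mnt/storage

    # update parity periodically (e.g. from cron)
    snapraid sync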

    TrueNAS (ZFS) uses distributed parity schemes, so it needs drives of the same size, but it also protects against bitrot and has other extra features.