NFS over WireGuard is probably going to be the best option for encrypted file shares without the need to set up Kerberos. Just set up the WireGuard tunnel and export over those IPs.
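Roughly, it looks like this, assuming the tunnel is already up and using made-up addresses (10.0.0.1 for the server, 10.0.0.2 for the client) and paths:

```sh
# /etc/exports on the NFS server: export only to the WireGuard peer address
#   /srv/share  10.0.0.2(rw,sync,no_subtree_check)

# reload exports on the server, then mount from the client over the tunnel
sudo exportfs -ra
sudo mount -t nfs 10.0.0.1:/srv/share /mnt/share
```

Because the export is restricted to the tunnel address, nothing is reachable from outside the WireGuard network.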
I understand. But do you see how what you wrote could be read as toxic? Intent is nice, but what you write and how you write it really determines the tone of a community.
No need to be toxic here. You don’t need to put people down. We’re all learning here together. Hey, we are all learning more about how reverse proxies and forwarded headers work together right now, including you.
We should aim to be an open welcoming community.
You want to set the appropriate X-Forwarded-For or Forwarded headers in Nginx. The final application server being proxied (if well written) should be able to handle them.
Documentation can be found here: https://www.nginx.com/resources/wiki/start/topics/examples/forwarded/
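As a rough sketch (the upstream address and port here are placeholders for your backend), the relevant directives look something like this:

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;                              # your backend app
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for; # append client IP to the chain
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

The app behind the proxy then reads X-Forwarded-For (or the standard Forwarded header) to recover the original client IP.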
Contrary to that other comment, reverse proxies that forward the real client IP via the appropriate headers are normal and commonly used. Almost universally so at scale.
Don’t let the wannabe elitists get you down. I personally would not host my production email server at home, but self hosting is a learning journey. If you learn how email servers work along with reverse proxies, you’ve got it! That’s a win. Hack away.
I use the Route53 APIs and just directly update the AAAA and A records. Set a low TTL and you don’t really have to worry about any middleman services.
All you need is a simple script.
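For example, something along these lines with the AWS CLI (the zone ID, hostname, and IP-lookup service are all placeholders you’d swap for your own):

```sh
#!/bin/sh
# Hypothetical values -- replace with your own hosted zone and record name.
ZONE_ID="Z0000000EXAMPLE"
NAME="home.example.com"
IP="$(curl -4 -s https://ifconfig.me)"   # any "what's my IP" service works here

aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --change-batch "{
    \"Changes\": [{
      \"Action\": \"UPSERT\",
      \"ResourceRecordSet\": {
        \"Name\": \"$NAME\",
        \"Type\": \"A\",
        \"TTL\": 60,
        \"ResourceRecords\": [{\"Value\": \"$IP\"}]
      }
    }]
  }"
```

Run it from cron every few minutes; with a 60 second TTL the record follows your IP almost immediately. The AAAA record works the same way with `curl -6` and `"Type": "AAAA"`.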
Strong recommend for Forgejo. It’s a community fork of Gitea that’s actively maintained by the community and backed by a great open source nonprofit.
It’s actually a drop-in replacement for Gitea if you are using that now.
Super lightweight, super snappy, and it supports GitHub Actions-style CI/CD.
I self host everything except maps and email. Maps because the self-hosted options just aren’t there yet, and email because even if you set it up perfectly with DKIM and everything, your IP can still land on a blacklist. You will spend more time doing blacklist appeals than it’s worth.
+1 for Gitea. Works really well for me. It recently added GitHub-style Actions so you can use GitHub-style CI/CD too!
Me too. I am really looking forward to the tiered storage system: NVMe backed by HDDs backed by SMR HDDs. You write to the NVMe drives and in the background bcachefs slowly moves the data down to the slower media.
Hey, no problem. Something to keep an eye out for in the future might be bcachefs. I think it’s a step above ZFS and btrfs. The author missed the last merge window by days, but it should make it into the next kernel merge window. It’s exciting stuff. Other options might be a local GlusterFS or CephFS setup.
Fair, fair. That being said, ZFS isn’t in RHEL either. 🤔 Poor Red Hat though. I used to work there a long time ago. I’m sad to see how they went from being THE open source company to being worse than Oracle 🤢 when it comes to source distribution.
The crux of the matter is that the article’s criticisms of btrfs are largely based on its differences from ZFS, rather than any inherent flaws in btrfs itself. Notably, Suse Enterprise Linux, Fedora, and Meta’s Linux engineers all advocate for btrfs, using it extensively in production.
The article’s main grievances are:
Btrfs RAID Arrays:
The author is upset that btrfs RAID arrays don’t function as he anticipated. However, btrfs isn’t ZFS or mdadm; it’s its own system and should be understood as such. The author criticizes btrfs for allowing drives of mismatched sizes. This flexibility, however, isn’t inherently negative.
Btrfs RAID Array Management:
The author laments that btrfs can’t be mounted by a human-readable name like ZFS, and instead requires a UUID. Using UUIDs is standard practice for native Linux file systems. A side note: mounting by device name (/dev/sdX) is outdated and fragile; UUID is the recommended method.
Btrfs-RAID’s Redundancy:
The author points out that btrfs won’t auto-mount an array if a drive fails, while ZFS will. This is actually a protective measure. By not auto-mounting, it minimizes the risk of further drive failures, prioritizing data preservation.
Btrfs-RAID Maintenance:
The author’s complaint here boils down to “btrfs isn’t ZFS.” He attempts ZFS recovery methods on btrfs and is surprised when they don’t work. The processes are different, but that doesn’t mean btrfs is more labor-intensive.
He also critiques the use of crc32 for corruption detection. If this is a concern, other algorithms can be used. The default, crc32, is chosen for its speed. In fact, some argue that btrfs’s integrity checks are faster than alternatives.
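For what it’s worth, recent btrfs-progs let you pick a stronger checksum (xxhash, sha256, or blake2) at mkfs time; a quick sketch with a placeholder device:

```sh
# xxhash is a much larger hash than crc32c and still very fast;
# sha256 and blake2 are also accepted on recent btrfs-progs
sudo mkfs.btrfs --csum xxhash /dev/sdX
```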
In summary, the article’s author seems primarily upset that btrfs isn’t a ZFS clone. He overlooks btrfs’s advantages over ZFS, such as ZFS pools occasionally failing to mount due to kernel updates. On the other hand, major entities like Suse Enterprise Linux, Fedora, and Meta rely on btrfs in large-scale production environments.
When revisiting the article, keep the perspective of “an individual frustrated that btrfs isn’t ZFS” in mind. The bias becomes evident.
The only thing you need to do if you run a standard Linux distro is to set up scheduled scrubbing and SMART alerts. NAS OSes do that by default, but if you set it up as a cron job or systemd timer you can achieve the same result.
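A minimal cron-based version might look like this (the mount point, schedule, and binary path are just examples; adjust for your distro):

```sh
# example root crontab entry: scrub the pool at /srv/pool monthly
0 3 1 * *  /usr/bin/btrfs scrub start -B /srv/pool

# SMART alerts are easiest via smartd from smartmontools:
#   systemctl enable --now smartd     # service name varies slightly by distro
# then configure mail alerts in smartd.conf
```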
The advantage of running a plain Linux distro over a NAS OS is that you can add virtual machines on top via KVM or run appliances via Docker. It’s just a server with a lot of storage added on top.
As for btrfs RAID: yes, if your motherboard fails or you have to reinstall the OS, you can reimport the array with no prior knowledge of it. It’s as simple as mounting it like a normal Linux file system, because it is one. The kernel will locate all members of the RAID pool.
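In practice, after a reinstall it’s roughly this (device name and mount point are placeholders):

```sh
sudo btrfs device scan                   # let the kernel discover all pool members
sudo mount /dev/sdb1 /mnt/pool           # mount any one member; the rest are assembled automatically
sudo btrfs filesystem show /mnt/pool     # confirm every device is present
```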
If you are using Fedora (with a recent install) you are using btrfs right now. 😉
My desktop and laptop run Fedora. For my servers I run Debian 12 with everything in Docker.
As for that article: yeah, btrfs has had some rough points in its past. It’s true, can’t deny it. That being said, I would argue that the unconventional way btrfs treats RAID definitions is a design advantage.
Btrfs raid1 is more like “replica 2”. If you have one 12TB drive and two 6TB drives you get 12TB of usable space, because btrfs works to ensure there are two copies of the data behind the scenes. In a traditional RAID 1 you could not use the space from the mismatched drives. It’s not traditional raid1, but I think it’s preferable.
I think the main advantage of btrfs for a home lab is that you can toss in any drive regardless of size and btrfs will use it. You can remove any drive and btrfs will rebalance automatically. These are btrfs-exclusive features. You can also change the RAID type on the fly. Once I get kernel 6.2 I could convert my btrfs raid1 pool into a raid5 pool while the pool is live and mounted; any such conversion can be done on a mounted filesystem.
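The conversion itself is just a balance with a convert filter run against the mounted filesystem; a sketch with example profiles and mount point:

```sh
# convert data to raid5 (metadata stays raid1) while the pool is online and in use
sudo btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/pool
sudo btrfs balance status /mnt/pool   # watch progress
```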
For me home lab is being flexible and working with what you can get. And I feel btrfs is a great fit for that.
That, and since btrfs is a native part of the kernel, you won’t ever have to worry about a kernel update breaking the ZFS shim or DKMS module.
You are factually wrong here. Btrfs has been around for more than 10 years and is used at scale. Meta uses it at scale in their data centers, SUSE uses it as their default file system and builds the btrfs rollback/roll-forward into their enterprise offerings, and Fedora uses it as its default file system too.
If you prefer/know ZFS and want to avoid btrfs because of that I get it. But no need to say that btrfs is “in beta” 😂
If you know Linux, I recommend going with some form of software RAID. A lot of people might recommend ZFS, but I would recommend btrfs with Linux. With btrfs you can add and remove drives of any size at will, unlike ZFS, and you don’t need to worry about vdevs and such. Simple to use and simple to upgrade. Just use btrfs, set data to raid1 and metadata to raid1c3, and you will have a rock solid system. You also won’t have to worry about DKMS or kernel changes breaking your data storage. And before someone mentions it: there was a btrfs raid5 write hole, but that was fixed in kernel 6.2.
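Concretely, that setup is just this (device names are placeholders; raid1c3 metadata wants at least three devices):

```sh
# raid1 data, raid1c3 metadata across three example drives
sudo mkfs.btrfs -d raid1 -m raid1c3 /dev/sda /dev/sdb /dev/sdc
sudo mount /dev/sda /mnt/pool   # mounting any member brings up the whole pool
```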
Another interesting future option might be bcachefs. It just got merged into kernel mainline and has amazing features.
Lastly, you really want all of your drives to be connected via SATA, SAS, or M.2. USB isn’t great for HDDs in any sort of RAID.
+1 for WikiJS. As a bonus you can have WikiJS back itself up to plain text Markdown files, so if things explode you can always just read those from wherever.
Another great feature I use is to have WikiJS back itself up into git. If I am going to a place with no internet access I can do a quick git pull and have a complete copy of my wiki including files on my laptop.
No problem. It should be wayyy faster than sshfs for the record. Both NFS and WireGuard are best in class tools.