If you have an Android phone I can’t recommend Genius Scan enough. Fast, accurate, lots of features. I use it with Syncthing by exporting scans to a folder that’s configured to sync with the Paperless input folder.
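For reference, a rough sketch of the Docker side of that setup, assuming paperless-ngx (the host path is whatever folder you share in Syncthing, and a real deployment also needs its Redis broker and friends, omitted here): the synced folder gets bind-mounted as the container’s consume directory, which is what the “input folder” is in paperless-ngx terms.

```
# Sketch: bind-mount the Syncthing-synced folder as the paperless-ngx
# consume dir. /srv/syncthing/scans is a placeholder path; Redis etc.
# are omitted for brevity.
docker run -d --name paperless \
  -v /srv/syncthing/scans:/usr/src/paperless/consume \
  ghcr.io/paperless-ngx/paperless-ngx:latest
```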
Just want to say thank you! Paperless is one of the first things I recommend to anyone considering self-hosting their infra. Amazing piece of work!
“Huh weird, I tried to use <insert service here> and it’s not working. Welp, guess I better fix it…”
That’s a goal, but it’s hardly the only goal.
My goal is to get a synthesis of search results across multiple engines while eliminating tracking URLs and other garbage. In short it’s a better UX for me first and foremost, and self-hosting allows me to customize that experience and also own uptime/availability. Privacy (through elimination of cookies and browser fingerprinting) is just a convenient side effect.
That said, on the topic of privacy, it’s absolutely false to say that by self-hosting you get the same effect as using the engines directly. Intermediating my access to those search engines means things like cookies and fingerprinting cannot be used to link my search history to my browsing activity.
Furthermore, in my case I host SearX on a VPS that’s independent of my broadband connection which means even IP can’t be used to correlate my activity.
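If anyone wants to replicate that, a minimal sketch of the VPS side, assuming the upstream SearXNG image (port and config path are just examples):

```
# SearXNG bound to localhost; put your reverse proxy of choice in front.
docker run -d --name searxng \
  -p 127.0.0.1:8080:8080 \
  -v /srv/searxng:/etc/searxng \
  searxng/searxng:latest
```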
Honestly the issue here may be a lack of familiarity with how bare repos work? If that’s right, it could be worth experimenting with them if only to learn something new and fun, even if you never plan to use them. If anything it’s a good way to learn about git internals!
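For example, five minutes in a terminal shows there’s no magic in a bare repo; it’s just the contents of a .git directory with no working tree attached:

```
git init --bare ~/demo.git       # HEAD, config, objects/, refs/ ... no files
git clone ~/demo.git work && cd work
echo hi > README.md && git add . && git commit -m 'first'
git push origin HEAD             # "origin" is just that directory
git -C ~/demo.git log --oneline  # the bare repo has the commit
```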
Anyway, apologies for the pissy coda; I’ve deleted it, as it was unnecessary. Keep on having fun!
No. It’s strictly more complexity.
Right now I have a NAS. I have to upgrade and maintain my NAS. That’s table stakes already. But that alone is sufficient to use bare git repos.
If I add Gitea or whatever, I have to maintain my NAS, and a container running some additional software, and some sort of web proxy to access it. And in a disaster recovery scenario I’m now no longer just restoring some files on disk, I have to rebuild an entire service, restore its config and whatever backing store it uses, etc.
Even if you don’t already have a NAS, setting up a server with some storage running SSH is already necessary before you layer in an additional service like Gitea, whereas it’s all you need to store and interact with bare git repos. Put another way, Gitea (for example) requires me to deploy all the things I need to host bare repos plus a bunch of additional complexity. It’s a strict (and non-trivial) superset.
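To make the disaster-recovery point concrete: because bare repos are just directories of files, backup and restore are plain file operations (hostname and paths are placeholders):

```
# Backing up every repo is a file copy...
rsync -a nas:/volume1/git/ ~/backup/git/
# ...and so is "disaster recovery".
rsync -a ~/backup/git/ nas:/volume1/git/
```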
Absolutely. Every service you run, whether containerized or not, is software you have to upgrade, maintain, and back up. Containers don’t magically alleviate the need for basic software/service maintenance.
Agreed, which is why you’ll find in a subsequent comment I allow for the fact that in a multi-user scenario, a support service on top of Git makes real sense.
Given this post is joking about being ashamed of their code, I can only surmise that, like most self-hosters I’d wager, they’re not dealing with a multi-user use case.
Well, that or they want to limit their shame to their close friends and/or colleagues…
This post is about “self-hosting” a service, not using GitHub. That’s what I’m responding to.
I’m not saying GitHub isn’t valuable. I use it myself. And in any situation involving multiple collaborators I’d probably recommend that kind of tool, whether GitHub or some self-hosted option, for ease of user administration, familiar PR workflows, issue tracking, etc.
But if you’re a solo developer storing your code locally with no intention to share or collaborate, and you don’t want to use GitHub (as, again, is the case with this post) a self-hosted service adds a ton of complexity for only incremental value.
I suspect a ton of folks simply don’t realize that you don’t need anything more than ssh and git to push/pull remote git repositories because they largely cargo cult their way through source control.
The idea of “self-hosting” git is so incredibly weird to me. Somehow GitHub managed to convince everyone that Git requires some kind of backend service. Meanwhile, I just push private code to bare repositories on my NAS via SSH.
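For anyone who hasn’t seen it, the entire workflow looks like this (hostname and paths are placeholders):

```
# One time, create a bare repo on the NAS:
ssh nas 'git init --bare /volume1/git/myproject.git'

# Then from any existing local repo:
git remote add origin ssh://nas/volume1/git/myproject.git
git push -u origin main
```

No daemon, no web UI; sshd is the whole “backend”.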
Honestly, for personal use I just switched to straight Markdown that I edit with Vim (w/ Vimwiki plugin) or Markor on Android and synchronize with Syncthing. Simple, low effort, portable, does enough of what I need to get the job done.
And if I wanna publish a read-only copy online I can always use an SSG.
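Even something as crude as a pandoc loop does the job if you don’t want a full SSG (output directory is arbitrary):

```
# Render every note to a standalone HTML page.
mkdir -p site
for f in *.md; do
  pandoc -s "$f" -o "site/${f%.md}.html"
done
```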
It has the benefit that the container can’t start before the mount point is up, with no additional scripts or kludges, so no race conditions or surprise behaviour. fstab alone can’t provide that guarantee. The other option is autofs, but it’s messier to configure and may not ship out of the box on modern distros.
Assuming systemd, create a file like
/etc/systemd/system/dir-to-mount.mount
And then configure it per the systemd docs:
https://www.freedesktop.org/software/systemd/man/latest/systemd.mount.html
Then modify the docker unit file to have a dependency on the mount unit so it’s guaranteed to be up before docker starts.
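A rough sketch of both pieces, assuming an NFS share mounted at /srv/media (note that the unit file’s name must match the mount path, per systemd-escape --path):

```
# /etc/systemd/system/srv-media.mount -- name derived from /srv/media
[Unit]
Description=NAS media share

[Mount]
What=nas.local:/volume1/media
Where=/srv/media
Type=nfs

[Install]
WantedBy=multi-user.target
```

Rather than editing docker.service itself, a drop-in keeps the dependency upgrade-safe:

```
# /etc/systemd/system/docker.service.d/10-require-mount.conf
[Unit]
Requires=srv-media.mount
After=srv-media.mount
```

Then `systemctl daemon-reload` and enable the mount unit, and Docker won’t start until the share is mounted.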
Frankly, I’d rather pay a motivated and focused developer if the product is good. And Symfonium is fantastic.
My vote: not if you can avoid it.
For casual home admins, docker containers are mysterious black boxes that are difficult to configure and even harder to inspect and debug.
I prefer lightweight VMs hosting one or more services on an OS I understand and control (in my case Debian stable), and only use docker images as a way to quickly try out something new before committing time to deploying it properly.
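e.g. something like this, where nothing persists after you hit Ctrl-C, so there’s nothing to maintain or clean up (the image is just an example):

```
# Kick the tires on a service, throw it away on exit.
docker run --rm -it -p 8080:80 nginx:alpine
```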
I wonder how long it’ll take before we finally collectively reject the SV ethos that size is the only metric that matters and success is only achieved via monopoly…
There was a time when Usenet and BBSes and IRC were tiny, and yet people still found value through community in those places.
Maybe, and I know this is a wild idea, platforms don’t have to include every human on the planet to be meaningful, relevant, or valuable.
Not if you use a Hurricane Electric tunnel for IPv6 transit. My ISP hands out v6 addresses and I still use HE, so I get a stable, globally routable /48 that moves with me (I switched ISPs recently and only had to update my tunnel endpoint; everything else just worked).
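For the curious, the Linux side of an HE 6in4 tunnel is only a few ip(8) commands; tunnelbroker.net generates the real values for your tunnel, these are placeholders:

```
ip tunnel add he-ipv6 mode sit remote <HE-endpoint-v4> local <your-public-v4> ttl 255
ip link set he-ipv6 up
ip addr add <your-he-prefix>::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
```

Switching ISPs just means updating `local` here and the client IPv4 on HE’s side.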
Take it to an electronics recycling center. Seriously.
If you already have a homelab, you plan to replace it, you don’t want to repair it, and you don’t have an obvious use case for another machine (it’s just another computer; you either have the need for another computer or you don’t), then holding onto it is just hoarding.