In the immortal words of Jake the Dog:
Dude, suckin’ at something is the first step to being sorta good at something.
We are or were all noobs once. Going away from the keyboard is often an undervalued step in the solution-finding process. Kudos!
Given the very specific dependencies that Immich has wrt. the Postgres plugins it needs, I’m certain that it’s not currently packaged as an RPM and I would even bet that it never will be (at least not as one of the officially supported packages put out by the developers).
Can confirm the statistics: I recently consolidated about a dozen old hard disks of various ages; quite a few of them had a couple of bad blocks and two actually failed. One disk was especially noteworthy in that it was still fast, error-free and without complaints. That one was a Seagate ST3000DM001, a model so notoriously bad that it’s got its own Wikipedia entry: https://en.wikipedia.org/wiki/ST3000DM001
Other “better” HDDs were entirely unresponsive.
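If you want to do the same kind of triage on old disks, smartctl from smartmontools is the usual tool. A quick sketch, with /dev/sdX as a placeholder for the disk under test:
sudo smartctl -H /dev/sdX       # overall health verdict
sudo smartctl -A /dev/sdX       # attributes: watch Reallocated_Sector_Ct and Current_Pending_Sector
sudo smartctl -t long /dev/sdX  # start a full surface self-test (takes hours on large drives)
Non-zero reallocated or pending sector counts are exactly the “bad blocks” symptom described above.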
Statistics only really matter if you have many, many samples. Most people (even enthusiasts with a homelab) won’t be buying hundreds of HDDs in their life.
Was about to post this; it works well for me.
In my case I’m storing the DB on my Google Drive for now, but Keepass2Android supports many different systems, including “generic” things like WebDAV, so really anything should work.
While Keepass2Android is integrated with the syncing and will always check for conflicts (i.e. check for latest version before saving), the same isn’t necessarily true for the desktop client. But since I rarely edit from both devices at the same time, anything that syncs to the Desktop in a somewhat realtime fashion should work just fine.
And for the few (long-ago) cases where updates were overwritten, the “previous version” feature of Google Drive was a godsend! (And KeepassX can simply merge the old overwritten version into the current one, so you end up with a correct merge.)
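For reference, if you use KeePassXC on the desktop, that merge can also be done from the command line. A sketch with placeholder file names (the first database is modified in place):
# merge the recovered old copy into the current database
keepassxc-cli merge Passwords.kdbx Passwords-recovered-old.kdbx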
I think the difference is at what level:
I feel OP’s critique has some truth to it. I personally would rather stay with raidz on ZFS, exactly because of its open nature (yes, they have bugs too, nothing is perfect).
Do you have any devices on your local network where the firmware hasn’t been updated in the last 12 months? The answer is surprisingly frequently yes, because “smart device” companies are laughably bad about device security. My intercom runs some ancient Linux kernel, my frigging washing machine could be connected to WiFi, and the box that controls my roller shutters hasn’t gotten an update since 2018.
Not everyone has such devices, and you could isolate them in VLANs and take other measures, but in this day and age “my local home network is 100% secure” is far from a safe assumption.
Heck, even your router might be vulnerable…
Adding HTTPS is just another layer in your defense in depth. How many layers you are willing to put up with is up to you, but it’s definitely not overkill.
They are in fact the same image, as you can verify by comparing their digest:
$ docker pull ghcr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for ghcr.io/linuxserver/plex:latest
ghcr.io/linuxserver/plex:latest
$ docker pull lscr.io/linuxserver/plex
Using default tag: latest
latest: Pulling from linuxserver/plex
Digest: sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144
Status: Image is up to date for lscr.io/linuxserver/plex:latest
lscr.io/linuxserver/plex:latest
$
See how both images have the digest sha256:476c057d677ff239d6b0b5c8e7efb2d572a705f69f9860bbe4221d5bbfdf2144? Since the digest uniquely identifies the exact content/image, that guarantees that those images are in fact byte-for-byte identical.
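Another quick cross-check once both tags have been pulled: they should resolve to the same local image ID. A sketch assuming both tags are present locally:
docker image inspect --format '{{.Id}}' ghcr.io/linuxserver/plex:latest
docker image inspect --format '{{.Id}}' lscr.io/linuxserver/plex:latest
If both commands print the same ID, the two tags point at the exact same image.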
As others have mentioned (and also explained in quite some detail), you’re trying to bite off a lot at once. For running Jellyfin locally, you can ignore most of that.
And if you really want to learn the ins and outs of all that (and I can recommend it, it’s useful), then I suggest you start with some simple web app. Something like note taking, or maybe even something trivial like a whoami service, which basically just echoes back the request information it receives. That’s super useful because you know it is unlikely to be broken, so you can focus on the networking/port-forwarding issues. Once you’ve got that working and have a rough feeling for how this all works, you can go on to more complex setups that actually do something useful.
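If Docker is part of your setup (a common way to run Jellyfin anyway), a whoami service is a one-liner to get started with. A sketch using the public traefik/whoami image; the port and server IP are placeholders:
# run the echo service and expose it on port 8080 of the host
docker run -d --name whoami -p 8080:80 traefik/whoami
# then, from another machine on your network:
curl http://<server-ip>:8080
Once that responds from another machine, you know the host, port forwarding and firewall are fine and can move on to the next layer.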
Those are usually the prefixes for interfaces, which are not quite the same thing as networks. An interface is what connects a device to a network. For example, if your router treats its WLAN and its wired network as a single network (i.e. everything on WLAN can see everything on wired and vice versa), then a specific device might still have a wlan1 and an eth1 interface, one per physical network adapter, while both sit in the same network.
“One network” here really only means “something can successfully route between all the devices”.
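On Linux you can see the interface/network distinction directly with iproute2 (interface names and the address below are just examples):
ip -br addr                 # interfaces and the addresses assigned to them (e.g. eth1, wlan1)
ip route                    # which interface the kernel uses to reach which network
ip route get 192.168.1.1    # how a specific destination would actually be reached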
The EULA is just standard terms: don’t try to circumvent the license requirement, if you buy a license don’t share it with other people, some warranty and liability stuff, etc.
Yes, I know. I actually read it (which is rare) and it’s mostly sensible stuff. The “no reverse engineering” clause just felt weird in something that claims to be “mostly open source”.
In the end I find it slightly misleading to call this open-core when the app with just the non-commercial features can’t be built fully from the published source.
They are not necessary for the basic core functionality, but the app doesn’t work without them, since otherwise the license requirement could easily be disabled, as I mentioned before.
I don’t quite understand this argument. If I can build a development version, I can run any and all code in the repo (while providing an existing xpipe installation), and I would somehow be able to ship this if I had criminal intent, so how exactly does this requirement prevent that?
In other words: if the only way to access the commercial features without a license is by doing something illegal then … that’s not really adding much burden, is it?
In the end I’m probably just one of the open-source proponents that don’t like that, and that’s fine. Not everyone needs to agree with everyone, there’s a lot of space here where reasonable minds can disagree. I just think that claiming “the main application is open source” when it can’t be built purely from the source is a bit misleading.
This looks really interesting.
I don’t mind the commercialization at all and think it’s actually a good sign for an open source project to have a monetization strategy to be able to hang around.
But why do I have to agree to a EULA on an Apache-licensed piece of software? I understand that for the commercial features that might be necessary, but in that case could we get a separate installer for “this is all Apache-licensed, no need for a EULA”?
Additionally, the contribution file mentions that “some components are only included in the release version and not in this repository”. What are these components? Are they necessary for the basic core functionality?
The issue is that, according to the spec, the two DNS servers provided by DHCP are equivalent. While most clients favor the first one as the default, that’s not universally the case, and when and how a client switches to the secondary varies by client (and can effectively appear random). So you won’t be able to know for sure which clients use your DNS, especially after your DNS server was unreachable for a while for whatever reason. Personally I’ve “just” gotten a second Pi to run a redundant copy of PiHole, but only having a single DNS server is usually fine as well.
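If your DHCP server happens to be dnsmasq-based (PiHole’s built-in DHCP server and many router firmwares are), you can at least make sure it only ever hands out your PiHole(s) as DNS. A sketch with placeholder addresses and file path:
# e.g. in a drop-in like /etc/dnsmasq.d/99-dns.conf:
# hand out only the two PiHole instances via DHCP option 6 (dns-server)
dhcp-option=option:dns-server,192.168.1.2,192.168.1.3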
Hint: you don’t need to route all your traffic through your VPN to make use of the PiHole ad-blocking: just DNS. If your home internet is even moderately stable/good, then this should barely affect your roaming internet experience, since DNS traffic is such a small part of all traffic.
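With WireGuard that’s mostly a matter of what you put into AllowedIPs on the phone. A sketch of the relevant client-config lines, with all keys, addresses and the endpoint as placeholders:
[Interface]
PrivateKey = <phone private key>
Address = 10.0.0.2/32            # tunnel address of the phone
DNS = 10.0.0.1                   # the PiHole as reachable through the tunnel

[Peer]
PublicKey = <server public key>
Endpoint = your-home.example.org:51820
# Only traffic to the PiHole goes through the tunnel; everything else stays outside.
AllowedIPs = 10.0.0.1/32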
Also, since I’m already mirroring the configuration of my PiHole instance to a secondary one, I’m considering putting a tertiary one on some forever-free cloud server instance and just using that when not at home (put it into the same wireguard vpn to prevent security nightmares). That way my roaming private DNS wouldn’t even depend on my home internet.
Sidnenote about the PI filesystem self-clobbering: Are you running off of an SD card? Running off an external SSD is way more reliable in my experience. Even a decent USB stick tends to be better than micro-SD in the long run, but even the cheapest external SSD blows both of them out of the water. Since I switched my PIs over to that, they’ve never had any disk-related issues.
IMO set up a good incremental backup system with deduplication and then back up everything at least once a day as a baseline. Anything that’s especially valuable can be backed up more frequently, but the price/effort of backing up at least once a day should become trivial if everything is set up correctly.
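As one possible shape of that baseline, here’s a sketch with restic (repository location, paths and retention numbers are just placeholders):
# daily incremental, deduplicated backup
restic -r /mnt/backup/repo backup /home /etc --exclude-caches
# keep a reasonable retention window, drop everything else
restic -r /mnt/backup/repo forget --keep-daily 14 --keep-weekly 8 --keep-monthly 12 --prune
Run that from cron or a systemd timer once a day; because only changed data is stored, the per-run cost stays small.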
If you feel like hourly backups would be worth it but too resource-intensive, then local snapshots of the file system (which are basically free, if your OS/filesystem supports them) might be a reasonable replacement. Those obviously don’t protect against hardware failure, but they do help against accidental deletion.
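If your pool happens to be ZFS, such a local snapshot is a single cheap command (the dataset name is a placeholder; btrfs and LVM have equivalents):
# take a snapshot with a timestamp in its name, e.g. hourly
zfs snapshot tank/data@hourly-$(date +%Y%m%d-%H%M)
# list snapshots; files can be browsed read-only via the hidden .zfs/snapshot directory
zfs list -t snapshot tank/data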
What you describe is true for many file formats, but for most lossy compression systems the “standard” basically only strictly specifies how to decode the data, and any encoder that produces output that successfully decodes that way is fine.
The standard defines a collection of “tools” that encoders can use, and how exactly to use, combine and tweak those tools is up to the encoder.
Over time, new and better combinations of these tools are found for specific scenarios. That’s how different encoders for the same codec can produce very different output.
As a simple example, almost all video codecs by default describe each frame relative to the previous one (i.e. they describe which parts moved and what new content appeared). There is of course also the option to send a completely new frame, which usually takes up more space. But when one scene cuts to another, sending a new frame can be much better. A “bad” encoder might not have scene-change detection and still try to “explain the difference” relative to the previous scene, which can easily take up more space than just sending the entire new frame.
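You can poke at exactly that knob with ffmpeg’s libx264 encoder (file names are placeholders; scenecut is x264’s scene-change detection):
# default: x264 detects scene changes and starts a fresh keyframe there
ffmpeg -i input.mp4 -c:v libx264 -crf 23 with-scenecut.mp4
# detection disabled: cuts are coded as differences to the previous scene instead
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -x264-params scenecut=0 no-scenecut.mp4
Comparing the two outputs on footage with hard cuts gives a feel for how much such an encoder-side decision can matter.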
Note that spinning hard disks down and up repeatedly comes with a reliability drawback. Perhaps unintuitively, HDDs that spin constantly can live much longer than those that spend 90% of their time spun down.
This might not be relevant if you use only SSDs, and might never affect you, but it should be mentioned.
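If you want to check or change that behaviour on a given drive, hdparm can do it (the device name is a placeholder):
sudo hdparm -C /dev/sdX     # is the drive currently active/idle or in standby?
sudo hdparm -S 0 /dev/sdX   # standby timeout; 0 disables automatic spindown
The SMART attributes Start_Stop_Count and Load_Cycle_Count tell you how often a drive has already gone through that cycle.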
This feels like an XY problem. To be able to provide a useful answer, we’d need to know what exactly you’re trying to achieve. What goal are you trying to achieve with the VPN, and what goal are you trying to achieve by using the client IP?
Note that just because everything is digital doesn’t mean something like that isn’t necessary: If you depend on your service provider to keep all of your records then you will be out of luck once they … stop liking you, go out of business, have a technical malfunction, decide they no longer want to keep any records older than X years, …
So even in an all-digital world I’d still keep all the PDF artifacts in something like that.
And I also second the suggestion of paperless-ngx (even though I haven’t been using it for very long yet, it’s working great so far).
You’ve got a single, old HDD attached via USB. There are plenty of places that could be the bottleneck here, but that’s among the first I’d check: can you actually read from that HDD significantly faster than your network transfer speed? Check that locally first. There’s no use in optimizing anything network-related when your underlying disk I/O is slow.
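A quick way to check that, assuming the drive shows up as /dev/sdX and is mounted under /mnt/usb (both placeholders):
# raw sequential read speed of the device
sudo hdparm -t /dev/sdX
# read a large existing file through the filesystem (iflag=direct skips the page cache)
dd if=/mnt/usb/some-large-file of=/dev/null bs=1M iflag=direct status=progress
If those numbers are already close to (or below) your network transfer speed, the disk is your bottleneck, not the network.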