Restic has a feature where you can copy snapshots from one repo to another, if that’s what you mean
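As a sketch of how that might look on the command line (repo paths and the password file are placeholders; recent restic versions use the --from-repo flag, and without snapshot IDs all snapshots get copied):

```shell
# copy snapshots from repo A into repo B (paths/passwords are placeholders)
export RESTIC_PASSWORD='destination-repo-password'
restic -r /srv/backup-b init                      # initialise the destination repo
restic -r /srv/backup-b copy \
  --from-repo /srv/backup-a \
  --from-password-file /root/.restic-a-pass       # password of the source repo
```

Note that copied snapshots are re-chunked, so the two repos won't deduplicate against each other unless they share chunker parameters.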
I saw a GitHub issue where someone reported the same.
Trigkey S5 with an AMD 5700U and 24 GB of RAM. As I said, it’s still slow, even after indexing has finished.
I’d really love to use it. I tested PhotoPrism, but its mobile app lacks almost everything Immich offers.
You should really take some time to learn it. It’s a godsend.
It is great, but the mobile app becomes slow AF when I import all my Google Photos (thousands of them), even after indexing has finished.
Edit: Scratch what I said! Just gave Immich another shot, and the slow mobile app was due to the initial background sync running.
I’m not familiar with ownCloud.
But can’t you set something like “http://127.0.0.1” as the domain?
I don’t know of any beginner tutorial, since I learned it along the way.
But in a nutshell: most web servers/reverse proxies (nginx, Caddy, Traefik) are configured manually. However, there’s Nginx Proxy Manager, which gives you a web GUI.
Regarding DNS: you need DNS regardless of a fixed IP. What you probably mean is DynDNS (dynamic DNS), which you’ll definitely need if your IP changes.
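To make “manual” concrete: with Caddy, for example, a reverse proxy is just a couple of lines, and the TLS certificate is obtained automatically (hostname and upstream port here are made-up examples):

```text
# Caddyfile -- lab.example.com and port 8096 are placeholders
lab.example.com {
    reverse_proxy 127.0.0.1:8096
}
```

nginx needs a bit more boilerplate for the same result, which is exactly the gap Nginx Proxy Manager fills with its GUI.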
Where I live they’re rare too. They used to be more common back in the day, but now they’re mostly offered to business customers.
But you’re right… “hopefully” could easily have been misinterpreted as “hoping the IP doesn’t change anytime soon, or ever”.
By hopefully… I actually meant that OP might have a static IP already.
To add to that… if OP owns a domain, they could issue an SSL cert for a subdomain like lab.example.com, point the A record to the (hopefully static) IP of the router, and port-forward 443 to Pi-hole.
Most of the Docker services use mounted folders/files, which I usually store in the user’s home folder, e.g. /home/username/Docker/servicename.
Now, my personal habit is to keep user folders on a separate drive and mount them into /home/username. Additionally, one can mount /var/lib/docker this way too. I also spin up all of these services with Portainer. The benefit is that if the system breaks, I don’t care that much, since everything lives on the separate drive. If I ever need to set everything up again, I just spin up Portainer, which does the rest.
However, this is not a backup; that should still be done separately one way or another. But it’s certainly safer than putting all your trust in a single drive/SD card etc.
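As a sketch, the separate-drive setup above can be expressed in /etc/fstab with one normal mount plus bind mounts (the UUID and paths are placeholders, adjust to your layout):

```text
# /etc/fstab -- UUID and paths are placeholders
UUID=1234-abcd-5678   /mnt/data          ext4   defaults   0 2
/mnt/data/home/user   /home/user         none   bind       0 0
/mnt/data/docker-lib  /var/lib/docker    none   bind       0 0
```

Everything the OS writes to /home/user or /var/lib/docker then physically lands on the data drive.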
Regarding the SMB share, let me try to clarify.
Let’s say you have three machines: 192.168.1.10/20/30. On machine 10 there’s a folder “synology”, which has a network folder from machine 20 mounted onto it: mount -t nfs 192.168.1.20:/some/folder synology. Now you want to access that folder on machine 30. Here you can’t use mount -t nfs, but MUST use mount -t cifs instead, because you cannot forward a mounted share. However, this is not the problem; it’s just a description of my current setup.
Regarding the ownership: your point is very valid, but I’ve ruled that out already. I did a so-called bind mount within Synology with the exact same user permissions as in the user’s home folder, but this didn’t work.
FYI: a bind mount is where you have two folders, /foo (with many sub-folders and files) and /bar (empty). If you do mount --bind /foo /bar, the system treats /bar as a real folder with the sub-folders and files (from /foo, including their permissions).
you’re right… I’m already evaluating it now.
Just came across xpenology and surprisingly I managed to set it up without a hassle. I think this is my final solution I’ll be going with.
The thing that amazed me the most: I could import my Google Drive files, and it converted them so they work in Synology Office… Mind-blowing!
I tried that… although I had some issues setting it up.
What’s funny is that I thought this would be the hacky solution, with Xpenology being the real deal.
Sorry, due to a typo there’s some confusion. I do run Proxmox bare-metal, with two VMs: Fedora and Xpenology.
Maybe my comment was a bit misleading, but I do run my Fedora VM within Proxmox too. So I have two VMs: Fedora and Xpenology.
I was just wondering whether it even makes sense to have the additional VM running alongside Xpenology.
I’d really prefer to avoid Nextcloud, since it was very slow compared to Synology on the same hardware.
But thanks for the hint about CasaOS. Didn’t know about that one; will definitely have a look at it.
I’d highly recommend taking a deeper look into Docker. While it might look complicated at first, it really isn’t. Once you get the gist of it, your setup life will be much simpler in the future.
In a nutshell: say you need to run Jellyfin (or whatever).
Generally, you’d need to install Jellyfin from the repos or download its binary, etc. Then you’d have to dig through the configuration process, with files scattered all across the system. In some cases you’d probably have to copy/move/symlink media files around, etc.
With Docker, however, you just spin up Jellyfin as a container and bind the necessary configuration and media files to that container, which is usually a one-liner.
So instead of having config files scattered all around the place, you can have something like ~/Docker/configs/jellyfin and bind that folder (or file) to the container’s /etc/jellyfin. And you can use the same approach to keep your media files in ~/Movies and bind that to Jellyfin’s /data folder. These are just examples; you’ll have to look up where the Docker containers expect the files to be, which is usually well documented.
And the final step is to bind the container’s ports to the host, so you can interact with the service as if it were running on the host.
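Putting the pieces above together, a hypothetical docker-compose.yml for Jellyfin could look like this (the host paths and port are just examples; check the image’s documentation for the exact container paths it expects):

```yaml
# docker-compose.yml -- paths and port are placeholders
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"                         # expose the web UI on the host
    volumes:
      - ~/Docker/configs/jellyfin:/config   # all config in one host folder
      - ~/Movies:/media/movies:ro           # media, mounted read-only
    restart: unless-stopped
```

A single docker compose up -d then starts the whole thing, and wiping/recreating the container never touches your config or media folders.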
Actually… I think I like fastmail
Thank you, still working on my GH profile and donation page.