I have one of these with PFSense on it. Works great, but when I had it in a hot room I had to zip tie a 120mm fan to it 😀
Out of curiosity, what are the primary differences from Immich? I’ll be starting down this path soon.
Good.
By data do you mean image?
If so, make sure you run `docker compose build` after pulling the repo, and also make sure you stop and remove the container (`docker stop [container]; docker rm [container]`) if you're not using `docker compose down`. If you don't remove the container, it will keep using the old image.
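For reference, a minimal sketch of that update flow, assuming the container is managed by compose:

```sh
git pull                # grab the updated repo
docker compose build    # rebuild the image from the new code
docker compose down     # stop and remove the old container
docker compose up -d    # recreate it from the freshly built image
```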
Good suggestions at the bottom.
There are several indicators that could have been used to discover the attack from day one:
- All issued SSL/TLS certificates are subject to Certificate Transparency. It is worth configuring Certificate Transparency monitoring, such as Cert Spotter (source on GitHub), which will notify you by email of new certificates issued for your domain names.
- Limit validation methods and pin the exact ACME account allowed to issue new certificates, using Certification Authority Authorization (CAA) Record Extensions for Account URI and Automatic Certificate Management Environment (ACME) Method Binding (RFC 8657). This prevents certificates being issued for your domain through other certificate authorities, ACME accounts, or validation methods; see the example record below.
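For illustration, a CAA record combining both RFC 8657 extensions might look like the following (the account URI is a made-up placeholder; use the URI your ACME client actually registered):

```
example.com.  IN  CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1234567; validationmethods=dns-01"
```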
Post title is missing Linode.
AI? Not bloated? Mmm. Will stick with Gitea.
None. One year, Lenovo has the best Linux support. Another year, it might not. One year, Logi makes a solid mouse. Another year, they do not. One year, a company makes a great product. Another year, there is a privacy scandal.
Look at what the devs of the projects you use recommend, and read reviews from within 3 months of your purchase. Don’t pick a brand, pick a product.
I’ll try to answer the specific question here about importing data and sandboxing. You wouldn’t have to sandbox, but it’s a good idea. If we think of a Docker container as an “encapsulated version of the host”, then let’s say you have:
- `Service A` running on your cloud
  - `apt-get install -y this that and the other` to run
  - `/data/my-stuff` for its data
- `Service B` running on your cloud
  - `apt-get install -y other stuff` to run
  - `/data/my-other-stuff` for its data
In the cloud, the `Service A` data can be accessed by `Service B`, increasing the attack surface for a leak. In Docker, you could move all your data from the cloud to your server:
```sh
# On the cloud machine
cd /
tar cvfz data.tgz data
# (copy data.tgz to the local server, e.g. with scp, say into /tmp)

# On the local server
mkdir -p /local/server
cd /local/server
tar xvfz /tmp/data.tgz
# Now you have /local/server/data as a copy
```
Your `Dockerfile` for `Service A` would be something like:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y this that and the other
RUN <whatever installs Service A>
CMD <whatever runs Service A>
```
Your `Dockerfile` for `Service B` would be something like:
```dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y other stuff
RUN <whatever installs Service B>
CMD <whatever runs Service B>
```
This makes two unique “systems”. Now, in your `docker-compose.yml`, you could have:
```yaml
version: '3.8'
services:
  service-a:
    image: service-a
    volumes:
      - /local/server/data:/data
  service-b:
    image: service-b
    volumes:
      - /local/server/data:/data
```
This would make everything look just like the cloud, since `/local/server/data` would be bind mounted to `/data` in both containers (services). The proper way would be to isolate:
```yaml
version: '3.8'
services:
  service-a:
    image: service-a
    volumes:
      - /local/server/data/my-stuff:/data/my-stuff
  service-b:
    image: service-b
    volumes:
      - /local/server/data/my-other-stuff:/data/my-other-stuff
```
This way each service only has access to the data it needs.
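If you want to sanity-check the isolation after bringing the stack up, listing `/data` inside each container should show only that service’s directory (a sketch, assuming the compose file above):

```sh
docker compose up -d
docker compose exec service-a ls /data   # should show only my-stuff
docker compose exec service-b ls /data   # should show only my-other-stuff
```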
I hand typed this, so forgive any errors, but hope it helps.
Probably, but it’s a surprisingly lightweight system. And if you are self-hosting a calendar, it offers a lot of room for expansion when the time inevitably comes.
That wouldn’t work, but I see what you’re looking to do. Open an issue and I’ll make a flag 😉
Nextcloud of course. And nothing beats the 1-day “week” view widget of Business Calendar Pro. Looks just like a desktop view. It has the best year view, too.
Is there a reason it needs a PK vs just being able to point it at a local folder and running as a user with write access?
Nice! Thanks.
How does it compare to https://github.com/guillaumekln/faster-whisper?
I’ve been using Faster Whisper for a while locally, and it’s worked out better than raw Whisper and benchmarks really well. Just curious if there are any reasons to switch.
On the GitHub issues page would be a good place. That particular option would just be added to the query.
What speed do you get from Prisma?
I prefer query builders like slonik, or just raw SQL. Prisma does crazy stuff with joins, which turns what should be a simple query into 300 queries. It’s a well-documented problem in their issue tracker. I’ve not worked on a single repo that didn’t eventually move away from it as it grew, including in a professional capacity. On top of that, you put in an ORM and everyone ends up using the same DB anyway, so you lose out on potential optimizations.
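To make that concrete with a sketch (hypothetical `users`/`posts` tables): raw SQL or a builder lets the database do the join in one round trip, whereas ORM relation loading can fan the same read out into many separate queries:

```sql
-- One query, one round trip: users joined to their posts in the database.
SELECT u.id, u.name, p.title
FROM users u
JOIN posts p ON p.user_id = u.id
WHERE u.active;
```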
Nice job! But Prisma… Prisma… Why do people still use prisma?
Yeah, mine was hung on the wall with an air gap; it still needed the fan, hah.