not yet, it is planned but not there yet
is there an advantage to having apps embedded as iframes, as opposed to opening the url in a new tab?
like forwarding auth?
the only annoying thing is that it is not possible to spin up more than one homepage instance at the same time.
so i have one homarr and one homepage
for docker the syntax is --gpus all
https://docs.docker.com/config/containers/resource_constraints/#expose-gpus-for-use
bonus: syntax to expose the gpu in a docker compose
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
i have a public ip and my own domain attached to that. i use subdomains for each service and a dashboard on the root domain.
i don’t use authelia etc, and rely on the authentication page of each service, but i have fail2ban.
i did help them set up the apps, but they took it from there. the dashboard on the root domain helps them navigate all the services without having to remember the full urls.
yes… maybe.
as the dev said, it flags a lot of false positives, so a human should look at them anyway.
maybe when this is a bit more evolved, we can use it to preprocess posts, and if a post gets flagged for something, a mod / admin needs to approve the post manually.
maybe for CSAM, it gets sent to an external service specialized in that stuff, so the mod / admin doesn’t have to look at the images.
it uses a model that describes a photo, then it searches the generated description for certain terms and ranks the image into some levels of safety.
to test it, you use a more general filter, for example all nsfw, and see if the matches are correct.
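to make the idea concrete, here is a rough python sketch of that describe-then-match step, assuming the clip-interrogator package; the term lists and safety levels are made up for illustration, the real tool differs in the details:

# minimal sketch of the describe-then-match idea, not the actual tool's code.
# the term lists and safety levels below are invented for illustration.
from clip_interrogator import Config, Interrogator
from PIL import Image

# hypothetical term lists, checked from most to least severe
TERM_LEVELS = {
    "unsafe": ["nude", "naked", "nsfw"],
    "suggestive": ["lingerie", "underwear", "bikini"],
}

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

def rank_image(path: str):
    """describe the image, then match the description against the term lists."""
    description = ci.interrogate(Image.open(path).convert("RGB")).lower()
    for level, terms in TERM_LEVELS.items():
        if any(term in description for term in terms):
            return level, description
    return "safe", description

level, description = rank_image("upload.jpg")
print(level, "-", description)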
if it helps, here is my setup from bare metal to 30+ services. https://github.com/simone-viozzi/my-server
including off-site encrypted backups
of course, i wrote this for myself, so most of the stuff is written like garbage, but feel free to open an issue and i will fix it
to run on ROCm, you need a specific version of pytorch.
but it is still in beta, so i would not expect it to run well
the model under the hood is clip interrogator, and it looks like it is just the torch model.
it will run on cpu, but we can do better: an onnx version of the model would run a lot better on cpu.
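for reference, the usual torch -> onnx -> onnxruntime path looks something like this; resnet18 is only a stand-in here, the real job would be exporting the clip image encoder that the interrogator uses:

# sketch of exporting a torch model to onnx and running it on cpu with onnxruntime.
# resnet18 is a placeholder model, not the clip-interrogator pipeline.
import numpy as np
import onnxruntime as ort
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx", input_names=["input"], output_names=["output"])

# onnxruntime defaults to the cpu execution provider and is usually
# noticeably faster on cpu than running the eager torch model
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
output = session.run(None, {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)})[0]
print(output.shape)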
check the transcoding section on the server and on the phone
All the services available from the internet just go through traefik to terminate https; I rely on the built-in authentication of each service. To add another layer of security, I have fail2ban active on all those services.
I have a public IP, and on my router I have opened ports 80, 443, a random port for ssh, and one for the vpn.
Memory:
System RAM: total: 8 GiB available: 7.73 GiB used: 4.46 GiB (57.7%)
Report: arrays: 1 slots: 4 modules: 2 type: DDR3
CPU:
Info: 6-core model: AMD Phenom II X6 1090T bits: 64 type: MCP cache: L2: 3 MiB
Graphics:
Device-1: NVIDIA GP107 [GeForce GTX 1050 Ti] driver: nvidia v: 535.98
All the docker compose files + how I configured everything is available at: https://github.com/simone-viozzi/my-server
Since I like the ability of btrfs to do snapshots, I created all the important docker volumes as btrfs subvolumes. Then I wrote a backup script that literally sends the subvolume (encrypted) to an external cloud. This does not allow incremental backups and is most likely not the best backup solution… but it works… the repo is: https://github.com/simone-viozzi/btrfs2cloud-backup
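The core of it is just a pipeline like the following rough Python sketch; the paths, the rclone remote name and the passphrase file are placeholders, and the actual script in the repo is structured differently:

# sketch: snapshot a subvolume, stream it with `btrfs send`, encrypt it with gpg
# and pipe it straight to the cloud with rclone. all paths/names are placeholders.
import subprocess
from datetime import date

SUBVOL = "/srv/docker-volumes/nextcloud"                       # placeholder subvolume
SNAP = f"{SUBVOL}-backup-{date.today()}"                       # read-only snapshot
REMOTE = f"cloud:backups/nextcloud-{date.today()}.btrfs.gpg"   # placeholder rclone remote

# read-only snapshot so the send stream is consistent
subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SUBVOL, SNAP], check=True)

send = subprocess.Popen(["btrfs", "send", SNAP], stdout=subprocess.PIPE)
gpg = subprocess.Popen(
    ["gpg", "--batch", "--pinentry-mode", "loopback",
     "--passphrase-file", "/root/backup.pass", "--symmetric", "-o", "-"],
    stdin=send.stdout, stdout=subprocess.PIPE,
)
send.stdout.close()  # drop our copy so `btrfs send` gets a broken pipe if gpg dies
subprocess.run(["rclone", "rcat", REMOTE], stdin=gpg.stdout, check=True)
gpg.stdout.close()

# the snapshot was only needed for the send, delete it after the upload
subprocess.run(["btrfs", "subvolume", "delete", SNAP], check=True)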
if you configure homepage with docker labels and have multiple homepage instances, they will all be the same, since there is no way to specify the instance in the labels.
reference: