NPM as in nginx and not Node Package Manager?
When you said Jellyfin streaming isn’t working - are you able to actually get to Jellyfin UI and its the stream failing, or you can’t access Jellyfin at all via nginx?
It depends on whether a whole-season torrent exists. If Sonarr can identify one that covers the whole season, it should download that when you search at the season level. If you've searched one episode at a time, you'll get single episodes.
You can do an interactive search and, IIRC, specify the full season during that search
Check the Activity page to see if it's stuck on found/downloading/extracting/importing
Check that your trackers/sources aren't down
Check log.txt for exceptions
When you tried Caddy and received an error, it looks like you used the wrong image name.
You then mentioned deleting the Caddyfile because the configuration didn't work. But, if I am following correctly, the Caddyfile wouldn't even be relevant yet if the Caddy container never actually ran.
Pulling from Caddy's docs, you should just need to run

$ docker run -d -p 80:80 \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v caddy_data:/data \
    caddy

where $PWD is the directory your terminal is currently in.
Further docs for configuring HTTPS can be found here, under
Automatic TLS with the Caddy image
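Once the container is up, a minimal Caddyfile sketch for putting Jellyfin behind it could look like the following - the domain and the jellyfin:8096 upstream are assumptions, so substitute your own host name and Jellyfin address:

```
# Hypothetical example - media.example.com and jellyfin:8096 are placeholders
media.example.com {
    reverse_proxy jellyfin:8096
}
```

Note you'd also need to publish port 443 (another -p flag) for HTTPS; with a publicly reachable domain on 80/443, Caddy obtains and renews the TLS certificate automatically.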
Like the other commenter said, regardless of Podman or Docker you will need to handle port forwarding and any firewall changes.
Port forwarding through Docker and Podman is pretty similar, if not identical.
I have heard good things about podman but I personally had some strange issues when moving from docker to podman, specifically transferring docker networks to the podman equivalent.
When I'm home I'll get an example from my setup 👍
I'm on my phone, so I don't have access to give you a good example right now.
You see in your compose file in your original post you have '8080:8080' under ports?
You should be able to add another line, with the left-hand side of the colon exposing a different host port, like so:

…
ports:
  - '8080:8080'
  - '9090:9090'
…

Then you can access one service on port 8080 and the other on 9090.
Then under each service you want to expose, you add the other port mappings:

qtorrent:
  ports:
    - 8080:8080

sabnzb:
  ports:
    - 9090:8080
edit - so you should end up with the VPN container exposing 8080, which points to the service exposing 8080, which maps to the application listening on 8080
and the same for 9090 -> 9090 -> 8080
In the VPN service you just expose the port you want and map it to the listener port on the service:

vpn:
  ports:
    - 5000:8080
    - 6000:8080

where you have, for example, serviceA listening on 8080 and serviceB also listening on 8080, but exposed on 5000 and 6000 in the VPN service.
You can also map different ports to the container. For the sake of argument, let's say qtorrent had a fixed port you cannot change - that's just what the application listens on. You can then map a different host port to that application port.
tl;dr, OP: two containers sharing the same network (e.g. behind the same VPN container) can't both listen on the same container port
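To tie the snippets above together, here's one way it can end up looking. This is only a sketch, not OP's actual setup: the gluetun VPN image, the linuxserver app images, and moving sabnzb's listener to 9090 are all assumptions:

```yaml
# Sketch only - image names and ports are assumptions, adjust to your setup
services:
  vpn:
    image: qmcgaw/gluetun            # the VPN container publishes every port
    ports:
      - '8080:8080'                  # qtorrent web UI
      - '9090:9090'                  # sabnzb web UI, reconfigured to listen on 9090
  qtorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: service:vpn        # shares the vpn container's network
  sabnzb:
    image: lscr.io/linuxserver/sabnzbd
    network_mode: service:vpn        # same namespace, so it can't also listen on 8080
```

Because both apps share the VPN container's network namespace, only one of them can keep the default 8080 listener; the other has to be moved to a different port, and all ports get published on the vpn service (compose doesn't allow a ports: section on a service using network_mode: service:…).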
made things slow
That's probably referring to how file systems are handled. Accessing the Windows file system from inside WSL (e.g. under /mnt/c) is slower than keeping files on WSL's own Linux file system.
Unrestricted
yes
Install an OS on the card to boot from? It's the same process as making a bootable live USB stick.
The performance will be poor compared to an SSD, and the many read/write operations will reduce the longevity of the card.
Some points to consider.
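The flashing step is the same raw copy you'd do for a USB stick and can be sketched with dd. The ISO name and /dev/sdX below are placeholders, and the last lines demonstrate the identical copy harmlessly using ordinary files:

```shell
# The real command looks like this - /dev/sdX is a placeholder; ALWAYS confirm
# the device with lsblk first, because dd overwrites the target completely:
#   sudo dd if=linux.iso of=/dev/sdX bs=4M status=progress conv=fsync

# Harmless demonstration of the same raw copy using plain files:
head -c 1M /dev/urandom > demo.iso        # stand-in for a downloaded ISO
dd if=demo.iso of=demo-card.img bs=4M conv=fsync 2>/dev/null
cmp demo.iso demo-card.img && echo "copy verified"
```

conv=fsync makes dd flush the data before exiting, so it's safe to pull the card once the command returns.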
good luck have fun!