Do you still want to use your DL380? If so, it might make a good Moonlight host!
I don’t use Unraid, but my advice for everyone is that you can’t have too many backups of data that you really care about; use the 3-2-1 rule at a minimum (at least 3 copies, on 2 different types of media, with 1 copy offsite).
Also, welcome to your new hobby; you will love and hate it at the same time sometimes :D
Hmm, the last line in the log above says:
“[Fatal] ConsoleApp: The requested address is not valid in this context. This can happen if another instance of Sonarr is already running, another application is using the same port (default: 8989), or the user has insufficient permissions. Press enter to exit.”
So that sounds like the container might be running but Sonarr is not. Did you ever get it working?
Your netstat output shows a process named docker-proxy using that port, which confirms what the log says. If your container isn’t running, you can try to find the process using the port with netstat or lsof; it might be a stale container process or something, but a reboot is often faster than figuring out what it is, and that should clear up whatever is holding the port.
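If you do want to hunt it down, something like this should work (assuming a Linux host with netstat or lsof installed; 8989 is just Sonarr’s default port):

```
# Show the PID/name of whatever is listening on Sonarr's default port
sudo netstat -tlnp | grep 8989

# Same idea with lsof
sudo lsof -i :8989
```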
In addition to all of the suggestions here, you can easily do this with almost all major DNS providers today, like Cloudflare and AWS Route 53; there are many community containers and scripts to keep the record in sync, depending on what else you are running on your network.
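As a minimal sketch of the idea against the Cloudflare API (ZONE_ID, RECORD_ID, CF_API_TOKEN, and home.example.com are all placeholders you would swap for your own values):

```
#!/bin/sh
# Look up the current public IP, then push it to an existing A record.
WAN_IP=$(curl -s https://ifconfig.me)

curl -s -X PUT \
  "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${WAN_IP}\",\"ttl\":300}"
```

Run that from cron every few minutes and the record stays in sync; the community containers mostly do the same thing with retries and caching on top.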
In addition to those things, you can also thin provision LVM volumes, which is helpful sometimes, and LVM even has built-in caching. It really is just a much more flexible way of using a disk. It is not an analog for RAID; you would typically use a RAID volume with LVM on top.
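For example, a rough sketch of that layering (device and volume names here are made up; /dev/md0 stands in for whatever your RAID volume is):

```
# Put LVM on top of the RAID volume
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# Create a thin pool, then a thin volume inside it
lvcreate -L 500G -T vg0/pool0
lvcreate -V 1T -T vg0/pool0 -n data
```

Note the thin volume (-V 1T) can be bigger than the pool; space is only consumed as data is actually written, which is the whole point of thin provisioning.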
The purpose of using nginx is to avoid having to use the port number in this scenario. The reason it works is that your DNS for that hostname still points to the machine both containers are running on; normal DNS A and CNAME records do not contain port information.
The 502 Bad Gateway error means that nginx is not able to connect to the upstream host for that hostname; this is where you need to use the port for the other container (5870). Do know that using localhost inside Docker will not have the results you are expecting. If these are on the same host, you can use the name you have configured for the container as the hostname in nginx; otherwise use the host IP. In your case it would be http://listmonk_app:5870.
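Roughly, the relevant nginx config would look something like this (listmonk.example.com and the conf path are stand-ins; the container name and port are from your setup):

```
# Minimal sketch of the server block, written out as a drop-in conf file
cat > /etc/nginx/conf.d/listmonk.conf <<'EOF'
server {
    listen 80;
    server_name listmonk.example.com;

    location / {
        # The container name resolves via Docker's embedded DNS when
        # both containers share a user-defined network
        proxy_pass http://listmonk_app:5870;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
nginx -s reload
```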
Hope that helps!
Oh yeah, for sure, every time I’m like “it can’t be spanning tree” it is spanning tree. Do you mean copper vs fiber? LC connectors can carry a variety of speeds, but generally, yeah, I try to use fiber or shielded DAC cables wherever I can.
So, just to double check, it doesn’t work across the Ubiquiti switch? If so, you will need to enable jumbo frames on it; that is not enabled by default. That could also explain the throughput: either the frames are having to be fragmented and then reassembled to cross the switch, or iperf is using the MSS to determine that it can only send 1500-byte frames. Your slower speed is about line rate for 1500-byte frames no matter the speed of the actual link.
ETA: you can verify this by pinging with a large payload and setting the “do not fragment” flag, so something like ‘ping -s 2000 -M do ip.addr’ on Linux; Windows uses different flags.
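Concretely, something like this (192.168.1.10 is just an example address; with a 2000-byte payload the ping should fail with a “message too long” style error anywhere the path is limited to 1500-byte frames):

```
# Linux: 2000-byte payload, do-not-fragment set
ping -s 2000 -M do 192.168.1.10

# Windows equivalent
ping -l 2000 -f 192.168.1.10
```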
Can you draw a picture of how you have all 3 switches connected, with all of the wires? I am suspicious that you are creating a switching loop, or that spanning tree isn’t picking the optimal link by accident, so I’m curious.
I would get one 2x32 kit somewhere you can return it (or even 1x32 if you are worried) and try it out; sometimes it works, but sometimes it won’t POST. Like the other person said, it might work, but there really isn’t a way to know for sure other than that. I have run into situations with systems like that where the listed maximum was just the largest DIMM available at release for them to test and validate, and larger DIMMs work fine, so it’s probably worth testing in my opinion.
I am curious myself; let me know if you do test it. Those look like cool machines for small clusters.
Please do! I am real curious now, this is definitely something weird haha!
No problem! I actually am still using an 8500k from when it was new, for what it’s worth; those things are great, no complaints.
That’s interesting; this definitely is an odd problem haha. Another wild idea: do you have jumbo frames enabled anywhere on your network?
As for the ISP, it might be; have you tried multiple VPN providers to see if the problem follows between them?
I have run into this issue a lot; I have always found that most of the tutorials set things up in isolation and never talk about integration points or how to build a whole solution.
On the MetalLB ConfigMap point, that’s another issue I have run into. In the earlier days of MetalLB it was configured differently and the ConfigMap was created automatically, but that has since changed; it took me a bit to figure out when that changed, as their docs aren’t explicit about it if I remember correctly. Annoying either way.
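For anyone who lands here confused by the same thing: newer MetalLB releases (v0.13+) dropped the ConfigMap in favor of CRDs, so a minimal L2 setup now looks something like this (the pool name and address range are just examples):

```
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
EOF
```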
I think the reason most tutorials turn off the firewall is that in a well-configured cloud environment like AWS, the host firewall is redundant due to security groups, and unfortunately that is what everyone targets the tutorials at; they never explain that, even with a note like “disable this if you have other mitigating controls in place” or something.
I have also wondered if we have finally reached the era where the majority of content creators and consumers have never touched an on-prem network and don’t even think through that lens anymore. Another good example of this is trying to configure MetalLB on a host with multiple interfaces that don’t have the same networks available (you know, like using dedicated interfaces for storage, like you should); for a long time it just wasn’t possible, and MetalLB would announce all networks on all interfaces, which made it basically non-functional heh. Whatever the reason is, you are not alone in being annoyed :D
Anyway, these are great points. I have been pondering writing up a larger set of tutorials about my setup, since it’s closer to a small enterprise than a homelab at this point; I should get on that hah.
Yeah, that is true; I had mixed results on video quality with the 7th gen ones, so I usually recommend at least 8th gen, but that may have improved by now.
As the other person said, NUCs and such are able to do transcodes via Intel QuickSync hardware acceleration; it’s not really possible to transcode 4K in real time on most CPUs without it.
You will need at least an 8th gen Intel processor to do HEVC, which is what h265 is also known as; more info is in this chart on Wikipedia about which generations support which things. Anecdotally, this has worked extremely well for me for a long time, definitely worth it.
Also be aware, if you are doing any virtualization, you will need to pass the iGPU through to the guest machine.
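And if you end up running the media server in a container instead of a full VM, the equivalent move is handing the iGPU device node to the container, something like this (Jellyfin and the paths here are just an example):

```
# Expose the Intel iGPU (QuickSync via /dev/dri) to the container
docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /path/to/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```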
You may be right about the processing power; that device was underpowered even when it was new. Do you have the VPN terminating on the USG or on the end device?
Also, do you have Smart Queues disabled on the WAN interface? Having that enabled causes all kinds of issues on higher-bandwidth connections.
You are right. More specifically, in case anyone is curious: it usually has to be whoever owns the public IP addresses, because in most circumstances that is who owns the reverse zone for that IP block according to the internet root DNS servers. In OP’s case you are probably right that it’s the VPS provider, but not always.
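You can see this for yourself with dig (203.0.113.10 is a documentation address; swap in the real IP):

```
# Look up the PTR record for an IP
dig -x 203.0.113.10 +short

# See which nameservers are delegated the reverse zone;
# whoever runs these controls the PTR records for the block
dig NS 113.0.203.in-addr.arpa +short
```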
They also make this one, which uses a CM4 but can control 4 machines! I have been eyeing it now that I can get CM4s again. Thanks for the post!