That might be due to your ISP’s routing and interconnects. They usually have good routes to big services and might lack good connections between home users in different countries or on different continents.
I did too, but shortly after decommissioning that server the drive became unresponsive. I really dodged a bullet without even realizing it at the time. SMART data did not work over that adapter; had it worked, it might have alerted me.
Also, unrelated to SMART data, the server failed to reboot because the USB-SATA adapter did not properly reset without a full power cycle (which did not happen with that mainboard's USB on reboots). It always got stuck searching for the drive. Restarting the server therefore meant shutting it down and calling someone to push the button for me, or using Wake-on-LAN, which thankfully worked but was still a dodgy workaround.
From what I read online that can lead to instabilities and was therefore disabled on Linux.
And you typically don't get SMART data through USB adapters.
Have a look at the Nextcloud logs and see if it complains about a trusted proxy or something similar. The IP range of a container network often changes between restarts, and that was a problem for me with my reverse proxy setup.
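What helped me was pinning the proxy network's subnet so the trusted_proxies entry stays valid across restarts. A minimal compose sketch; the network name and subnet here are just examples, not taken from your setup:

```yaml
# docker-compose.yml (fragment): give the proxy network a fixed subnet
# (the services that should use it still need a matching "networks:" entry)
networks:
  proxy:
    ipam:
      config:
        - subnet: 172.30.0.0/24   # pick any free private range

# then Nextcloud's config.php can safely keep:
#   'trusted_proxies' => ['172.30.0.0/24'],
```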
+1 for MTU and persistent keepalive. The latter helps if the connection drops after a certain amount of time and does not recover; the former is often the problem when the connection is intermittent or just "weird".
Setting the MTU requires knowing the MTU of your connection. Many ISPs deliver IPv4 encapsulated in IPv6 (Dual Stack Lite, I believe), meaning that from the regular packet size you have to subtract the overhead of the encapsulation and, if I remember correctly, also the per-packet overhead of WireGuard itself.
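For reference, this is the rule-of-thumb math and where the two settings live in a wg-quick config. Keys, addresses and hostnames are placeholders, and the 40/60-byte overheads are the commonly cited values; your link may differ:

```ini
# 1500 (Ethernet) - 40 (IPv4-in-IPv6 / DS-Lite) - 60 (WireGuard over IPv4: 20 IP + 8 UDP + 32 WG) = 1400
[Interface]
PrivateKey = <client private key>
Address = 10.0.0.2/24
MTU = 1400

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# send a keepalive every 25 s so NAT/firewall mappings don't expire and the tunnel recovers on its own
PersistentKeepalive = 25
```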
Yeah, that made a massive difference for me. Then again, it was unshielded cable, so what did I expect?
But they are also unshielded, so uncoil them entirely and do not run them next to other data lines. I had so many dropped packets because of that.
Intel's low-power offerings are sometimes even less power hungry than an RPi and handle more stuff. I like ASRock's line of CPU-onboard motherboards and use one myself. You get the convenience of a full x86 machine but it sips power. Mine peaks at ~36 W with full load on CPU, GPU, RAM and 4 SSDs or disks; usually it is much, much lower. You can always go smaller with an Atom x5-Z8300 (~2 W idle without disks or network, 6 W with both and some load), but those are getting a little old and newer stuff is better and more feature-rich.

Maybe an N100 machine with 4 or 8 gigs of RAM would be a good option for you? Don't go overboard with RAM if you are using Docker for everything anyway. I use 8 but 4 would be more than enough for me and my countless containers. I run Nextcloud, Jellyfin, Paperless-ngx, Resilio, PhotoPrism and a few more; only the Minecraft server benefits from more than four. Very happy with my J5005 board.
The encoder engine is the same across all Arc GPUs, meaning you can buy the lowest-end one and it has the same encoding/decoding performance as the top-tier one.
Fair point. These logs are just useless chatter for anyone with proper key auth anyway.
If you want to use Zigbee or Z-Wave, you simply need a USB dongle for that. Search for "Sonoff Zigbee Stick" if you want to see what one looks like. As you are already running Armbian, it should be easy to install Home Assistant with ZHA or zigbee2mqtt on it.
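For example, zigbee2mqtt in Docker only needs the stick passed through. A rough sketch; the device path is the usual one but check dmesg for yours, and you still need an MQTT broker plus a configuration.yaml in the data folder:

```yaml
services:
  zigbee2mqtt:
    image: koenkk/zigbee2mqtt
    restart: unless-stopped
    volumes:
      - ./zigbee2mqtt-data:/app/data
    devices:
      # the Sonoff stick usually shows up as /dev/ttyUSB0 or /dev/ttyACM0
      - /dev/ttyUSB0:/dev/ttyUSB0
    ports:
      - "8080:8080"   # web frontend, if enabled in configuration.yaml
```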
I also run Traefik and it never stutters. But getting it set up at ALL was a chore. I tried four times and failed, and each time I spent several days full-time on it. It's not that I skipped the docs; I actually am an RTFM kinda guy. But too much was implied in the docs and I never really felt like I knew why I was doing things. At least for me it was harder to set up than Nextcloud, Jellyfin, Gitea, Resilio and Vaultwarden COMBINED.
Some settings only work in a static config file, others only in a "dynamic" config file, and then there are container-specific labels too. It all needs to fit together, and error messages were of course hidden away in the Docker logs. You can attach labels to containers with and without escaping them, and choosing wrong sends you down several rabbit holes at once. The config structure is probably intuitive to Go devs, but that really ain't me. Oh, and there are also three different but equivalent formats for the config files.
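To illustrate the split for anyone reading along (a rough sketch; router names and hostnames are placeholders): entrypoints and providers go into the static file, while the actual routing for a container lives in its labels.

```yaml
# traefik.yml (static config): entrypoints and providers only
entryPoints:
  websecure:
    address: ":443"
providers:
  docker:
    exposedByDefault: false

# routing (dynamic config) as labels on the target container, compose fragment:
#   labels:
#     - "traefik.enable=true"
#     - "traefik.http.routers.nextcloud.rule=Host(`cloud.example.com`)"
#     - "traefik.http.routers.nextcloud.entrypoints=websecure"
```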
I read countless guides and it finally worked on attempt 5. All just because I liked the autoconf for all containers. I could have been done with reverse proxies within a day had I just chosen a different one.
Now I am even debating whether I should keep it at all, because I'd rather not mount the Docker socket into my reverse proxy, the one piece of software that is directly exposed to the web.
I am very happy with mine and have only ever had one hiccup during an update, which was due to my Dockerfile removing one dependency too many. I've run it bare metal (Apache, MariaDB) as well as containerized (derived custom image, Traefik, MariaDB). Both were okay speed-wise after applying all the steps from the documentation.
Having the database on your fastest drive is definitely very important. Whenever I look at htop while making big copies or moves, it’s always mariadb that’s shuffling stuff around.
In my opinion there are two things that make Nextcloud (appear) slow:
1. Managing the ton of metadata in the database that Nextcloud uses to provide its enhanced functionality.
2. It is/was a web page rendered mostly on the server.
The first issue is hard to tackle, because it is intrinsic and also has different optima for different deployment scales. Optimizing databases is beyond my skill set, so I stick to the recommendations.
The second issue is slowly being worked around: many applications in Nextcloud now resemble SPAs that are highly interactive and rendered by your browser. That reduces page reloads and makes it feel smoother.
All that said, I barely use the web interface, because I rarely use the collaboration features. If I have to create a share I usually do that in the app, because that's where I send the link to people. Most of my use case is just syncing files, calendars and contacts.