Another thing to remember is the client needs to support decoding the video in hardware or have enough CPU to handle it in software. I have an Intel i7 (3rd gen) with no hardware HEVC/x265 support, but it has enough CPU to power through.
Self-host your own ACME server. Then you can use certbot pointed there.
These instructions are a bit old, so there may be newer/better ways now: https://blog.sean-wright.com/self-host-acme-server/
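Rough sketch of the certbot side once your ACME server is up; the directory URL and domain below are placeholders for whatever your instance actually exposes:

    # Request a cert from a self-hosted ACME directory instead of Let's Encrypt
    certbot certonly --standalone \
      --server https://acme.example.lan/acme/directory \
      -d myservice.example.lan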
Is MariaDB on a spinning disk or an SSD?
I initially set up Nextcloud with MariaDB on a spinning disk, but it was slow even completely empty. I moved that container to an SSD & performance was a lot better. The web UI may still have some slow-loading parts, but I can't say for sure since I rarely use it. CalDAV + CardDAV + the Nextcloud client are how I usually interact with it.
Sounds like bridge mode is needed for the VM's network interface in virt.
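For reference, a minimal sketch of what a bridged NIC looks like in the libvirt domain XML, assuming a host bridge named br0 already exists (virt-manager can set the same thing from the NIC's network source dropdown):

    <!-- snippet from the VM's domain XML (virsh edit <vm-name>) -->
    <interface type='bridge'>
      <source bridge='br0'/>    <!-- host bridge the VM should attach to -->
      <model type='virtio'/>    <!-- paravirtualized NIC model -->
    </interface>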
I would say Proxmox VE is easier to start with.
The container method used should be whatever you are more familiar with or prefer. They both have their own quirks, pros, & cons.
SELinux - If you don't want to deal with SELinux, then set it to permissive mode. If you want to keep it in enforcing mode, you need to create the appropriate policies: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/configuring-selinux-for-applications-and-services-with-non-standard-configurations_using-selinux
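Quick sketch of both routes using the standard RHEL-family tooling (the policy module name is arbitrary):

    # Switch to permissive mode for the current boot
    sudo setenforce 0
    # Make it persistent by setting SELINUX=permissive in /etc/selinux/config

    # Or, to stay enforcing: build a local policy module from recent denials
    sudo ausearch -m AVC -ts recent | audit2allow -M mycontainer
    sudo semodule -i mycontainer.pp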
Firewall - If you don't want its protection, then look up instructions to stop & disable it on your distro.
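For example, on a firewalld-based distro (Fedora/RHEL and friends) that's just:

    # Stop the firewall now and keep it from starting on boot
    sudo systemctl disable --now firewalld
    # On ufw-based distros it's `sudo ufw disable` instead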
Port forwarding - From the Linux container side you either need to specify host networking or publish the ports you want to allow through; there is no avoiding that if it needs to be network accessible. If you want it internet accessible, then you also need to set up port forwarding on your router.
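Sketch of the two container-side options with podman (docker uses the same flags); the image and ports here are just placeholders:

    # Option 1: publish only the ports you need (host 8080 -> container 80)
    podman run -d --name web -p 8080:80 docker.io/library/nginx

    # Option 2: skip port mapping entirely and share the host's network stack
    podman run -d --name web --network host docker.io/library/nginx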
Have you looked into something like yunohost? It may be the kind of thing you’re looking for.
If your router supports it, a static route via a connected machine with IP forwarding enabled might work. OpenWrt has packages for things like Tailscale and ZeroTier, so you could do it without an extra machine too.
For 3, if your router supports it, you could also try a static route via a Tailscale-joined machine that has IP forwarding enabled.
If your router lets you, try adding a static route for the Tailscale IP/subnet that points at the laptop, with IP forwarding enabled on the laptop.
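Roughly, the laptop side is just enabling forwarding; the static route itself goes in the router's UI (addresses below are examples):

    # On the Tailscale-joined laptop: allow it to forward packets between LAN and tailnet
    sudo sysctl -w net.ipv4.ip_forward=1
    # Persist it across reboots
    echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf

    # On the router: static route for the tailnet range via the laptop's LAN IP, e.g.
    #   destination 100.64.0.0/10  ->  gateway 192.168.1.50 (the laptop)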
Was it the official container image or a 3rd-party one? Whichever it was, they should get notified so the init script can get fixed and prevent something similar from happening to others.
Intel Quick Sync Video saw a lot of improvements in 8th gen, & since it's all so old the pricing difference between 7th & 8th gen is going to be negligible.
Yep, 8th gen (Coffee Lake) saw a lot of improvements in Intel Quick Sync, https://en.wikipedia.org/wiki/Intel_Quick_Sync_Video#Hardware_decoding_and_encoding
For the SATA drive behavior, it's probably finishing the writes from the buffer. I like to use the iotop utility to watch storage I/O activity on my systems. You could try running it on both systems to get a better picture of what's going on.
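Something like this is usually enough to see who's hitting the disk (iotop generally needs root):

    # Show only processes/threads actually doing I/O, refreshing live
    sudo iotop -o
    # Accumulated totals per process instead of current bandwidth
    sudo iotop -o -a -P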
I currently use NFS and CIFS but have used iSCSI in the past. I like the simplicity of NFS & CIFS, and they meet my needs. iSCSI has its strengths, as others have stated.
Nothing stops you from running podman containers with full root access by creating & running them as root; you run them as whatever user you want. I've done it to troubleshoot containers on more than one occasion, usually when I want to play with VPNs or privileged ports but am too lazy to do it properly. The end goal for a lot of people, including myself, is to run as many things as non-root as possible. Why? Best practices around security have you give a service the minimal access & resources it needs to do its tasks. Some people allow traffic from the internet to their containers & they probably feel a little bit safer running those programs as non-root, since it can create an extra layer that may need to be broken to fully compromise a system.
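For illustration, the difference is really just which user creates the container (image and ports are placeholders), and the privileged-port case is one of the usual reasons to cheat and go rootful:

    # Rootless: runs under your user, can't bind ports below 1024 by default
    podman run -d -p 8080:80 docker.io/library/nginx

    # Rootful: same command run as root, can bind port 80 directly
    sudo podman run -d -p 80:80 docker.io/library/nginx

    # Alternative to going rootful just for a low port: lower the unprivileged port floor
    sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80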
Sounds like the drives are combined with RAID 5. It could be a hardware RAID card or software RAID as part of the BIOS. The server model number can be used to search for the administrator manual, which may have more info. If it's a hardware RAID card, then try to find the model number & search for its manual. If it's software RAID at the BIOS level, then the motherboard/server manual will cover it. There should be some messages and prompts during boot related to it. Terms to look for: 'RAID', 'storage controller', 'PERC', 'LSI'.
Most standalone APs can be plugged into the router and immediately start working; they'll forward along DHCP requests. You can turn off your router's wifi after they have been configured. For Unifi APs you only need the controller running when you want to manage/update the APs and for stats collection; I only power mine up to check for new firmware updates once a month. You can disable Unifi analytics/telemetry with a config file option too, but there's no way to do it via the web UI.
For VLANs you will need to configure the VLANs on OPNsense and the APs. Unifi lets you specify the management VLAN and a VLAN per SSID. For my setup I have VLAN 5 for the work SSID, 10 for mobile devices, 15 for IoT and other things that don't need internet, and 20 for a couple of temporary & guest SSIDs.
The Unifi APs are alright, but the controller software itself is fairly limited for stats/data; still better than other standard consumer APs I've used, though. I've been wanting to try out Grandstream WiFi APs as a replacement, since most models include a built-in controller capable of managing more than enough APs for my home use while still having the option of a standalone controller or cloud management. It's not a priority though, as my current APs still receive firmware updates.
Another benefit to LXC is that you can map devices, including the GPU, to multiple LXCs while keeping them accessible to the host. For my home setup I currently have 3 LXCs with access to the iGPU: 1 for Jellyfin + Caddy via nested podman, 1 for moonfire-nvr via nested podman, and 1 I've been using to try to figure out hardware transcoding with Owncast through multiple install methods, but no luck so far. I've also been playing with mapping RTL-SDR v3 devices, a Zigbee stick, a Z-Wave stick, and a Coral USB for a variety of projects lately.
edit: I forgot to answer the question and went straight to ranting, lol. LXC is like a bare-metal VM. You can install & run multiple things on them like a normal VM including podman or docker.
This project, https://neko.m1k1o.net/#/getting-started/examples , looks like a good base to try running regular GUI apps via docker & web.
edit: and here’s the git with Dockerfiles, https://github.com/m1k1o/neko-apps
On Proxmox you should be able to share any GPU (integrated or dedicated) with multiple LXCs while keeping it accessible to the host. I use the Intel integrated GPU in LXCs for Plex, Jellyfin, and one with just ffmpeg that I use to convert videos occasionally. I used these instructions as a starting point/base when I set mine up on Proxmox v7.x: https://forum.proxmox.com/threads/plex-hw-transcoding-lxc-and-jasper-lake-igpu-passthru.116163/
I had looked at instructions to assign the GPU to a specific VM but it looked like way too much work and people were saying it didn’t always work for the 11th gen iGPUs. Thankfully I ran across the sharing method and it’s been running stable since.
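The config lines involved look roughly like this in the container's config (/etc/pve/lxc/<id>.conf); treat it as a sketch, since the device major number (226 is the DRM subsystem) and paths should be verified against your own /dev/dri, and unprivileged containers usually also need the render group permissions sorted out inside the container:

    # /etc/pve/lxc/<id>.conf -- share the host's /dev/dri with the container
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir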
My info may be outdated, as I last had G Fiber about a year ago, but I've moved out of their service area so I'm stuck with AT&T fiber along with their horrible modem+router :(
When I first got the 2G down/1G up G Fiber service there was no bridge mode & I had to use their provided device as modem+router+wifi. They later updated it to add a bridge mode option, but I never tested it; I had dropped back down to 1G down & up before that option was available.
edit: forgot to mention I had read that some people had luck plugging G Fiber's 2.5G SFP-looking module into a Unifi Dream Machine, but I wasn't willing to spend any more money on anything Unifi besides WiFi APs.
I had issues with DNS checks and traced it to my Pi-hole. I changed that container's resolv.conf to use Cloudflare DNS and it has been working fine since. This was with Caddy, so I needed to change over to using IPs.
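The change itself is just the nameserver entries inside the container (Cloudflare shown here as the example resolver):

    # /etc/resolv.conf inside the container
    nameserver 1.1.1.1
    nameserver 1.0.0.1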