Great setup! Be careful with the SSD though, Proxmox likes to eat those for fun with all those small but numerous writes. A used, small-capacity enterprise SSD can be had for cheap.
I tried this. I put a DNS override for google.com on one but not the other AdGuard instance. Then I did a DNS lookup, and the answer (IP) changed randomly from the correct one to the one I used for the override. I’m assuming the same goes for the scenario with the public DNS as well. In any case, the response delay should be similar, since the local Pi-hole instance has to contact the upstream DNS server anyway.
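For reference, this is easy to reproduce with dig (the addresses are placeholders for the two AdGuard instances):

```
# Ask each instance directly; hypothetical LAN IPs
dig +short google.com @192.168.1.10   # instance with the override
dig +short google.com @192.168.1.11   # instance without it

# Ask via the normal resolver path; the answer alternates between the two
dig +short google.com
```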
I see, thanks for clearing that up.
Sounds like I’ll do just that, thanks. Should I move all public-facing services to that DMZ, or is it enough to just isolate Traefik?
Only Nextcloud is externally available so far, maybe I’ll add Vaultwarden in the future.
I would like to use a VPN, but my family is not tech literate enough for this to work reliably.
I want to protect these public-facing services by using an isolated Traefik instance in conjunction with Cloudflare and CrowdSec.
Both public and local services. I have limited hardware for now, so I’m still using my ISP router as my WLAN AP. Not the best solution, I know, but it works and I can separate my Home-WLAN from my Guest-WLAN easily.
I want to use an AP at some point in the future, but I’d also need a managed switch as well as the AP itself. Unfortunately, that’s not in my budget for now.
Thank you so much for your kind words, very encouraging. I like to do some research alongside my tinkering, and I like to challenge myself. I don’t even work in the field, but I find it fascinating.
The ZTA is/was basically what I was aiming for. With all those replies, I’m not so sure it is really needed anymore. I have a NAS with my private files and a Nextcloud with the same. The only really critical thing will be my Vaultwarden instance, to which I want to migrate from my current KeePass setup. And this got me thinking about how to secure things properly.
I mostly found networking easy to learn by disabling all traffic and then watching the OPNsense logs. Oh, my PC uses this and that port to print on this interface? Cool, I’ll add that. My server needs access to the SMB port on my NAS? Added. I followed this logic through, which in total got me around 25-30 firewall rules making heavy use of aliases, plus a handful of floating rules.
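For anyone doing the same: besides the GUI’s live view, the blocked traffic can also be watched from a shell, since OPNsense is built on pf (the host filter below is just an example):

```
# Watch firewall decisions live on the pf log interface
tcpdump -n -e -ttt -i pflog0

# Narrow it down to a single device, e.g. a hypothetical NAS
tcpdump -n -e -ttt -i pflog0 host 192.168.10.5
```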
My goal is to have the control over my networking on my OPNsense box. There, I can easily log in, watch the live log and figure out what to allow and what not. And it’s damn satisfying to see things being blocked. No more unknown probes on my Nextcloud instance (or at least far fewer).
The question I still haven’t answered to my satisfaction is whether I build a strict ZTA or fall back to a more relaxed approach like you outlined with your VMs. You seem knowledgeable. What would you do for a basic homelab setup (Nextcloud, Jellyfin, Vaultwarden and such)?
This sounds promising. If I understand correctly, you have a ton of networks declared in your proxy, one per service. So if I have Traefik as my proxy, I’d create traefik-nextcloud, traefik-jellyfin and traefik-portainer as my networks, make them externally available, and assign each service its respective network. Did I get that right?
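In plain docker commands, I imagine the plumbing would look like this (container and network names are just my guesses at the setup):

```
# One isolated network per service
docker network create traefik-nextcloud
docker network create traefik-jellyfin

# Traefik joins all of them, each service joins only its own
docker network connect traefik-nextcloud traefik
docker network connect traefik-nextcloud nextcloud
docker network connect traefik-jellyfin traefik
docker network connect traefik-jellyfin jellyfin
```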
I’ve read about those two distinctions, but I am simply lacking the number of ports on my little firewall box. I still only allow access to management from my PC, nothing else, so I feel good enough here. This is all more a little project for me to tinker on, nothing serious.
Your explanation with trust makes sense. I will simply keep my current setup but put different VMs on different VLANs. Then I can separate my local services from my public services, as well as isolate any testing VMs.
I’ve read that one should use one proxy instance for local access and one for public services with internet access. Is it enough to just isolate that public proxy or must I also put the services behind that proxy into the DMZ?
Thank you for your good explanation.
Thanks for your input. Am I understanding right that all devices in one VLAN can communicate with each other without going through a firewall? Is that best practice? I’ve read so many different opinions that it’s hard to tell.
Ah, I did not know that. So I guess I will create several VLANs with different subnets. This works as I intended it: traffic coming from one VM has to go through OPNsense.
Now I just have to figure out if I’m being too paranoid. Should I simply group several devices together (e.g., 10 = servers, 20 = PCs, 30 = IoT; this is what I see mostly being used) or should I sacrifice usability for a more fine-grained segregation (each server gets its own VLAN)? Seems overkill, now that I think about it.
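At least the tagging side is cheap to change later. In Proxmox it’s one command per VM (the VM IDs are made up, assuming a VLAN-aware vmbr0):

```
# Put VM 100 on VLAN 10 (servers) and VM 101 on VLAN 20 (PCs)
qm set 100 --net0 virtio,bridge=vmbr0,tag=10
qm set 101 --net0 virtio,bridge=vmbr0,tag=20
```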
Nevermind, I am an idiot. Your comment gave me pause, so I checked my testing procedure again. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about 10 times. Everything is fine now. THANKS!
Both machines are easily capable of reaching around 2.2 Gbps. I can’t reach the full 2.5 Gbps even with iperf. I tried some tuning but that didn’t help, so it’s fine for now. I used iperf3 -c xxx.xxx.xxx.xxx, nothing else.
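If I ever revisit the tuning, a multi-stream run should at least show whether a single TCP stream is the bottleneck:

```
# Four parallel streams instead of the default single stream
iperf3 -c xxx.xxx.xxx.xxx -P 4
```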
The slowdown MUST be related to ZFS, since LVM as a storage base can reach the “full” 2.2 Gbps when used as an SMB share.
It’s videos, pictures, music and other data as well. I’ll try playing around with compression today, see if disabling it helps at all. The CPU has 8C/16T and the container 2C/4T.
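Concretely, I’ll check and toggle it like this (the dataset name is just an example):

```
zfs get compression tank/share       # see what is currently set
zfs set compression=off tank/share   # only affects newly written data
zfs set compression=lz4 tank/share   # revert if it makes no difference
```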
The disk is owned by the PVE host and then given to the container (not a VM) as a mount point. I could use PCIe passthrough, sure, but using a container seems to be the more efficient way.
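It’s a plain bind mount, something along these lines (container ID and paths are examples):

```
# Bind-mount a host directory into LXC container 101
pct set 101 -mp0 /tank/share,mp=/srv/share
```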
I meant megabytes (I hope that’s correct, I always mix them up). I transferred large video files, both when the file system was ZFS and when it was LVM, yet got different transfer speeds. The files were between 500 MB and 1.5 GB in size.
I don’t think it’s the CPU, as I am able to reach max speed, just not when using ZFS…
Good point. I used fio with different block sizes:
fio --ioengine=libaio --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=seq_read --filename=/dev/sda
4K = IOPS=41.7k, BW=163MiB/s (171MB/s)
8K = IOPS=31.1k, BW=243MiB/s (254MB/s)
32K = IOPS=13.2k, BW=411MiB/s (431MB/s)
512K = IOPS=809, BW=405MiB/s (424MB/s)
1M = IOPS=454, BW=455MiB/s (477MB/s)
I’m gonna be honest though, I have no idea what to make of these values. Seemingly, the drive is capable of maxing out my network. The CPU shouldn’t be the problem either, it’s an i7-10700.
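Though writing this out, I realize reading /dev/sda directly bypasses ZFS entirely, so these numbers only show what the raw disk can do. To measure the ZFS path itself I’d have to point fio at a file on the dataset, roughly like this (the path is an example):

```
# Sequential read through the ZFS dataset instead of the raw device;
# --direct=1 is dropped because O_DIRECT on ZFS behaves differently
fio --ioengine=libaio --rw=read --bs=1M --numjobs=1 --iodepth=1 \
    --size=4G --runtime=60 --time_based --name=zfs_seq_read \
    --filename=/tank/share/fio-test.bin
```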
Tubearchivist works great for me. Downloader, database and player, all in one. Even integration with Jellyfin is possible, not sure about Plex though.
Awesome, I’m just getting into restic!