hey yeah, no stress!
just lemme know if you’d want someone to brainstorm with.
lemme know if you need some remote troubleshooting; if schedules permit, we can do a screenshare
I had this issue when I used Kubernetes; SATA SSDs can't keep up. I'm not sure what the Evo 980 is or what it's rated for, but I'd suggest shutting down all container I/O and doing a benchmark with fio.
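A minimal fio job file sketch for that kind of benchmark (the directory, size, and depth are assumptions; point it at the disk you actually want to test and adjust to taste):

```
; random-write IOPS test; run with: fio randwrite.fio
[global]
ioengine=libaio
direct=1
runtime=30
time_based

[randwrite-test]
rw=randwrite
bs=4k
iodepth=32
size=1G
directory=/mnt/ssd-under-test
```

Compare the reported IOPS against what the drive is rated for; if they're far apart even with containers stopped, the disk itself is the bottleneck.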
my current setup uses Proxmox: spinning rust configured in RAID 5 on a NAS, with Jellyfin in a container.
all Jellyfin container transcoding and cache is dumped on a WD 750 NVMe, while all media is stored on the NAS (max bandwidth is 150 MB/s)
you can monitor the I/O with iostat once you've done a benchmark.
I'd check for high I/O wait, especially if all of your VMs are on HDDs.
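To show which columns matter, here's a hedged sketch that parses a made-up sample line; in practice just run `iostat -x 2` and watch `await` and `%util` for the busy device (the device name and numbers below are invented for illustration):

```shell
# sample of one device-stats line from `iostat -x` (fabricated values)
sample='Device r/s w/s rkB/s wkB/s await %util
sda 12.0 85.0 480.0 9200.0 35.2 98.5'

# pull out await (avg I/O latency, ms) and %util (device saturation)
line=$(echo "$sample" | awk '$1=="sda" {print "await="$6, "util="$7}')
echo "$line"
```

A device pinned near 100% util with high await is the classic sign the HDDs can't keep up with the VM I/O.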
one of the solutions I had for this issue was to run multiple DNS servers. I solved it by buying a Raspberry Pi Zero W and running a second small Pi-hole instance there. I made sure the Pi Zero W is plugged into a separate circuit in my home.
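If your router or DHCP server runs dnsmasq, handing out both Pi-hole addresses to clients looks roughly like this (the two IPs are assumptions for illustration; substitute your actual Pi-hole addresses):

```
# advertise both Pi-holes as DNS servers via DHCP option 6
dhcp-option=option:dns-server,192.168.1.10,192.168.1.11
```

Clients will fall back to the second server if the first one is down, so a reboot of the main box no longer takes out name resolution for the whole house.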
the person you're replying to either lacks comprehension or just wants to be argumentative and doesn't want to understand.
I didn't have a problem with network ports (I use a switch). What I should've considered when purchasing was the number of drives (SATA ports) and the PCIe features (bifurcation, version, number of NVMe slots).
I need high IOPS for my research now, and I'm stuck with a RAID 0 of commodity SSDs across three SATA ports.
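As a rough back-of-the-envelope (the per-drive figure is an assumption; check your drive's datasheet), RAID 0 random-read IOPS scale roughly linearly with member count:

```shell
# assumed per-drive random-read IOPS for a commodity SATA SSD
per_drive=90000
drives=3
total=$((per_drive * drives))
echo "$total"   # ideal scaling only; md/controller overhead will eat into this
```

Even the ideal number is well short of a single decent NVMe drive, which is why the SATA-port count ended up mattering.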
hypervisor: Proxmox
VMs: RHEL 9.2