For AI/ML workloads, VRAM is king.
As you are starting out, something older with lots of VRAM is better than something faster with less VRAM at the same price.
The 4060 Ti is a good baseline to compare against, since it has a 16GB variant.
The practical “minimum” VRAM for ML is around 10GB, and the more the better. Less VRAM can still be usable, but with sacrifices in speed and quality.
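As a rough back-of-envelope sketch (the function name and the ~20% overhead factor are my own assumptions, not a precise rule), you can estimate how much VRAM a model needs from its parameter count and numeric precision:

```python
# Rough VRAM estimate for running a model locally.
# Bytes per parameter depends on precision:
#   fp16 = 2 bytes, 8-bit quantized = 1, 4-bit quantized = 0.5
def vram_gb(params_billion, bytes_per_param, overhead=1.2):
    # overhead (~20%) is an assumed fudge factor for activations / KV cache
    return params_billion * bytes_per_param * overhead

# A 7B model in fp16:
print(round(vram_gb(7, 2), 1))    # ~16.8 GB -> tight even on a 16GB card
# The same model quantized to 4-bit:
print(round(vram_gb(7, 0.5), 1))  # ~4.2 GB -> fits comfortably in 10GB
```

This is why quantized models are popular on mid-range cards: precision drops a bit, but the VRAM requirement falls dramatically.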
If you find you like this stuff after a couple of months, you could sell the GPU you buy now and swap in something higher-end like a 4090.
AMD support is confusing: there is no official ROCm support for mid-range GPUs on Linux, but some people report that it works anyway.
There is also the new ZLUDA project, which enables running CUDA workloads on AMD GPUs via ROCm:
https://www.xda-developers.com/nvidia-cuda-amd-zluda/
I don’t have enough info to recommend AMD cards.
A modem translates the ISP’s signal (DSL, cable, or fiber) into Ethernet over twisted-pair cable.
An access point bridges that wired twisted-pair connection to Wi-Fi.
I think what you are looking for is an all-in-one router, which combines the router, switch, and access point (and sometimes the modem) in a single box.