Hello!
I currently have a problem on my Kubernetes cluster.
I have 3 nodes:
- 192.168.0.16
- 192.168.0.65
- 192.168.0.55
I use an NFS storage class (sigs/nfs-subdir-external-provisioner) to provision volumes from an NFS server.
The NFS server is actually set up on 192.168.0.55, which is also a worker node.
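For reference, I deployed the provisioner roughly like this (the standard Helm chart; the release name and values below are a sketch of my setup rather than an exact copy):
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.0.55 --set nfs.path=/mnt/DiskArray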
I noticed that I have problems mounting volumes when a pod is created on the 192.168.0.55 node. If it's one of the other two, it mounts fine. (The error is actually a "permission denied" on the 192.168.0.55 node.)
I would guess that something goes wrong when kube tries to mount the NFS share since it's on the same machine?
Any idea how I can fix this? Cheers!
Could be trying to mount it loopback instead of by IP. What does your exports file look like? Can you do a mount from 192.168.0.55 manually?
Hello @theit8514
You are actually spot on ^^
I did look in my exports file, which was like so:
/mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)
I added a localhost line just in case:
/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)
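After editing I reloaded the exports (assuming the usual nfs-kernel-server setup, that's just):
sudo exportfs -ra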
It didn’t solve the problem. I went to investigate with the mount command:
- Will mount on 192.168.0.65:
  mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
- Will NOT mount on 192.168.0.55 (NAS):
  mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
- Will mount on 192.168.0.55 (NAS):
  mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test
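To double-check what the server exports to each client, showmount is handy (assuming the NFS client utilities are installed on the node):
showmount -e 192.168.0.55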
The mount -t nfs 192.168.0.55 variant is the one the cluster actually runs. So I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that might resolve better.

EDIT:
It was actually WAY simpler.
I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
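With the localhost line from earlier still in place, the full export line now reads like so, followed by a reload:
/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw) 192.168.0.55(rw)
sudo exportfs -ra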
Thanks a lot for your help @theit8514@lemmy.world!