Running kubectl requires using sudo kubectl #513
Conversation
DCO check failing

Force-pushed from 8ec098e to 9a2b7ba
I haven't tested it myself, but otherwise LGTM.
I wanted it to fail when not using sudo. Previously we had a model where docker was set-uid (through the docker group). In the new model, both nerdctl and docker run rootless, unless you explicitly ask them to run as root (using sudo), so use the same model for Kubernetes. In the future, it might also run in fakeroot.
Change looks good to me, but the config doesn't actually work (tried twice):
Don't have time to look closer right now; please confirm that things work as-is for you!
Could it be a temporary network issue? We could pull the images explicitly, to make it more obvious.
Something that I do interactively is to run them with xargs (or parallel) to get some progress output.
Or cache the images locally, like previously discussed.

```
anders@lima-k8s:~$ sudo kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.1
k8s.gcr.io/kube-controller-manager:v1.23.1
k8s.gcr.io/kube-scheduler:v1.23.1
k8s.gcr.io/kube-proxy:v1.23.1
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
anders@lima-k8s:~$ sudo kubeadm config images list | xargs -n 1 sudo crictl pull
Image is up to date for sha256:b6d7abedde39968d56e9f53aaeea02a4fe6413497c4dedf091868eae09dcc320
Image is up to date for sha256:f51846a4fd28801f333d9a13e4a77a96bd52f06e587ba664c2914f015c38e5d1
Image is up to date for sha256:71d575efe62835f4882115d409a676dd24102215eee650bf23b9cf42af0e7c05
Image is up to date for sha256:b46c42588d5116766d0eb259ff372e7c1e3ecc41a842b0c18a8842083e34d62e
Image is up to date for sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
Image is up to date for sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
Image is up to date for sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
anders@lima-k8s:~$ sudo kubeadm version --output short
v1.23.1
anders@lima-k8s:~$ sudo kubeadm config images list > images.txt
anders@lima-k8s:~$ xargs sudo ctr -n k8s.io images export images.tar < images.txt
anders@lima-k8s:~$ du -hs images.tar
216M	images.tar
anders@lima-k8s:~$ sudo ctr -n k8s.io images import images.tar
unpacking k8s.gcr.io/kube-apiserver:v1.23.1 (sha256:f54681a71cce62cbc1b13ebb3dbf1d880f849112789811f98b6aebd2caa2f255)...done
unpacking k8s.gcr.io/kube-controller-manager:v1.23.1 (sha256:a7ed87380108a2d811f0d392a3fe87546c85bc366e0d1e024dfa74eb14468604)...done
unpacking k8s.gcr.io/kube-scheduler:v1.23.1 (sha256:8be4eb1593cf9ff2d91b44596633b7815a3753696031a1eb4273d1b39427fa8c)...done
unpacking k8s.gcr.io/kube-proxy:v1.23.1 (sha256:e40f3a28721588affcf187f3f246d1e078157dabe274003eaa2957a83f7170c8)...done
unpacking k8s.gcr.io/pause:3.6 (sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db)...done
unpacking k8s.gcr.io/etcd:3.5.1-0 (sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263)...done
unpacking k8s.gcr.io/coredns/coredns:v1.8.6 (sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e)...done
```

Other flags:
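The xargs idea from the comments above can also be parallelised for faster pulls. A minimal sketch, assuming kubeadm and crictl are installed on the guest as in the transcript; the `-P 4` value is an arbitrary choice, not anything from this PR:

```shell
# One crictl invocation per image (-n 1) so each pull prints its own
# progress line; -P 4 runs up to four pulls concurrently.
sudo kubeadm config images list | xargs -n 1 -P 4 sudo crictl pull
```

The per-invocation output is what gives the "progress" effect compared to a single silent bulk pull.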
Force-pushed from 91841d5 to 16b18fb
Signed-off-by: Anders F Björklund <[email protected]>
Force-pushed from 16b18fb to a5427dd
I still get the error from the provisioning script:
I could pull the image later manually, but of course the earlier error aborted the provisioning.

```
root@lima-k8s:/var/log# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.23.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.23.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.23.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.23.1
[config/images] Pulled k8s.gcr.io/pause:3.6
[config/images] Pulled k8s.gcr.io/etcd:3.5.1-0
[config/images] Pulled k8s.gcr.io/coredns/coredns:v1.8.6
```

No time to debug right now; will look more later...
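Since the pull works when retried manually, one way to make provisioning less sensitive to such transient failures is to retry the pull step. A sketch only, not part of this PR; the attempt count and delay are arbitrary:

```shell
# Retry the image pull a few times before giving up, so a transient
# network error does not abort the whole provisioning script.
for i in 1 2 3; do
  if kubeadm config images pull; then break; fi
  echo "image pull failed (attempt $i), retrying in 10 seconds..." >&2
  sleep 10
done
```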
At least it makes the network issue clearer, even if unrelated to this PR.
LGTM
Yes, I get the same error even with the old scripts, so no reason not to merge this.
A bit of a mismatch between k3s and k8s; this needs better documentation...
Make it easier to set up user config, and make `sudo kubectl` work as well.
i.e. running directly on the guest is supposed to require sudo, while running remotely from the host is configured to not use sudo.
k3s, when not using `sudo`:
k8s, without `$KUBECONFIG`:
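Setting up the user config on the guest can be sketched with the standard kubeadm steps. These paths are the kubeadm defaults, not something this PR introduces:

```shell
# Copy the admin kubeconfig into the current user's home so plain
# kubectl works for that user; sudo kubectl keeps using root's config.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```

With this in place, the guest keeps the intended split: unprivileged kubectl reads `~/.kube/config`, while root access still requires sudo.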