GitHub user n4l5u0r added a comment to the discussion: Unable to have k8s running in CloudStack 4.20.0

It looks like the cluster is having issues here:

```bash
cloud@test-k8S-control-19a3a5cfc67:~$ sudo -i
root@test-k8S-control-19a3a5cfc67:~# kubectl get nodes
NAME                           STATUS   ROLES           AGE   VERSION
test-k8s-control-19a3a5cfc67   Ready    control-plane   63m   v1.32.5
root@test-k8S-control-19a3a5cfc67:~# kubectl get pods -A
NAMESPACE     NAME                                                   READY   STATUS              RESTARTS       AGE
kube-system   calico-kube-controllers-7bfdc5b57c-9vnph               0/1     ContainerCreating   0              56m
kube-system   calico-node-2q75j                                      0/1     CrashLoopBackOff    11 (25m ago)   56m
kube-system   coredns-668d6bf9bc-5v8n5                               0/1     Pending             0              63m
kube-system   coredns-668d6bf9bc-s8blq                               0/1     Pending             0              63m
kube-system   etcd-test-k8s-control-19a3a5cfc67                      1/1     Running             0              64m
kube-system   kube-apiserver-test-k8s-control-19a3a5cfc67            1/1     Running             0              64m
kube-system   kube-controller-manager-test-k8s-control-19a3a5cfc67   1/1     Running             0              64m
kube-system   kube-proxy-lkbjz                                       1/1     Running             0              63m
kube-system   kube-scheduler-test-k8s-control-19a3a5cfc67            1/1     Running             0              64m
```
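For anyone digging into this, the usual next step for a CrashLoopBackOff is to pull the crashed container's logs and the pod events. A minimal sketch, assuming the pod name `calico-node-2q75j` from the output above (it will differ on other clusters):

```bash
# Logs from the previous (crashed) run of the calico-node container.
kubectl -n kube-system logs calico-node-2q75j -c calico-node --previous

# Init-container status and recent events for the same pod.
kubectl -n kube-system describe pod calico-node-2q75j
```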

```bash
root@test-k8S-control-19a3a5cfc67:~# kubectl get event -A
NAMESPACE     LAST SEEN   TYPE      REASON              OBJECT                                          MESSAGE
kube-system   58m         Warning   FailedScheduling    pod/calico-kube-controllers-7bfdc5b57c-9vnph    0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system   58m         Normal    Scheduled           pod/calico-kube-controllers-7bfdc5b57c-9vnph    Successfully assigned kube-system/calico-kube-controllers-7bfdc5b57c-9vnph to test-k8s-control-19a3a5cfc67
kube-system   58m         Normal    SuccessfulCreate    replicaset/calico-kube-controllers-7bfdc5b57c   Created pod: calico-kube-controllers-7bfdc5b57c-9vnph
kube-system   58m         Normal    ScalingReplicaSet   deployment/calico-kube-controllers              Scaled up replica set calico-kube-controllers-7bfdc5b57c from 0 to 1
kube-system   58m         Normal    Scheduled           pod/calico-node-2q75j                           Successfully assigned kube-system/calico-node-2q75j to test-k8s-control-19a3a5cfc67
kube-system   58m         Normal    Pulled              pod/calico-node-2q75j                           Container image "docker.io/calico/cni:v3.30.0" already present on machine
kube-system   58m         Normal    Created             pod/calico-node-2q75j                           Created container: upgrade-ipam
kube-system   58m         Normal    Started             pod/calico-node-2q75j                           Started container upgrade-ipam
kube-system   58m         Normal    Pulled              pod/calico-node-2q75j                           Container image "docker.io/calico/cni:v3.30.0" already present on machine
kube-system   58m         Normal    Created             pod/calico-node-2q75j                           Created container: install-cni
kube-system   58m         Normal    Pulled              pod/calico-node-2q75j                           Container image "docker.io/calico/node:v3.30.0" already present on machine
kube-system   58m         Normal    Started             pod/calico-node-2q75j                           Started container mount-bpffs
kube-system   58m         Normal    Pulled              pod/calico-node-2q75j                           Container image "docker.io/calico/node:v3.30.0" already present on machine
kube-system   58m         Normal    Created             pod/calico-node-2q75j                           Created container: calico-node
kube-system   58m         Normal    SuccessfulCreate    daemonset/calico-node                           Created pod: calico-node-2q75j
kube-system   60m         Warning   FailedScheduling    pod/coredns-668d6bf9bc-5v8n5                    0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system   58m         Normal    Scheduled           pod/coredns-668d6bf9bc-5v8n5                    Successfully assigned kube-system/coredns-668d6bf9bc-5v8n5 to test-k8s-control-19a3a5cfc67
kube-system   60m         Warning   FailedScheduling    pod/coredns-668d6bf9bc-s8blq                    0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
kube-system   58m         Normal    Scheduled           pod/coredns-668d6bf9bc-s8blq                    Successfully assigned kube-system/coredns-668d6bf9bc-s8blq to test-k8s-control-19a3a5cfc67
```
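Note that the FailedScheduling warnings (60m/58m ago) predate the Scheduled events, so the node.kubernetes.io/not-ready taint likely cleared once the node went Ready; the remaining blocker looks like the crashing calico-node pod. A quick sketch to confirm whether the taint is still present, assuming the node name `test-k8s-control-19a3a5cfc67` from the output above:

```bash
# Show any taints currently set on the control-plane node.
kubectl describe node test-k8s-control-19a3a5cfc67 | grep -A3 -i taints

# Or query the taints directly via jsonpath.
kubectl get node test-k8s-control-19a3a5cfc67 -o jsonpath='{.spec.taints}'
```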


GitHub link: https://github.com/apache/cloudstack/discussions/11951#discussioncomment-14838974
