I will reply to my own question, in case this is useful for somebody else.

I can now see that the manager will SSH directly into the cluster control node
and also call its API.

We normally keep the management components of ACS on a separate network
fabric, with no access between the two fabrics.

We are now allowing outbound SSH from the manager to the port range starting
at 2222 (one forwarded SSH port per node, up to the maximum cluster size) and
also to 6443 for the Kubernetes API.
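
For reference, a quick way to sanity-check that egress from the manager (a
minimal sketch in Python; the cluster public IP and the exact port list are
hypothetical, adjust them to your deployment):

#!/usr/bin/env python3
"""Probe the egress the manager needs towards the k8s cluster's public IP."""
import socket

CLUSTER_PUBLIC_IP = "203.0.113.10"   # hypothetical cluster public IP
PORTS = [2222, 2223, 6443]           # SSH port-forwards (one per node) + k8s API

for port in PORTS:
    try:
        with socket.create_connection((CLUSTER_PUBLIC_IP, port), timeout=5):
            print(f"{CLUSTER_PUBLIC_IP}:{port} reachable")
    except OSError as exc:
        print(f"{CLUSTER_PUBLIC_IP}:{port} blocked or unreachable: {exc}")

If any of these ports are blocked, I would expect the manager to keep waiting
on the control node, as in the log quoted below.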

Possibly there will be more requirements.

Also, for autoscaling, I guess the control node will need to access the
manager, and its traffic will appear to come from the k8s router's public IP.
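
To confirm that assumption, one could temporarily run a small listener on the
manager host and watch the source address of incoming connections (a sketch;
port 8080 is an arbitrary example, not the real API port):

#!/usr/bin/env python3
"""Tiny listener to see which source IP cluster traffic arrives from."""
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoSource(BaseHTTPRequestHandler):
    def do_GET(self):
        # If traffic from the cluster is source-NATed, this should print the
        # k8s virtual router's public IP rather than a node address.
        print(f"request from {self.client_address[0]}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

HTTPServer(("0.0.0.0", 8080), EchoSource).serve_forever()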

Our manager is not publicly exposed and uses a corporate SSL certificate; I
suspect that could be an issue.
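
A simple way to test that from a cluster node would be something like the
following (a sketch; the hostname and port are placeholders for our internal
setup):

#!/usr/bin/env python3
"""Check whether the manager's cert verifies with the node's default trust store."""
import socket
import ssl

MANAGER_HOST = "manager.example.internal"  # hypothetical manager hostname
MANAGER_PORT = 8443                        # hypothetical HTTPS port

context = ssl.create_default_context()     # uses the system CA bundle

try:
    with socket.create_connection((MANAGER_HOST, MANAGER_PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=MANAGER_HOST):
            print("certificate verified OK")
except ssl.SSLCertVerificationError as exc:
    # Expected if the corporate CA is not in the node's trust store.
    print(f"verification failed: {exc.verify_message}")

If this fails with a verification error, I suppose the corporate CA would have
to be added to the node images' trust store.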

R.

On Mon, 2022-05-16 05:31 PM, Rafael del Valle <[email protected]> wrote:
> Hi!
> 
> I managed to create a simple k8s cluster; all objects seem to be properly 
> created: the network and 2 nodes (control/worker).
> 
> However the cluster is stuck in "starting" state, the manager logs:
> 
> May 16 15:29:22 manager java[22480]: WARN  [o.a.c.f.j.i.AsyncJobMonitor] 
> (Timer-0:ctx-939c595d) (logid:3a42c3a6) Task (job-148) has been pending for 
> 1809 seconds
> May 16 15:29:34 manager java[22480]: INFO  [c.c.k.c.u.KubernetesClusterUtil] 
> (API-Job-Executor-80:ctx-6e7d6b62 job-148 ctx-6ab09455) (logid:64e73ca2) 
> Waiting for Kubernetes cluster : c1 control node VMs to be accessible
> 
> But if I SSH into the control node, I can see:
> 
> root@c1-control-180cd636216:/home/cloud# kubectl get nodes
> NAME                     STATUS   ROLES           AGE   VERSION
> c1-control-180cd636216   Ready    control-plane   23m   v1.24.0
> c1-node-180cd63b547      Ready    <none>          23m   v1.24.0
> 
> Any idea what could be going on?
> 
> R.
> 
