Can you port-forward directly to the individual pods successfully? If that
doesn't work, then going through the service won't either, so make sure that
building block is working first.
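
For example, something like this (using the pod names and namespace from
your output below; adjust as needed):

kubectl port-forward pod/ex-aao-ss-0 8161:8161 -n myproject

Then hit http://localhost:8161/console and see whether you get the same
connection-refused error as before.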

Also, if you switch the service to be a NodePort service, can you hit the
web console from outside the K8s cluster without the port-forward? And
assuming that works, can you port-forward against that service
successfully? I'm not proposing you make that a permanent change, just
suggesting you try these variations to attempt to characterize the problem.
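
As a sketch, something like this against one of the per-pod console services
(the headless service has no ClusterIP, so it can't be made a NodePort;
adjust the quoting if you're running this from a Windows shell):

kubectl patch service ex-aao-wconsj-0-svc -n myproject -p '{"spec":{"type":"NodePort"}}'
kubectl get service ex-aao-wconsj-0-svc -n myproject

The second command shows the node port Kubernetes assigned; then try
http://<node-ip>:<node-port>/console from your desktop (with docker-desktop
the node IP should just be localhost).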

One other question: long-term, do you plan to expose the web console port
outside of the cluster? If so, you won't (shouldn't) be using kubectl
port-forward for that, and you should probably be using an ingress proxy,
so maybe just set that up and don't worry about getting the port-forward
approach to work.
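
A minimal sketch, assuming you have an ingress controller (e.g.
ingress-nginx) installed in the cluster and using the service and namespace
names from your output below; the resource name here is made up:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: artemis-console    # hypothetical name
  namespace: myproject
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ex-aao-hdls-svc
                port:
                  number: 8161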

Tim

On Wed, Aug 18, 2021, 1:24 PM Thai Le <lnthai2...@gmail.com> wrote:

> Thank you Justin for your suggestion.
>
> I looked at the bootstrap.xml of both broker nodes and the binding is set
> to the hostname of the pod:
> <web bind="
> http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161";
> path="web">
> <web bind="
> http://ex-aao-ss-1.ex-aao-hdls-svc.myproject.svc.cluster.local:8161";
> path="web">
> So it makes sense that I got a "connection refused" error when accessing
> the pod from my desktop via localhost through the port-forward.
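>
> A quick way to confirm this (assuming curl is available in the broker
> image): loopback inside the broker pod is refused, while the FQDN it is
> bound to answers:
> kubectl exec -n myproject ex-aao-ss-0 -- curl http://127.0.0.1:8161
> kubectl exec -n myproject ex-aao-ss-0 -- curl http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161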
>
> I also see that there are 3 Kubernetes services running: one exposing both
> 8161 and 61616 (I think this is the main service that I can hit from the
> JMS consumer) and 2 others exposing only 8161, one per broker node (I
> believe these are to allow clients outside Kubernetes to access the web
> console by IP, given that routing from outside the cluster to the service
> IP is present):
> kubectl get services -n myproject
> NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
> activemq-artemis-operator   ClusterIP   10.100.240.205   <none>        8383/TCP             46h
> ex-aao-hdls-svc             ClusterIP   None             <none>        8161/TCP,61616/TCP   134m
> ex-aao-ping-svc             ClusterIP   None             <none>        8888/TCP             134m
> ex-aao-wconsj-0-svc         ClusterIP   *10.96.183.20*   <none>        8161/TCP             134m
> ex-aao-wconsj-1-svc         ClusterIP   *10.98.233.91*   <none>        8161/TCP             134m
>
> Here is the description of the main service:
> kubectl describe service ex-aao-hdls-svc -n myproject
> Name:              *ex-aao-hdls-svc*
> Namespace:         myproject
> Labels:            ActiveMQArtemis=ex-aao
>                    application=ex-aao-app
> Annotations:       <none>
> Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app
> Type:              ClusterIP
> IP Family Policy:  SingleStack
> IP Families:       IPv4
> IP:                None
> IPs:               None
> Port:              console-jolokia  8161/TCP
> TargetPort:        8161/TCP
> Endpoints:         *10.1.0.30*:8161,*10.1.0.31*:8161
> Port:              all  61616/TCP
> TargetPort:        61616/TCP
> Endpoints:         *10.1.0.30*:61616,*10.1.0.31*:61616
> Session Affinity:  None
> Events:            <none>
>
> And here is the description of the other 2 services:
> kubectl describe service ex-aao-wconsj-0-svc -n myproject
> Name:              ex-aao-wconsj-0-svc
> Namespace:         myproject
> Labels:            ActiveMQArtemis=ex-aao
>                    application=ex-aao-app
> Annotations:       <none>
> Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,
> statefulset.kubernetes.io/pod-name=ex-aao-ss-0
> Type:              ClusterIP
> IP Family Policy:  SingleStack
> IP Families:       IPv4
> IP:                *10.96.183.20*
> IPs:               *10.96.183.20*
> Port:              wconsj-0  8161/TCP
> TargetPort:        8161/TCP
> Endpoints:         *10.1.0.30*:8161
> Session Affinity:  None
> Events:            <none>
>
> kubectl describe service ex-aao-wconsj-1-svc -n myproject
> Name:              ex-aao-wconsj-1-svc
> Namespace:         myproject
> Labels:            ActiveMQArtemis=ex-aao
>                    application=ex-aao-app
> Annotations:       <none>
> Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,
> statefulset.kubernetes.io/pod-name=ex-aao-ss-1
> Type:              ClusterIP
> IP Family Policy:  SingleStack
> IP Families:       IPv4
> IP:                *10.98.233.91*
> IPs:               *10.98.233.91*
> Port:              wconsj-1  8161/TCP
> TargetPort:        8161/TCP
> Endpoints:         *10.1.0.31*:8161
> Session Affinity:  None
> Events:            <none>
>
> The 2 pods hosting the broker nodes are ex-aao-ss-0 and ex-aao-ss-1:
> kubectl get all -o wide -n myproject
> NAME                                          READY   STATUS    RESTARTS   AGE    IP
> pod/activemq-artemis-operator-bb9cf6567-qjdzs 1/1     Running   0          46h    10.1.0.6
> pod/debug                                     1/1     Running   0          162m   10.1.0.29
> pod/ex-aao-ss-0                               1/1     Running   0          155m   *10.1.0.30*
> pod/ex-aao-ss-1                               1/1     Running   0          154m   *10.1.0.31*
>
> Hence, from another pod in the same cluster I can access the web console:
> curl -L http://ex-aao-hdls-svc:8161, so I should be able to port forward
> using this service instead of the pod:
> C:\Users\nle>kubectl port-forward service/ex-aao-hdls-svc 8161:8161 -n
> myproject
> Forwarding from 127.0.0.1:8161 -> 8161
> Forwarding from [::1]:8161 -> 8161
>
> However, hitting http://localhost:8161 from my desktop still gives the same
> error:
>
> Handling connection for 8161
> Handling connection for 8161
> E0818 14:51:30.135226   18024 portforward.go:400] an error occurred
> forwarding 8161 -> 8161: error forwarding port 8161 to pod
> ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> exit status 1: 2021/08/18 18:51:26 socat[1906] E connect(17, AF=2
> 127.0.0.1:8161, 16): Connection refused
> E0818 14:51:30.136855   18024 portforward.go:400] an error occurred
> forwarding 8161 -> 8161: error forwarding port 8161 to pod
> ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> exit status 1: 2021/08/18 18:51:26 socat[1907] E connect(17, AF=2
> 127.0.0.1:8161, 16): Connection refused
>
> Do you have any other suggestion?
>
> Thai Le
>
>
> On Wed, Aug 18, 2021 at 2:10 PM Justin Bertram <jbert...@apache.org>
> wrote:
>
> > If the embedded web server which serves the console (as configured in
> > bootstrap.xml) is bound to localhost then it will never be accessible from
> > a remote machine. You need to bind it to an IP or hostname which is
> > externally accessible.
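> >
> > For example (just a sketch, not tested against your deployment), binding
> > the embedded web server to all interfaces in bootstrap.xml would look
> > something like:
> > <web bind="http://0.0.0.0:8161" path="web">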
> >
> >
> > Justin
> >
> > On Tue, Aug 17, 2021 at 2:58 PM Thai Le <lnthai2...@gmail.com> wrote:
> >
> > > Hello,
> > >
> > > I am not sure if questions regarding Artemis Cloud can be asked here,
> > > but since I found no mailing list for Artemis Cloud and the Slack
> > > channel needs an invitation to join, I'm going to try my luck here.
> > >
> > > I installed the Artemis operator and an ActiveMQArtemis with a
> > > deployment plan of 2 brokers on my single-node Kubernetes
> > > (docker-desktop); here is the deployment:
> > >
> > > apiVersion: broker.amq.io/v2alpha5
> > > kind: ActiveMQArtemis
> > > metadata:
> > >   name: ex-aao
> > > spec:
> > >   adminUser: brokerAdmin
> > >   adminPassword: verySecret
> > >   deploymentPlan:
> > >     size: 2
> > >     image: placeholder
> > >     podSecurity:
> > >       runAsUser: 0
> > >   console:
> > >     expose: true
> > >     sslEnabled: false
> > >
> > > The 2 brokers are running and I can curl the web console from another
> > > pod in the same Kubernetes cluster. However, I cannot access the web
> > > console from my desktop (http://localhost:8161/console). I also tried
> > > to port forward requests to port 8161 from my desktop to one of the 2
> > > Artemis pods, but it does not work either.
> > >
> > > I would appreciate it if anyone could give me a hint as to what may be
> > > wrong, or a pointer to the Artemis Cloud mailing list.
> > >
> > > Thai Le
> > >
> >
>
>
> --
> Where there is will, there is a way
>
