I tried port-forwarding directly to the pod, and it didn't work:

C:\Users\nle>kubectl port-forward pod/ex-aao-ss-0 8161:8161 -n myproject
Forwarding from 127.0.0.1:8161 -> 8161
Forwarding from [::1]:8161 -> 8161
Handling connection for 8161
Handling connection for 8161
E0820 09:06:52.508157 13064 portforward.go:400] an error occurred forwarding 8161 -> 8161: error forwarding port 8161 to pod ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid : exit status 1: 2021/08/20 13:06:46 socat[31487] E connect(17, AF=2 127.0.0.1:8161, 16): Connection refused
E0820 09:06:52.522192 13064 portforward.go:400] an error occurred forwarding 8161 -> 8161: error forwarding port 8161 to pod ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid : exit status 1: 2021/08/20 13:06:46 socat[31488] E connect(17, AF=2 127.0.0.1:8161, 16): Connection refused
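
(For what it's worth, the socat line shows the tunnel terminating at
127.0.0.1:8161 inside the pod, so I think the refusal can be reproduced
without port-forward at all. A quick check, assuming curl is available in
the broker image:

C:\Users\nle>kubectl exec -n myproject ex-aao-ss-0 -- curl -v http://127.0.0.1:8161

I would expect this to fail with the same "Connection refused".)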
Since the bootstrap.xml indicates the web server is binding to
http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161, I also
tried adding "ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local" to my
Windows hosts file, mapping it to 127.0.0.1, and then accessing it from my
desktop browser, but it still doesn't work.

If I change the two ClusterIP services (*ex-aao-wconsj-0-svc* and
*ex-aao-wconsj-1-svc*) that expose 8161 for the two brokers to NodePort, I
can access the web console directly from outside the cluster without port
forwarding. I don't understand why that works but direct port forwarding to
the pod does not.

The purpose of this exercise is to make sure that, when developing the
application locally, we can point to Artemis running on a cluster and
observe/debug message distribution between multiple Artemis brokers. In
production we also need access to the console of each broker at the same
time for troubleshooting; my original thought was to just port-forward to
each pod when needed. Forgive my limited knowledge of Kubernetes, but as I
understand it, an ingress load-balances HTTP traffic, so at any one point in
time only one particular broker's console could be accessed (though see my
attempt at an ingress sketch in the P.S. at the bottom of this mail).

Thai Le

On Fri, Aug 20, 2021 at 12:14 AM Tim Bain <tb...@alumni.duke.edu> wrote:

> Can you port-forward directly to the individual pods successfully? If that
> doesn't work, then going through the service won't, so make sure that
> building block is working.
>
> Also, if you switch the service to be a NodePort service, can you hit the
> web console from outside the K8s cluster without the port-forward? And
> assuming that works, can you port-forward against that service
> successfully? I'm not proposing you make that a permanent change, just
> suggesting you try these variations to attempt to characterize the problem.
>
> One other question: long-term, do you plan to expose the web console port
> outside of the cluster? If so, you won't (shouldn't) be using kubectl
> port-forward for that, and you should probably be using an ingress proxy,
> so maybe just set that up and don't worry about getting the port-forward
> approach to work.
>
> Tim
>
> On Wed, Aug 18, 2021, 1:24 PM Thai Le <lnthai2...@gmail.com> wrote:
>
> > Thank you Justin for your suggestion.
> >
> > I looked at the bootstrap.xml of both broker nodes, and the binding is
> > set to the hostname of the pod:
> > <web bind="http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161" path="web">
> > <web bind="http://ex-aao-ss-1.ex-aao-hdls-svc.myproject.svc.cluster.local:8161" path="web">
> > So it makes sense that I got a connection refused when accessing the pod
> > from my desktop using localhost through port forwarding to the pod.
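> >
> > (To double-check that from inside the cluster, something like this run
> > against my debug pod should get an HTTP status back rather than a
> > refusal; this assumes curl is available in that image:
> >
> > kubectl exec -n myproject debug -- curl -sL -o /dev/null -w "%{http_code}" http://ex-aao-ss-0.ex-aao-hdls-svc.myproject.svc.cluster.local:8161
> > )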
> >
> > I also see that there are 3 Kubernetes services running: one for both
> > 8161 and 61616 (I think this is the main service that I can hit from the
> > JMS consumer) and two others for 8161 only, one per broker node (I
> > believe these are to allow clients outside Kubernetes to access the web
> > console by IP, given that routing from outside the cluster to the
> > service IP is present):
> >
> > kubectl get services -n myproject
> > NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
> > activemq-artemis-operator   ClusterIP   10.100.240.205   <none>        8383/TCP             46h
> > ex-aao-hdls-svc             ClusterIP   None             <none>        8161/TCP,61616/TCP   134m
> > ex-aao-ping-svc             ClusterIP   None             <none>        8888/TCP             134m
> > ex-aao-wconsj-0-svc         ClusterIP   *10.96.183.20*   <none>        8161/TCP             134m
> > ex-aao-wconsj-1-svc         ClusterIP   *10.98.233.91*   <none>        8161/TCP             134m
> >
> > Here is the description of the main service:
> >
> > kubectl describe service ex-aao-hdls-svc -n myproject
> > Name:              *ex-aao-hdls-svc*
> > Namespace:         myproject
> > Labels:            ActiveMQArtemis=ex-aao
> >                    application=ex-aao-app
> > Annotations:       <none>
> > Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app
> > Type:              ClusterIP
> > IP Family Policy:  SingleStack
> > IP Families:       IPv4
> > IP:                None
> > IPs:               None
> > Port:              console-jolokia  8161/TCP
> > TargetPort:        8161/TCP
> > Endpoints:         *10.1.0.30*:8161,*10.1.0.31*:8161
> > Port:              all  61616/TCP
> > TargetPort:        61616/TCP
> > Endpoints:         *10.1.0.30*:61616,*10.1.0.31*:61616
> > Session Affinity:  None
> > Events:            <none>
> >
> > And here are the descriptions of the other two services:
> >
> > kubectl describe service ex-aao-wconsj-0-svc -n myproject
> > Name:              ex-aao-wconsj-0-svc
> > Namespace:         myproject
> > Labels:            ActiveMQArtemis=ex-aao
> >                    application=ex-aao-app
> > Annotations:       <none>
> > Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,statefulset.kubernetes.io/pod-name=ex-aao-ss-0
> > Type:              ClusterIP
> > IP Family Policy:  SingleStack
> > IP Families:       IPv4
> > IP:                *10.96.183.20*
> > IPs:               *10.96.183.20*
> > Port:              wconsj-0  8161/TCP
> > TargetPort:        8161/TCP
> > Endpoints:         *10.1.0.30*:8161
> > Session Affinity:  None
> > Events:            <none>
> >
> > kubectl describe service ex-aao-wconsj-1-svc -n myproject
> > Name:              ex-aao-wconsj-1-svc
> > Namespace:         myproject
> > Labels:            ActiveMQArtemis=ex-aao
> >                    application=ex-aao-app
> > Annotations:       <none>
> > Selector:          ActiveMQArtemis=ex-aao,application=ex-aao-app,statefulset.kubernetes.io/pod-name=ex-aao-ss-1
> > Type:              ClusterIP
> > IP Family Policy:  SingleStack
> > IP Families:       IPv4
> > IP:                *10.98.233.91*
> > IPs:               *10.98.233.91*
> > Port:              wconsj-1  8161/TCP
> > TargetPort:        8161/TCP
> > Endpoints:         *10.1.0.31*:8161
> > Session Affinity:  None
> > Events:            <none>
> >
> > The two pods hosting the broker nodes are ex-aao-ss-0 and ex-aao-ss-1:
> >
> > kubectl get all -o wide -n myproject
> > NAME                                          READY   STATUS    RESTARTS   AGE    IP
> > pod/activemq-artemis-operator-bb9cf6567-qjdzs 1/1     Running   0          46h    10.1.0.6
> > pod/debug                                     1/1     Running   0          162m   10.1.0.29
> > pod/ex-aao-ss-0                               1/1     Running   0          155m   *10.1.0.30*
> > pod/ex-aao-ss-1                               1/1     Running   0          154m   *10.1.0.31*
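> >
> > (Side note: each wconsj service pins to a single pod through the
> > StatefulSet pod-name label in its selector, which is why each one lists
> > exactly one endpoint above. For example, this should return only
> > ex-aao-ss-0:
> >
> > kubectl get pods -n myproject -l statefulset.kubernetes.io/pod-name=ex-aao-ss-0
> > )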
> >
> > Hence, from another pod in the same cluster I can access the web console
> > (curl -L http://*ex-aao-hdls-svc*:8161), so I should be able to
> > port-forward using this service instead of the pod:
> >
> > C:\Users\nle>kubectl port-forward service/ex-aao-hdls-svc 8161:8161 -n myproject
> > Forwarding from 127.0.0.1:8161 -> 8161
> > Forwarding from [::1]:8161 -> 8161
> >
> > However, hitting http://localhost:8161 from my desktop still gives the
> > same error:
> >
> > Handling connection for 8161
> > Handling connection for 8161
> > E0818 14:51:30.135226 18024 portforward.go:400] an error occurred
> > forwarding 8161 -> 8161: error forwarding port 8161 to pod
> > ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> > exit status 1: 2021/08/18 18:51:26 socat[1906] E connect(17, AF=2
> > 127.0.0.1:8161, 16): Connection refused
> > E0818 14:51:30.136855 18024 portforward.go:400] an error occurred
> > forwarding 8161 -> 8161: error forwarding port 8161 to pod
> > ff6c35be7d3ace906d726b6abd38dbe122ddf2530647a44c799ca9a8a15ab245, uid :
> > exit status 1: 2021/08/18 18:51:26 socat[1907] E connect(17, AF=2
> > 127.0.0.1:8161, 16): Connection refused
> >
> > Do you have any other suggestions?
> >
> > Thai Le
> >
> >
> > On Wed, Aug 18, 2021 at 2:10 PM Justin Bertram <jbert...@apache.org>
> > wrote:
> >
> > > If the embedded web server which serves the console (as configured in
> > > bootstrap.xml) is bound to localhost then it will never be accessible
> > > from a remote machine. You need to bind it to an IP or hostname which
> > > is externally accessible.
> > >
> > >
> > > Justin
> > >
> > > On Tue, Aug 17, 2021 at 2:58 PM Thai Le <lnthai2...@gmail.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > I am not sure if questions regarding Artemis Cloud can be asked here,
> > > > but since I found no mailing list for Artemis Cloud and the Slack
> > > > channel needs an invitation to join, I'm going to try my luck here.
> > > >
> > > > I installed the Artemis operator and an ActiveMQArtemis with a
> > > > deployment plan of 2 brokers on my single-node Kubernetes
> > > > (docker-desktop). Here is the deployment:
> > > >
> > > > apiVersion: broker.amq.io/v2alpha5
> > > > kind: ActiveMQArtemis
> > > > metadata:
> > > >   name: ex-aao
> > > > spec:
> > > >   adminUser: brokerAdmin
> > > >   adminPassword: verySecret
> > > >   deploymentPlan:
> > > >     size: 2
> > > >     image: placeholder
> > > >     podSecurity:
> > > >       runAsUser: 0
> > > >   console:
> > > >     expose: true
> > > >     sslEnabled: false
> > > >
> > > > The 2 brokers are running, and I can curl the web console from
> > > > another pod in the same Kubernetes cluster. However, I cannot access
> > > > the web console from my desktop (http://localhost:8161/console). I
> > > > also tried to port-forward requests on port 8161 from my desktop to
> > > > one of the 2 Artemis pods, but that does not work either.
> > > >
> > > > I would appreciate it if anyone could give me a hint as to what may
> > > > be wrong, or a direction to an Artemis Cloud mailing list.
> > > >
> > > > Thai Le
> > > >
> >
> > --
> > Where there is will, there is a way
>
--
Where there is will, there is a way
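
P.S. On the ingress question above: reading the docs, it looks like a single
Ingress can route to both consoles at the same time by giving each broker's
service its own host rule, so it would not be limited to one broker's console
at a time. A rough, untested sketch; it assumes an ingress controller (e.g.
ingress-nginx) is installed, and the two hostnames are made up and would need
to resolve to the controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ex-aao-consoles               # made-up name
  namespace: myproject
spec:
  rules:
    - host: broker-0.artemis.example.com    # made-up hostname for broker 0
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ex-aao-wconsj-0-svc   # existing per-broker console service
                port:
                  number: 8161
    - host: broker-1.artemis.example.com    # made-up hostname for broker 1
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ex-aao-wconsj-1-svc
                port:
                  number: 8161

If that works, http://broker-0.artemis.example.com/console and
http://broker-1.artemis.example.com/console should each reach their own
broker, which would cover the production troubleshooting case without
port-forwarding to every pod.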