Hi.
I'm trying to run the Data Plane API as a "random" (non-root) user, started from a `program` section in haproxy.cfg.
Below is the debug output from the container start. Even though I pass `--log-level=trace` to the Data Plane API, I can't see any reason why the api program exits (which then takes the whole process down).
```
# Debug output with dataplane api
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:51:49_CET
/datadisk/container-haproxy $ sudo buildah bud --tag craftcms-hap .
STEP 1/4: FROM haproxytech/haproxy-ubuntu:2.9
STEP 2/4: COPY container-files/ /
STEP 3/4: RUN set -x && mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps /data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe && chown -R 1001:0 /data && chmod -R g=u /data && touch /data/haproxy/etc/dataplaneapi.yaml
+ mkdir -p /data/haproxy/etc /data/haproxy/run /data/haproxy/maps /data/haproxy/ssl /data/haproxy/general /data/haproxy/spoe
+ chown -R 1001:0 /data
+ chmod -R g=u /data
+ touch /data/haproxy/etc/dataplaneapi.yaml
STEP 4/4: USER 1001
COMMIT craftcms-hap
Getting image source signatures
Copying blob d101c9453715 skipped: already exists
Copying blob 5c32e8ef5ef0 skipped: already exists
Copying blob 5bbbd68c0c20 skipped: already exists
Copying blob 2f5b49454406 [--------------------------------------] 0.0b / 0.0b
Copying blob 83d27970fa5a [--------------------------------------] 0.0b / 0.0b
Copying blob 5a567c1d5233 done
Copying config 1ac0ae6824 done
Writing manifest to image destination
Storing signatures
--> 1ac0ae6824c
Successfully tagged localhost/craftcms-hap:latest
1ac0ae6824c91a9bc4fa1f19979c0b9dc672981fb82949429006d53252f8de9c
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:21_CET
/datadisk/container-haproxy $ sudo podman run -it --rm --network host --name haproxy craftcms-hap haproxy -f /data/haproxy/etc/haproxy.cfg -d
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
[NOTICE] (1) : New program 'api' (3) forked
[NOTICE] (1) : New worker (4) forked
[NOTICE] (1) : Loading success.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
time="2024-03-12T22:54:24Z" level=info msg="HAProxy Data Plane API v2.9.1 4d10854c"
time="2024-03-12T22:54:24Z" level=info msg="Build from: https://github.com/haproxytech/dataplaneapi.git"
time="2024-03-12T22:54:24Z" level=info msg="Reload strategy: custom"
time="2024-03-12T22:54:24Z" level=info msg="Build date: 2024-02-26T18:06:06Z"
00000000:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=<none>
00000000:GLOBAL.clicls[ffff:ffff]
00000000:GLOBAL.srvcls[ffff:ffff]
00000000:GLOBAL.closed[ffff:ffff]
00000001:GLOBAL.accept(0008)=0039 from [unix:1] ALPN=<none>
00000001:GLOBAL.clicls[ffff:ffff]
00000001:GLOBAL.srvcls[ffff:ffff]
00000001:GLOBAL.closed[ffff:ffff]
[NOTICE] (1) : haproxy version is 2.9.6-9eafce5
[NOTICE] (1) : path to executable is /usr/local/sbin/haproxy
[ALERT] (1) : Current program 'api' (3) exited with code 1 (Exit)  #<-- why does it exit?
[ALERT] (1) : exit-on-failure: killing every processes with SIGTERM
[ALERT] (1) : Current worker (4) exited with code 143 (Terminated)
[WARNING] (1) : All workers exited. Exiting... (1)
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:54:24_CET
/datadisk/container-haproxy $
```
When I start HAProxy without the `program api` block, HAProxy starts fine. If I then attach a second shell to the container and run the Data Plane API manually inside it, I can see that it connects to HAProxy and then stops immediately.
# shell 1
```
sudo podman run -it --rm --network host --name haproxy craftcms-hap haproxy -f /data/haproxy/etc/haproxy.cfg -d
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
[NOTICE] (1) : New worker (3) forked
[NOTICE] (1) : Loading success.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available filters :
[BWLIM] bwlim-in
[BWLIM] bwlim-out
[CACHE] cache
[COMP] compression
[FCGI] fcgi-app
[SPOE] spoe
[TRACE] trace
Using epoll() as the polling mechanism.
00000000:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=<none>
00000000:GLOBAL.clicls[ffff:ffff]
00000000:GLOBAL.srvcls[ffff:ffff]
00000000:GLOBAL.closed[ffff:ffff]
00000001:GLOBAL.accept(0008)=0038 from [unix:1] ALPN=<none>
00000001:GLOBAL.clicls[ffff:ffff]
00000001:GLOBAL.srvcls[ffff:ffff]
00000001:GLOBAL.closed[ffff:ffff]
```
# shell 2
```
alex@alex-tuxedoinfinitybooks1517gen7 on 12/03/2024 at 23:51:49_CET
~ $ sudo podman exec -it haproxy /bin/bash
1001@alex-tuxedoinfinitybooks1517gen7:/$ /usr/bin/dataplaneapi -f=/data/haproxy/etc/dataplaneapi.yaml --log-to=stdout --log-level=trace --spoe-dir=/data/haproxy/spoe --maps-dir=/data/haproxy/maps --ssl-certs-dir=/data/haproxy/ssl --general-storage-dir=/data/haproxy/general --host 0.0.0.0 --port 5555 --haproxy-bin /usr/sbin/haproxy --config-file /data/haproxy/etc/haproxy.cfg --reload-cmd "kill -SIGUSR2 1" --restart-cmd "kill -SIGUSR2 1" --reload-delay 5 --userlist haproxy-dataplaneapi --socket-path=/data/haproxy/run/data-plane.sock
time="2024-03-12T23:02:16Z" level=info msg="Build from: https://github.com/haproxytech/dataplaneapi.git"
time="2024-03-12T23:02:16Z" level=info msg="Build date: 2024-02-26T18:06:06Z"
time="2024-03-12T23:02:16Z" level=info msg="HAProxy Data Plane API v2.9.1 4d10854c"
time="2024-03-12T23:02:16Z" level=info msg="Reload strategy: custom"
1001@alex-tuxedoinfinitybooks1517gen7:/$
```
Is there any way to get more information out of the Data Plane API about why it's exiting?
I've attached the Dockerfile and the haproxy.cfg that I use.
Thanks for any help.
Best regards
Alex
Dockerfile:
```
FROM haproxytech/haproxy-ubuntu:2.9

COPY container-files/ /

RUN set -x \
    && mkdir -p /data/haproxy/etc \
                /data/haproxy/run \
                /data/haproxy/maps \
                /data/haproxy/ssl \
                /data/haproxy/general \
                /data/haproxy/spoe \
    && chown -R 1001:0 /data \
    && chmod -R g=u /data \
    && touch /data/haproxy/etc/dataplaneapi.yaml

USER 1001
```
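For context, the `chown -R 1001:0` / `chmod -R g=u` pair in the RUN step is the usual pattern for images that must run under an arbitrary UID: anyone in group 0 gets the same permissions as the owner. A standalone sketch of what `g=u` does (using a temp file instead of `/data`):

```shell
#!/bin/sh
# Demonstrate `chmod g=u`: copy the owner's permission bits onto the
# group, so any member of the file's group (group 0 in the image)
# gets owner-equivalent access. Uses a temp file instead of /data.
f=$(mktemp)
chmod u=rw,g=,o= "$f"   # owner-only to start: -rw-------
chmod g=u "$f"          # group now mirrors the owner: -rw-rw----
stat -c '%A' "$f"       # prints -rw-rw----
rm -f "$f"
```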
haproxy.cfg:
```
# copied from
# https://raw.githubusercontent.com/haproxytech/haproxy-docker-ubuntu/main/2.9/haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
#   https://www.haproxy.org/download/2.9/doc/configuration.txt
#   https://cbonte.github.io/haproxy-dconv/2.9/configuration.html
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    #log 127.0.0.1 local2
    log stdout format raw daemon debug
    pidfile /data/haproxy/run/haproxy.pid
    maxconn 4000

    # turn on stats unix socket
    stats socket /data/haproxy/run/stats mode 600 expose-fd listeners level admin

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# example how to define user and enable Data Plane API on tcp/5555
# more information: https://github.com/haproxytech/dataplaneapi and
# https://www.haproxy.com/documentation/hapee/2-0r1/configuration/dataplaneapi/
#---------------------------------------------------------------------
userlist haproxy-dataplaneapi
    user admin insecure-password mypassword

##
#program api
#    command /usr/bin/dataplaneapi -f=/data/haproxy/etc/dataplaneapi.yaml --log-to=stdout --log-level=trace --spoe-dir=/data/haproxy/spoe --maps-dir=/data/haproxy/maps --ssl-certs-dir=/data/haproxy/ssl --general-storage-dir=/data/haproxy/general --host 0.0.0.0 --port 5555 --haproxy-bin /usr/sbin/haproxy --config-file /data/haproxy/etc/haproxy.cfg --reload-cmd "kill -SIGUSR2 1" --restart-cmd "kill -SIGUSR2 1" --reload-delay 5 --userlist haproxy-dataplaneapi --socket-path=/data/haproxy/run/data-plane.sock
#    no option start-on-reload

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend craft-cms
    bind *:8080
    acl url_static path_beg -i /static /images /javascript /stylesheets
    acl url_static path_end -i .jpg .gif .png .css .js

    use_backend static if url_static
    default_backend app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance roundrobin
    # server static1 127.0.0.1:4331 check
    # server static2 127.0.0.1:4332 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance roundrobin
    # server app1 127.0.0.1:5001 check
    # server app2 127.0.0.1:5002 check
    # server app3 127.0.0.1:5003 check
    # server app4 127.0.0.1:5004 check
```
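One note on the `--reload-cmd "kill -SIGUSR2 1"` used above: in master-worker mode the HAProxy master process (PID 1 in this container) re-executes itself and reloads the configuration when it receives SIGUSR2. A stand-in sketch of that signal contract, with a shell trap playing the master's role:

```shell
#!/bin/sh
# Stand-in for HAProxy's reload contract: the master process reacts
# to SIGUSR2. Here a trap handler plays the master's role; the
# reload-cmd in the config sends the same signal to PID 1.
trap 'echo "reload requested"' USR2
kill -USR2 $$   # what `kill -SIGUSR2 1` does to the master
# prints "reload requested"
```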