@ahasenack I just added the two test cases you mentioned in comment #54 to the Test Plan.
** Description changed:

[ Impact ]

 * On mantic and noble, when run as root, podman cannot stop any
   container running in the background, because crun is run under a new
   profile introduced in AppArmor v4.0.0 that has no corresponding
   signal receive rule in the container's profile.

 * Without the fix, users would have to resort to figuring out the
   container's PID 1 and killing it as root or from another privileged,
   unconfined process. This is a regression in basic podman
   functionality.

 * The fix adds signal receive rules for the OCI runtimes currently
   confined by AppArmor v4.0.0 (runc and crun) to the profile used by
   podman.

[ Test Plan ]

All commands must be invoked as root.

Run the tests below with both the crun and runc OCI runtimes. For crun,
nothing has to be changed (it is installed and used by default). For
runc, first install the runc package, and then insert the
"--runtime /usr/sbin/runc" arguments after "podman run".

Start a container in the background and then stop it:

# Run container in background (-d)
podman run -d --name foo docker.io/library/nginx:latest
# Stop the container
podman stop foo

On success, the last command should print the container name and the
container running in the background should be stopped (verify with
"podman ps").

Additional tests:

Verify that a container running on a foreground TTY can be stopped.

# Terminal 1:
# Run container on this TTY
podman run -it --name bar --rm docker.io/library/ubuntu:22.04

# Terminal 2:
# Stop the container
podman stop bar

On success, the last command should print the container name, the
process running in terminal 1 should stop, and the container should be
removed (verify with "podman ps -a").

Verify that a container running with a dumb init can be killed.

# Run container in background (-d) with dumb init
podman run -d --name bar --rm --init ubuntu:22.04 sleep infinity
# Stop the container
podman stop bar

On success, the last command should print the container name and the
container running in the background should be stopped and removed
(verify with "podman ps -a").

Verify that container processes can signal each other:

# Run container in foreground with processes sending signals between themselves
podman run ubuntu:22.04 sh -c 'sleep inf & sleep 1 ; kill $!'

On success, the last command should exit after about 1 second with exit
status 0.
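As an optional cross-check after the tests above (a sketch, assuming the
relevant messages are still in the kernel ring buffer), there should be
no new AppArmor signal denials:

# Should print nothing on success
dmesg | grep 'apparmor="DENIED"' | grep 'class="signal"'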
+ Verify the AppArmor profile contains the -apparmor1 suffix
+
+ Once you have a running container, you should be able to find the
+ AppArmor profile name in dmesg. It has to contain the -apparmor1
+ suffix.
+
+ Verify that podman is included in the reboot-required notification if
+ there are running containers
+
+ Run a container before upgrading the podman package. After the upgrade
+ containing the fix is done, podman must be included in the
+ reboot-required notification:
+
+ root@podman-reboot-test:~# podman run -d -e "POSTGRES_HOST_AUTH_METHOD=trust" docker.io/library/postgres
+ 0de2c6e3b9b3454fed43fc6cdd006942e821615d89f062ae5b3727d16ccc439b
+ root@podman-reboot-test:~# podman ps
+ CONTAINER ID  IMAGE                              COMMAND   CREATED        STATUS        PORTS  NAMES
+ 0de2c6e3b9b3  docker.io/library/postgres:latest  postgres  3 minutes ago  Up 3 minutes         optimistic_austin
+ root@podman-reboot-test:~# dpkg -l | grep podman
+ ii  podman  4.9.3+ds1-1ubuntu0.1  amd64  tool to manage containers and pods
+
+ [ Upgrade podman to the fixed version ]
+
+ root@podman-reboot-test:~# apt install ./podman_4.9.3+ds1-1ubuntu0.2_amd64.deb
+ [...]
+
+ [ Check whether the podman binary package name is present in
+ /var/run/reboot-required.pkgs ]
+
+ root@podman-reboot-test:~# cat /var/run/reboot-required.pkgs | grep podman
+ podman

[ Where problems could occur ]

 * The fix requires a rebuild of podman that will pull in any other
   changes in the archive since the last build, which could potentially
   break some functionality.

[ Other Info ]

Note for the SRU team: let's make sure to first accept golang-github-
containers-common and, once it is built and published, we can accept
libpod. Otherwise, the fix will not be applied to podman.

-
[ Original report ]

Mantic's system podman containers are completely broken due to bug
2040082. However, after fixing that (rebuilding with the patch, or a
*shht don't try this at home* hack [1]), the AppArmor policy still
causes bugs:

podman run -it --rm docker.io/busybox

Then "podman stop -l" fails with

2023-10-25T11:06:33.873998Z: send signal to pidfd: Permission denied

and the journal shows

audit: type=1400 audit(1698231993.870:92): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.50.1" pid=4713 comm="3" requested_mask="receive" denied_mask="receive" signal=term peer="/usr/bin/crun"

This leaves the container in a broken state:

# podman ps -a
CONTAINER ID  IMAGE                             COMMAND  CREATED         STATUS                      PORTS  NAMES
61749260f9c4  docker.io/library/busybox:latest  sh       40 seconds ago  Exited (-1) 29 seconds ago         confident_bouman

# podman rm --all
2023-10-25T11:07:21.428701Z: send signal to pidfd: Permission denied
Error: cleaning up container 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae: removing container 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae from runtime: `/usr/bin/crun delete --force 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae` failed: exit status 1

audit: type=1400 audit(1698232041.422:93): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.50.1" pid=4839 comm="3" requested_mask="receive" denied_mask="receive" signal=kill peer="/usr/bin/crun"

[1] sed -i 's/~alpha2/0000000/' /usr/sbin/apparmor_parser

Ubuntu 23.10
ii  apparmor                         4.0.0~alpha2-0ubuntu5  amd64  user-space parser utility for AppArmor
ii  golang-github-containers-common  0.50.1+ds1-4           all    Common files for github.com/containers repositories
ii  podman                           4.3.1+ds1-8            amd64  engine to run OCI-based containers in Pods

** Description changed:

[ Impact ]

 * On mantic and noble, when run as root, podman cannot stop any
   container running in the background, because crun is run under a new
   profile introduced in AppArmor v4.0.0 that has no corresponding
   signal receive rule in the container's profile.

 * Without the fix, users would have to resort to figuring out the
   container's PID 1 and killing it as root or from another privileged,
   unconfined process (see the sketch below). This is a regression in
   basic podman functionality.

 * The fix adds signal receive rules for the OCI runtimes currently
   confined by AppArmor v4.0.0 (runc and crun) to the profile used by
   podman.
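For illustration, such a manual workaround could look roughly like the
following sketch ("foo" is a placeholder container name, and whether the
signal gets through depends on the loaded profile):

# Find the host PID of the container's init process and signal it
# directly from an unconfined root shell, bypassing crun
pid=$(podman inspect --format '{{.State.Pid}}' foo)
kill -TERM "$pid"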
[ Test Plan ]

All commands must be invoked as root.

Run the tests below with both the crun and runc OCI runtimes. For crun,
nothing has to be changed (it is installed and used by default). For
runc, first install the runc package, and then insert the
"--runtime /usr/sbin/runc" arguments after "podman run".

Start a container in the background and then stop it:

# Run container in background (-d)
podman run -d --name foo docker.io/library/nginx:latest
# Stop the container
podman stop foo

On success, the last command should print the container name and the
container running in the background should be stopped (verify with
"podman ps").

Additional tests:

Verify that a container running on a foreground TTY can be stopped.

# Terminal 1:
# Run container on this TTY
podman run -it --name bar --rm docker.io/library/ubuntu:22.04

# Terminal 2:
# Stop the container
podman stop bar

On success, the last command should print the container name, the
process running in terminal 1 should stop, and the container should be
removed (verify with "podman ps -a").

Verify that a container running with a dumb init can be killed.

# Run container in background (-d) with dumb init
podman run -d --name bar --rm --init ubuntu:22.04 sleep infinity
# Stop the container
podman stop bar

On success, the last command should print the container name and the
container running in the background should be stopped and removed
(verify with "podman ps -a").

Verify that container processes can signal each other:

# Run container in foreground with processes sending signals between themselves
podman run ubuntu:22.04 sh -c 'sleep inf & sleep 1 ; kill $!'

On success, the last command should exit after about 1 second with exit
status 0.

Verify the AppArmor profile contains the -apparmor1 suffix

Once you have a running container, you should be able to find the
AppArmor profile name in dmesg. It has to contain the -apparmor1
suffix.
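One way to check this (a sketch; the exact version in the profile name
will differ, and it assumes the profile load messages are still in the
kernel log):

# Look for the container profile name; it should end with "-apparmor1"
dmesg | grep -o 'containers-default[^" ]*' | sort -u
# Alternatively, read the confinement of a running container's init
# process directly ("foo" is a placeholder container name)
cat /proc/"$(podman inspect --format '{{.State.Pid}}' foo)"/attr/current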
Verify that podman is included in the reboot-required notification if
there are running containers

Run a container before upgrading the podman package. After the upgrade
containing the fix is done, podman must be included in the
reboot-required notification:

root@podman-reboot-test:~# podman run -d -e "POSTGRES_HOST_AUTH_METHOD=trust" docker.io/library/postgres
0de2c6e3b9b3454fed43fc6cdd006942e821615d89f062ae5b3727d16ccc439b
root@podman-reboot-test:~# podman ps
CONTAINER ID  IMAGE                              COMMAND   CREATED        STATUS        PORTS  NAMES
0de2c6e3b9b3  docker.io/library/postgres:latest  postgres  3 minutes ago  Up 3 minutes         optimistic_austin
root@podman-reboot-test:~# dpkg -l | grep podman
ii  podman  4.9.3+ds1-1ubuntu0.1  amd64  tool to manage containers and pods

- [ Upgrade podman to the fixed version ]
+ ## Upgrade podman to the fixed version ##

root@podman-reboot-test:~# apt install ./podman_4.9.3+ds1-1ubuntu0.2_amd64.deb
[...]

- [ Check whether the podman binary package name is present in
- /var/run/reboot-required.pkgs ]
+ ## Check whether the podman binary package name is present in
+ /var/run/reboot-required.pkgs ##

root@podman-reboot-test:~# cat /var/run/reboot-required.pkgs | grep podman
podman

[ Where problems could occur ]

 * The fix requires a rebuild of podman that will pull in any other
   changes in the archive since the last build, which could potentially
   break some functionality.

[ Other Info ]

Note for the SRU team: let's make sure to first accept golang-github-
containers-common and, once it is built and published, we can accept
libpod. Otherwise, the fix will not be applied to podman.

[ Original report ]

Mantic's system podman containers are completely broken due to bug
2040082. However, after fixing that (rebuilding with the patch, or a
*shht don't try this at home* hack [1]), the AppArmor policy still
causes bugs:

podman run -it --rm docker.io/busybox

Then "podman stop -l" fails with

2023-10-25T11:06:33.873998Z: send signal to pidfd: Permission denied

and the journal shows

audit: type=1400 audit(1698231993.870:92): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.50.1" pid=4713 comm="3" requested_mask="receive" denied_mask="receive" signal=term peer="/usr/bin/crun"

This leaves the container in a broken state:

# podman ps -a
CONTAINER ID  IMAGE                             COMMAND  CREATED         STATUS                      PORTS  NAMES
61749260f9c4  docker.io/library/busybox:latest  sh       40 seconds ago  Exited (-1) 29 seconds ago         confident_bouman

# podman rm --all
2023-10-25T11:07:21.428701Z: send signal to pidfd: Permission denied
Error: cleaning up container 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae: removing container 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae from runtime: `/usr/bin/crun delete --force 61749260f9c4c96a51dc27fdd9cb8a86d80e4f2aa14eb7ed5b271791ff8008ae` failed: exit status 1

audit: type=1400 audit(1698232041.422:93): apparmor="DENIED" operation="signal" class="signal" profile="containers-default-0.50.1" pid=4839 comm="3" requested_mask="receive" denied_mask="receive" signal=kill peer="/usr/bin/crun"

[1] sed -i 's/~alpha2/0000000/' /usr/sbin/apparmor_parser

Ubuntu 23.10
ii  apparmor                         4.0.0~alpha2-0ubuntu5  amd64  user-space parser utility for AppArmor
ii  golang-github-containers-common  0.50.1+ds1-4           all    Common files for github.com/containers repositories
ii  podman                           4.3.1+ds1-8            amd64  engine to run OCI-based containers in Pods

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2040483

Title:
  AppArmor denies crun sending signals to containers (stop, kill)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/golang-github-containers-common/+bug/2040483/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs