Hi Steffan,

On 06/10/2020 03:51 PM, mailingl...@tikklik.nl wrote:
> Just noticed your script is not completely correct:
> "Total number of mounts: 28169" is not counting as it should be.
Yes, I've already written it in my previous mail - there is a mistake in the script.

=================================================
> On 06/09/2020 01:32 PM, Konstantin Khorenko wrote:
> > Total number of mounts: 28169
>
> He-he, there is a mistake in the script - "Total number of mounts" prints the sum of PIDs.
> But ok, the real sum of mounts is 10000+.
=================================================
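For reference, a corrected variant of that one-liner, summing the per-namespace mount counts instead of the PIDs, could be something like this (a quick sketch, not re-tested):

# export total=0; for i in `lsns | grep mnt | awk '{print $4;}'`; do n=`cat /proc/$i/mounts | wc -l`; echo -en "PID: $i,\t# of mounts: $n,\tcmdline: "; cat /proc/$i/cmdline; echo ""; total=$((total + n)); done; echo "Total number of mounts: $total"

With the per-PID numbers you sent earlier, the corrected total would be 11414 instead of 28169. Note that a mount visible in several mount namespaces is still counted once per namespace, so this total is an upper bound rather than the number of distinct mounts.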
> But when looking at the mounts there is something strange: every pid has all mountpoints twice.
>
> cat /proc/1009/mounts | grep web1440
> /dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
> /dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 rw,relatime,data=ordered,balloon_ino=12,jqfmt=vfsv1,usrjquota=aquota.user,grpjquota=aquota.group 0 0
Are you sure that really *every* mount entry is duplicated?
If really *all* mounts are duplicated, then it's strange. But I guess not all of them are.

> /dev/ploop21699p1 /var/www/clients/client449/web1440/log ext4 ...

This is a bind mount, so I think someone just made two bind mounts to the same place, that's it.
Maybe you've started the software twice, and the first time it was not gracefully shut down and did not unmount the "old" mounts - honestly, I have no idea.

Or maybe on CT start someone did something like "mount -o bind / /"; after that, every mount in the CT will be shown twice due to propagation. Example:

[root@localhost ~]# mount -o bind / /
[root@localhost ~]# cat /proc/self/mountinfo | grep ploop
153 62 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
264 153 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
[root@localhost ~]# mount -o bind /var/log /mnt
[root@localhost ~]# cat /proc/self/mountinfo | grep ploop
153 62 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
264 153 182:503793 / / rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
269 153 182:503793 /var/log /mnt rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12
270 264 182:503793 /var/log /mnt rw,relatime shared:41 master:38 - ext4 /dev/ploop31487p1 rw,data=ordered,balloon_ino=12

Currently I do not see any sign that the issue is virtualization related; most probably, if you create the same setup on a Hardware Node, you'll get the same result.
(If you think it's virtualization related and we have some bug which does not trigger on vz6 but triggers on vz7 - just migrate old Containers from vz6 to vz7 and see the number of mounts.)

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
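P.S. Two quick checks you could run inside the Container (untested sketches, adjust the PID to taste):

# sort /proc/1009/mounts | uniq -dc

prints only the mount entries that occur more than once, together with their counts - if the list is short, only a few bind mounts are duplicated, not everything.

# awk '$4 == "/" && $5 == "/"' /proc/self/mountinfo | wc -l

counts the mountinfo entries whose root (field 4) and mount point (field 5) are both "/"; a value above 1 means something like "mount -o bind / /" did happen and propagation is duplicating the rest.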
Steffan

-----Original message-----
From: users-boun...@openvz.org <users-boun...@openvz.org> On behalf of Konstantin Khorenko
Sent: Tuesday, June 9, 2020 12:32
To: OpenVZ users <users@openvz.org>
Subject: Re: [Users] openvz 7> centos 8 container

On 06/09/2020 12:19 PM, mailingl...@tikklik.nl wrote:
> Hello Konstantin,
>
> 1: this is a centos 6 container, lsns is not there

I don't expect namespaces in centos 6 Containers, I think there is only one, but you can verify it:

# for i in /proc/[0-9]*/ns/mnt; do readlink $i; done | uniq

Just check the number of lines in the output.

And how many mounts are there in the CentOS 6 Container?
(Assuming only 1 mount namespace, just cat /proc/mounts | wc -l)

> 2:
> PID: 1,     # of mounts: 600,  cmdline: init -z
> PID: 707,   # of mounts: 1199, cmdline: /usr/lib/systemd/systemd-udevd
> PID: 724,   # of mounts: 1205, cmdline: /usr/sbin/NetworkManager --no-daemon
> PID: 11063, # of mounts: 1201, cmdline: /usr/sbin/httpd -DFOREGROUND
> PID: 10410, # of mounts: 1203, cmdline: /usr/libexec/postfix/master -w
> PID: 1118,  # of mounts: 1201, cmdline: /usr/libexec/mysqld --basedir=/usr
> PID: 1029,  # of mounts: 1201, cmdline: php-fpm: master process (/etc/opt/remi/php73/php-fpm.conf)
> PID: 1037,  # of mounts: 1201, cmdline: php-fpm: master process (/etc/opt/remi/php71/php-fpm.conf)
> PID: 1039,  # of mounts: 1201, cmdline: php-fpm: master process (/etc/opt/remi/php56/php-fpm.conf)
> PID: 1041,  # of mounts: 1202, cmdline: php-fpm: master process (/etc/php-fpm.conf)
>
> Total number of mounts: 28169

He-he, there is a mistake in the script - "Total number of mounts" prints the sum of PIDs. :)
But ok, the real sum of mounts is 10000+.

--
Konstantin

-----Original message-----
From: users-boun...@openvz.org <users-boun...@openvz.org> On behalf of Konstantin Khorenko
Sent: Tuesday, June 9, 2020 10:53
To: OpenVZ users <users@openvz.org>
Subject: Re: [Users] openvz 7> centos 8 container

On 06/09/2020 09:42 AM, mailingl...@tikklik.nl wrote:
> Is this a different setting on openvz6?

That's strange, the mount limit is present in vz6 as well.

At some point we faced a situation where stopping some Container took an enormous amount of time; we found out that there was software inside which "leaked" mounts. But this does not matter - it means any "bad guy" can create a lot of mounts and start/stop Containers, affecting other Containers on the same node (global locks are taken - namespace_sem, vfsmount_lock). Thus we've implemented the precaution limit on mounts.

Can you check the total number of mounts on 1) vz6 (the "old" server running a centos7 Container?) and 2) vz7 (the "new" server running a centos8 Container?)

# export total=0; for i in `lsns | grep mnt | awk -e '{print $4;}'`; do echo -en "PID: $i,\t# of mounts: "; echo -n `cat /proc/$i/mounts | wc -l`; echo -en ",\tcmdline: "; cat /proc/$i/cmdline; echo ""; total=$((total + $i)); done; echo "Total number of mounts: $total"

Thank you.
--
Konstantin

The old server is now running on a centos 7 vps.

-----Original message-----
From: users-boun...@openvz.org <users-boun...@openvz.org> On behalf of Konstantin Khorenko
Sent: Monday, June 8, 2020 23:07
To: OpenVZ users <users@openvz.org>
Subject: Re: [Users] openvz 7> centos 8 container

On 06/08/2020 09:15 PM, mailingl...@tikklik.nl wrote:
> If 4096 is the default, then I don't get why this error is there.
> It's 'only' 597:
>
> mount | wc -l
> 597

Most probably you have mount namespaces with more mounts inside.

Best regards,
Steffan

-----Original message-----
From: users-boun...@openvz.org <users-boun...@openvz.org> On behalf of Konstantin Khorenko
Sent: Monday, June 8, 2020 17:45
To: OpenVZ users <users@openvz.org>
Subject: Re: [Users] openvz 7> centos 8 container

On 06/08/2020 03:31 PM, mailingl...@tikklik.nl wrote:
> I now see on my node:
> kernel: CT#402 reached the limit on mounts.

You can increase the limit on mounts inside a Container via the sysctl "fs.ve-mount-nr" (4096 by default).

Warning: stopping a Container with many mounts inside can take quite a long time.
Say, if you have 200000 mounts in a Container, Container stop may take ~10 minutes.

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team

> So I think that is the problem.
> I see an old topic online:
> https://forum.openvz.org/index.php?t=rview&th=12902&goto=52002
> Any idea if that is the solution that is needed today?

*From:* users-boun...@openvz.org <users-boun...@openvz.org> *On behalf of* mailingl...@tikklik.nl
*Sent:* Monday, June 8, 2020 14:17
*To:* 'OpenVZ users' <users@openvz.org>
*Subject:* [Users] openvz 7> centos 8 container

Hello,

I installed a centos 8 openvz container. It was working, but after migrating my data from an older container I keep getting errors like this:

php71-php-fpm.service: Failed to set up mount namespacing: Cannot allocate memory
php71-php-fpm.service: Failed at step NAMESPACE spawning /opt/remi/php71/root/usr/sbin/php-fpm: Cannot allocate memory
php73-php-fpm.service: Failed to set up mount namespacing: Cannot allocate memory
php73-php-fpm.service: Failed at step NAMESPACE spawning /opt/remi/php73/root/usr/sbin/php-fpm: Cannot allocate memory
httpd.service: Failed to set up mount namespacing: Cannot allocate memory
httpd.service: Failed at step NAMESPACE spawning /usr/sbin/httpd: Cannot allocate memory

cat /proc/user_beancounters
Version: 2.5
resource        held      maxheld    barrier              limit                failcnt
kmemsize        92078080  121937920  9223372036854775807  9223372036854775807  0
lockedpages     0         0          9223372036854775807  9223372036854775807  0
privvmpages     52155     75857      9223372036854775807  9223372036854775807  0
shmpages        659       2636       9223372036854775807  9223372036854775807  0
dummy           0         0          9223372036854775807  9223372036854775807  0
numproc         39        39         4194304              4194304              0
physpages       97697     111964     9223372036854775807  9223372036854775807  0
vmguarpages     0         0          9223372036854775807  9223372036854775807  0
oomguarpages    97697     111964     0                    0                    0
numtcpsock      0         0          9223372036854775807  9223372036854775807  0
numflock        2         5          9223372036854775807  9223372036854775807  0
numpty          0         1          9223372036854775807  9223372036854775807  0
numsiginfo      0         57         9223372036854775807  9223372036854775807  0
tcpsndbuf       0         0          9223372036854775807  9223372036854775807  0
tcprcvbuf       0         0          9223372036854775807  9223372036854775807  0
othersockbuf    0         0          9223372036854775807  9223372036854775807  0
dgramrcvbuf     0         0          9223372036854775807  9223372036854775807  0
numothersock    0         0          9223372036854775807  9223372036854775807  0
dcachesize      51408896  72798208   9223372036854775807  9223372036854775807  0
numfile         711       995        9223372036854775807  9223372036854775807  0
dummy           0         0          9223372036854775807  9223372036854775807  0
dummy           0         0          9223372036854775807  9223372036854775807  0
dummy           0         0          9223372036854775807  9223372036854775807  0
numiptent       8         16         9223372036854775807  9223372036854775807  0

uname -r
3.10.0-1062.12.1.vz7.131.10

Any ideas what went wrong and how to repair it?

Thanks,
Steffan
_______________________________________________
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users