On 23.12.2021 at 12:12, raf wrote:
> On Thu, Dec 23, 2021 at 09:52:05AM +0100, natan <na...@epf.pl> wrote:
>
>> On 23.12.2021 at 01:53, raf wrote:
>>> On Wed, Dec 22, 2021 at 11:25:10AM +0100, natan <na...@epf.pl> wrote:
>>>
>>>> On 21.12.2021 at 18:15, Wietse Venema wrote:
>>>> 10.x.x.10 is a Galera cluster with 3 nodes (and max_con set to 1500
>>>> on each node)
>>>>
>>>> when I get this error I check the number of connections
>>>>
>>>> smtpd : 125
>>>>
>>>> smtp      inet  n       -       -       -       1       postscreen
>>>> smtpd     pass  -       -       -       -       -       smtpd -o
>>>> receive_override_options=no_address_mappings
>>>>
>>>> and in total (amavis + lmtp-dovecot + smtpd -o
>>>> receive_override_options=no_address_mappings): 335,
>>>> from: ps -e|grep smtpd |wc -l
>>>>
>>>>>> but:
>>>>>> for the local lmtp port 10025 - 5 connections
>>>>>> for incoming from amavis, port 10027 - 132 connections
>>>>>> smtpd - 60 connections
>>>>>> ps -e|grep smtpd - 196 connections
>>>>> 1) You show two smtpd process counts. What we need are the
>>>>> internet-related smtpd process counts.
>>>>>
>>>>> 2) Network traffic is not constant. What we need are process counts
>>>>> at the time that postscreen logs the warnings.
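>>>>>
>>>>> For example, something like this, captured at the moment the warning
>>>>> is logged, would do; the grep pattern is only a guess based on the
>>>>> master.cf excerpt above:
>>>>>
>>>>>   # total smtpd processes, then the internet-facing ones (assuming
>>>>>   # only the post-postscreen smtpd carries that override option)
>>>>>   ps -eo args | grep '[s]mtpd' | wc -l
>>>>>   ps -eo args | grep '[s]mtpd' | grep -c no_address_mappings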
>>>>>
>>>>>>> 2) Your kernel cannot support the default_process_limit of 1200.
>>>>>>> In that case a higher default_process_limit would not help. Instead,
>>>>>>> kernel configuration or more memory (or both) would help.
>>>>>> 5486 ?        Ss     6:05 /usr/lib/postfix/sbin/master
>>>>>> cat /proc/5486/limits
>>>>> Those are PER-PROCESS resource limits. I just verified that postscreen
>>>>> does not run into the "Max open files" limit of 4096 as it tries
>>>>> to hand off a connection, because that would result in an EMFILE
>>>>> (Too many open files) kernel error code.
>>>>>
>>>>> Additionally there are SYSTEM-WIDE limits for how much the KERNEL
>>>>> can handle. These are worth looking at when you're trying to handle
>>>>> big traffic on a small (virtual) machine. 
>>>>>
>>>>>   Wietse
>>>> How do I check?
>>> Googling "linux system wide resource limits" shows a
>>> lot of things including
>>> https://www.tecmint.com/increase-set-open-file-limits-in-linux/
>>> which mentions sysctl, /etc/sysctl.conf, ulimit, and
>>> /etc/security/limits.conf.
>>>
>>> Then I realised that the problem is with process limits,
>>> not open file limits, but the same methods apply.
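>>>
>>> To see the kernel-wide side of it (as opposed to the per-process
>>> ulimits below), something like this should work on Linux, although
>>> which keys matter most can vary:
>>>
>>>   # system-wide ceilings the kernel enforces on pids/threads
>>>   sysctl kernel.pid_max kernel.threads-max
>>>   # how many processes are running right now, for comparison
>>>   ps -e --no-headers | wc -l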
>>>
>>> On my VM, the hard and soft process limits are 3681:
>>>
>>>   # ulimit -Hu
>>>   3681
>>>   # ulimit -Su
>>>   3681
>>>
>>> Perhaps yours is less than that.
>>>
>>> To change it permanently, add something like the
>>> following to /etc/security/limits.conf (or to a file in
>>> /etc/security/limits.d/):
>>>
>>>   * hard nproc 4096
>>>   * soft nproc 4096
>>>
>>> Note that this is assuming Linux, and assuming that your
>>> server will be OK with increasing the process limit. That
>>> might not be the case if it's a tiny VM being asked to
>>> do too much. Good luck.
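>>>
>>> And if the kernel-wide ceiling, rather than the per-user limit,
>>> turns out to be the bottleneck, the permanent equivalent would be
>>> a sysctl entry along these lines (file name and value are just
>>> examples):
>>>
>>>   # /etc/sysctl.d/90-nproc.conf -- reload with "sysctl --system"
>>>   kernel.threads-max = 150000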
>>>
>>> cheers,
>>> raf
>>>
>> Raf, I have:
>> # ulimit -Hu
>> 257577
>> # ulimit -Su
>> 257577
>>
>> 7343 ?        Rs    24:22 /usr/lib/postfix/sbin/master
>>
>> # cat /proc/7343/limits
>> Limit                     Soft Limit           Hard Limit           Units
>> Max cpu time              unlimited            unlimited            seconds
>> Max file size             unlimited            unlimited            bytes
>> Max data size             unlimited            unlimited            bytes
>> Max stack size            8388608              unlimited            bytes
>> Max core file size        0                    unlimited            bytes
>> Max resident set          unlimited            unlimited            bytes
>> Max processes             257577               257577               processes
>> Max open files            4096                 4096                 files
>> Max locked memory         65536                65536                bytes
>> Max address space         unlimited            unlimited            bytes
>> Max file locks            unlimited            unlimited            locks
>> Max pending signals       257577               257577               signals
>> Max msgqueue size         819200               819200               bytes
>> Max nice priority         0                    0
>> Max realtime priority     0                    0
>> Max realtime timeout      unlimited            unlimited            us
>>
>> these are the real limits for /usr/lib/postfix/sbin/master
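>>
>> The children should inherit the same limits, but they can be checked
>> the same way if needed, e.g.:
>>
>>   # limits of the running smtpd and postscreen children, not just master
>>   for p in $(pgrep -x smtpd; pgrep -x postscreen); do
>>       grep -E 'Max (processes|open files)' /proc/$p/limits
>>   done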
>> --
> That looks like it should be plenty of processes,
> as long as the server can really support that many.
>
> You could test it with something like this:
>
>       #!/usr/bin/env perl
>       use warnings;
>       use strict;
>       my $max_nprocs = 8000;
>       my $i = 0;
>       while ($i < $max_nprocs)
>       {
>               $i++;
>               my $pid = fork();
>               die "fork #$i failed: $!\n" unless defined $pid;
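>               # child: sleep so the process slot stays in use, then exit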
>               sleep(10), exit(0) if $pid == 0;
>       }
>       print "$i forks succeeded\n";
>
> For example, a VM here reports 7752 for ulimit -Su,
> but the above script failed on the 3470th fork.
>
> cheers,
> raf
>
On the machine with postfix:

time ./1.py
12000 forks succeeded

real    0m1,365s
user    0m0,088s
sys    0m1,276s

--
