On 2 June 2016 (Thu) at 21:39, "Tóth Attila" wrote:
> On 2 June 2016 (Thu) at 02:31, Max R.D. Parmer wrote:
>> Not necessarily the ideal solution, but have you tried twiddling with
>> the stack size in limits.conf?
>
> I checked, and the system-wide defaults apply to systemd: an 8M soft
> limit and an _unlimited_ hard limit for stack size. So after exceeding the
> soft limit, systemd segfaults and tries to dump core.
>
> cat /proc/1/limits
> Limit                     Soft Limit           Hard Limit           Units
> Max stack size            8388608              unlimited            bytes
>
> I expect any process to handle hitting the soft limit while the hard
> limit is unlimited in some other way than segfaulting and attempting
> to dump core...

Increasing the limit doesn't fix the issue, which doesn't surprise me.

For those who are not familiar: systemd doesn't respect limits.conf. The
default values can be configured in system.conf, and per-unit limits can be
specified in the unit files. To my surprise, systemd doesn't seem to pay
attention to its own configuration file either. To provide an increased
stack limit for init, I also modified the kernel defaults, with no success.
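
For reference, these are the knobs I mean (directive names as documented;
the byte value is only an example, and "foo.service" is a placeholder):

    /etc/systemd/system.conf:

        [Manager]
        DefaultLimitSTACK=16777216

    per-unit, e.g. a drop-in for foo.service:

        [Service]
        LimitSTACK=16777216

As far as I understand, these only affect the limits of units systemd
starts, not PID 1's own stack limit, which is why I also tried changing the
kernel defaults.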

>> If I read this right, grsec limits the size of the stack, which causes
>> the process to segfault.
>
> I think grsec does not enforce any stack limits here; it just reports the
> issue and makes it more visible.

I did a bisect, and it turns out that this commit is responsible for the
symptoms:
https://github.com/systemd/systemd/commit/d054f0a4d451120c26494263fc4dc175bfd405b1
tree-wide: use xsprintf() where applicable
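
For context, my understanding of xsprintf() (roughly, from reading the
systemd tree, so treat this as an approximation rather than a quote of the
source) is that it is a thin wrapper around snprintf() which asserts that
the formatted string fit into the buffer:

    /* Approximate sketch of the macro as I read it; assert_message_se()
     * aborts with the given message when the condition is false. */
    #define xsprintf(buf, fmt, ...) \
            assert_message_se((size_t) snprintf(buf, ELEMENTSOF(buf), fmt, __VA_ARGS__) < ELEMENTSOF(buf), \
                              "xsprintf: " #buf "[] must be big enough")

So the commit should only swap plain sprintf()/snprintf() calls for a
bounds-checked variant; it's not obvious to me why that would change stack
usage.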

I'll try to contact the developer to see whether he has an idea about what
is happening here.

BR: Dw.
-- 
dr Tóth Attila, Radiológus, 06-20-825-8057
Attila Toth MD, Radiologist, +36-20-825-8057

