On June 2, 2016 (Thu) at 02:31, Max R.D. Parmer wrote:
> On Wed, Jun 1, 2016, at 15:49, Tóth Attila wrote:
>> I've just had an unsuccessful attempt to upgrade to systemd-230-r1. It
>> segfaults and slows the system down. The symptoms are better compared to
>> -229, but still significant.
>>
>> https://forums.grsecurity.net/viewtopic.php?f=3&t=4485
>>
>> Some relevant log entries:
>> grsec: denied resource overstep by requesting 8392704 for RLIMIT_STACK
>> against limit 8388608 for /usr/lib64/systemd/systemd[systemd:2735]
>> uid/euid:0/0 gid/egid:0/0, parent /usr/lib64/systemd/systemd[systemd:1]
>> uid/euid:0/0 gid/egid:0/0
>> systemd[2735]: segfault at 39f8d01cf00 ip 00000368d4caa2e4 sp
>> 0000039f8d01cf00 error 6 in libc-2.23.so[368d4c62000+19a000]
>> grsec: Segmentation fault occurred at 0000039f8d01cf00 in
>> /usr/lib64/systemd/systemd[systemd:2735] uid/euid:0/0 gid/egid:0/0, parent
>> /usr/lib64/systemd/systemd[systemd:1] uid/euid:0/0 gid/egid:0/0
>> grsec: bruteforce prevention initiated for the next 30 minutes or until
>> service restarted, stalling each fork 30 seconds. Please investigate the
>> crash report for /usr/lib64/systemd/systemd[systemd:2735] uid/euid:0/0
>> gid/egid:0/0, parent /usr/lib64/systemd/systemd[systemd:1] uid/euid:0/0
>> gid/egid:0/0
>>
>> systemd-coredump[2747]: Process 2735 (systemd) of user 0 dumped core.
>>
>> Stack trace of thread 2735:
>> #0 0x00000368d4caa2e4 _IO_vfprintf (libc.so.6)
>> #1 0x00000368d4d5e852 __vsnprintf_chk (libc.so.6)
>> #2 0x00000368d4d5e7a4 __snprintf_chk (libc.so.6)
>> #3 0x00000000df8db344 n/a (systemd)
>> #4 0x00000000df8db9aa n/a (systemd)
>
> Not necessarily the ideal solution, but have you tried twiddling with
> the stack size in limits.conf?

I checked, and the system-wide defaults apply to systemd: an 8M soft
limit and an _unlimited_ hard limit for stack size. So after exceeding the
soft limit, systemd segfaults and tries to dump core.

cat /proc/1/limits
Limit                     Soft Limit           Hard Limit           Units
Max stack size            8388608              unlimited            bytes

I would expect any process hitting the soft limit, with the hard limit
still unlimited, to handle the situation in some way other than
segfaulting and attempting to dump core...

> If I read this right, grsec limits the size of the stack, which causes
> the process to segfault.

I think grsec does not enforce any stack limit here; it just reports the
overstep and makes the issue more visible.

BR: Dw.
-- 
dr Tóth Attila, Radiológus, 06-20-825-8057
Attila Toth MD, Radiologist, +36-20-825-8057
