On Tue, Mar 9, 2021 at 1:53 PM shwaresyst via Libc-alpha
<libc-al...@sourceware.org> wrote:
>
> Yes, it's not something an application would expect to need to keep
> increasing, just that it's the part of <limits.h> I'd move it to. The
> definition could also be the max required by a processor family, with
> sysconf() reporting a possibly lower value for a particular processor
> stepping. At least that way an application that doesn't use sysconf()
> won't be getting SIGSEGV faults.
>
> Additionally, I believe the definition can be calculated at compile time
> as a multiple of ( sizeof(ucontext_t) + sizeof(overhead_struct(s)) ),
> whatever other overhead applies, so I don't see any real need to use
> sysconf(). This may mean having to munge a <signal.in> by configure,
> based on config.guess, but that's not the standard's headache.
>
At compile time, we don't know what the minimum signal stack size will be
at run-time, especially 10 years from now.

--
H.J.
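
For illustration only, here is a minimal sketch of the run-time approach
being argued for, written against a sysconf name along the lines of the
proposed _SC_MINSIGSTKSZ. The constant name, the fallback to the
compile-time MINSIGSTKSZ, and the headroom multiplier are assumptions for
the sketch, not something settled in this thread:

/*
 * Size the alternate signal stack when the program runs, not when it
 * was compiled, so a future CPU with a larger saved register state
 * does not overflow a stack sized years earlier.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    long min = sysconf(_SC_MINSIGSTKSZ);   /* assumed run-time query */
    if (min == -1)
        min = MINSIGSTKSZ;                 /* fall back to the compile-time value */

    stack_t ss;
    ss.ss_size = (size_t)min * 4;          /* leave headroom above the minimum */
    ss.ss_sp = malloc(ss.ss_size);
    ss.ss_flags = 0;
    if (ss.ss_sp == NULL || sigaltstack(&ss, NULL) != 0) {
        perror("sigaltstack");
        return 1;
    }
    printf("alternate signal stack: %zu bytes\n", ss.ss_size);
    return 0;
}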