Yes, it's not something an application would expect to need to keep increasing;
that's just the part of <limits.h> I'd move it to. The definition could also be
the maximum required by a processor family, with sysconf() reporting a possibly
lower value for a particular processor stepping. At least that way an
application that doesn't use sysconf() won't be getting SIGSEGV faults.
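
For illustration, here is a minimal sketch of how an application could size an
alternate signal stack at run time, assuming an implementation that provides
_SC_SIGSTKSZ as an extension (as current glibc does) and falling back to the
header constant otherwise; none of this is mandated by the standard text under
discussion:

    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int install_altstack(void)
    {
        long sz = -1;
    #ifdef _SC_SIGSTKSZ
        sz = sysconf(_SC_SIGSTKSZ);   /* runtime value, may exceed SIGSTKSZ */
    #endif
        if (sz <= 0)
            sz = SIGSTKSZ;            /* fall back to the compile-time constant */

        stack_t ss;
        ss.ss_sp = malloc((size_t)sz);
        if (ss.ss_sp == NULL)
            return -1;
        ss.ss_size = (size_t)sz;
        ss.ss_flags = 0;
        return sigaltstack(&ss, NULL);
    }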

Additionally, I believe the definition can be calculated at compile time as a
multiple of ( sizeof(ucontext_t) + sizeof(overhead struct(s)) ), plus whatever
other overhead applies, so I don't see any real need to use sysconf(). This may
mean having configure generate <signal.h> from a signal.in template, based on
config.guess, but that's not the standard's headache.
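
As a rough sketch of what that could look like in an implementation's
<signal.h>, assuming ucontext_t is already declared earlier in that header; the
overhead figure and the multiplier are purely illustrative values that
configure would substitute per config.guess:

    #define __SIGFRAME_OVERHEAD  1024   /* hypothetical per-target value */
    #define MINSIGSTKSZ  (sizeof(ucontext_t) + __SIGFRAME_OVERHEAD)
    #define SIGSTKSZ     (4 * MINSIGSTKSZ)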


The _CS_*, _SC_*, and _PC_* constants are deliberately not in the XSH 2.2.2
table, as of Issue 6 TC1, because adding any of them also requires a bump in
_POSIX_VERSION or _POSIX2_VERSION, and often the XSI version as well. This is
so that each use of a constant doesn't need an individual #ifdef to test option
group availability. The previous text allowed an implementation that didn't
support an option group to omit the related constants from <unistd.h>. A simple
check of the version macro at the top of a source C file now suffices to
establish that those constants shall be available.
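
By way of illustration, that single check might look like this at the top of a
source file; the value 200112L (Issue 6) is used here only as an example:

    #include <unistd.h>

    #if !defined(_POSIX_VERSION) || _POSIX_VERSION < 200112L
    #error "requires an Issue 6 or later implementation"
    #endif

    /* From here on, the _CS_*, _SC_*, and _PC_* constants defined by that
       version can be used without per-constant #ifdef guards. */
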
On Tuesday, March 9, 2021 Eric Blake <ebl...@redhat.com> wrote:
On 3/9/21 10:14 AM, shwaresyst wrote:
> 
> To me that looks like a conformance violation and should be reverted. There 
> is no _SC_SIGSTKSZ defined in <unistd.h> by the standard, to begin with, so 
> that use of sysconf() is a non-portable extension on its own.

Portable apps can't use _SC_SIGSTKSZ, but the standard generally permits
implementations to define further constants.  Then again, re-reading XSH
2.2.2:

" Implementations may add symbols to the headers shown in the following
table, provided the identifiers for those symbols either:

    Begin with the corresponding reserved prefixes in the table, or
..."

but the table lacks a row for <unistd.h> with _CS_* and _SC_* constants.
 Looks like you found an independent defect.

> 
> I could see the definition of SIGSTKSZ being changed to the static minimum a 
> particular processor requires, or is initially allocated as a 'safe' amount, 
> rather than static "default size", and moving SIGSTKSZ to <limits.h>. This 
> would contrast to MINSIGSTKSZ as the lowest value for a platform for all 
> supported processors. Then an application could use sysconf() to query for 
> the maximum size the configuration supports if it wants to use more than 
> that, as a runtime increasable limit.

As I understand it, the concern in glibc is less about runtime
increasability, so much as ABI compatibility with applications compiled
against older headers at a time when the kernel had less state
information to store during a context switch.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.          +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
