Eric Blake <ebb9 <at> byu.net> writes:
> >
> > On the other hand, POSIX states for sigaction() that the handler's "third
> > argument can be cast to a pointer to an object of type ucontext_t to refer to
> > the receiving thread’s context that was interrupted when the signal was
> > delivered."
> |
> |> The receiving thread may currently be in the alternate stack, but
> |> the signal was delivered when the thread was still in the primary stack.  So
> |> maybe we should file this as a bug with the kernel folks, and hope that a
> |> future version of Linux change uc_stack to supply the information we want.
> |> After all, we don't need uc_stack to learn about the current (alternate) stack;
> |> we can use sigaltstack() for that.
> |
> | I think you are misunderstanding things, and there is no Linux kernel bug here.
>
> Where would I even go to ask a Linux developer this question?

I've tested on another platform.  On Solaris 8, uc_stack is (IMHO properly)
pointing to the primary stack, even though the handler is executing on the
alternate stack.  So on Solaris, the existing c-stack module correctly
distinguishes between stack overflow and true SEGV, and there is no need for
any mincore()-like API for determining which memory is mapped.

Here is what was weird on Solaris: when running the program natively, the
stack overflow is detected.  But when running under gdb 5.3 (yes, the
machine's copy of gdb is 6 years old), even though the ucontext_t is properly
set, the fact that the program runs as an inferior with gdb intercepting
signals means that the siginfo_t structure reported a si_code of 0 and lost
the si_addr information; so the inferior program reported "program error"
rather than "stack overflow".  Probably a gdb bug, and hopefully one that has
been fixed in the meantime.

If Linux wants to follow Solaris' lead, then it's looking more and more like a
Linux kernel bug that uc_stack isn't useful.  Again, does anyone know how to
report something like that?
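For concreteness, here is a rough, untested sketch of the check all of this
hinges on.  It is not the actual c-stack code; the names, the fixed 64 KiB
alternate stack, and the one-page slack are only illustrative.  The handler is
installed with SA_SIGINFO | SA_ONSTACK on a sigaltstack() stack, casts its
third argument to ucontext_t * (per the POSIX wording quoted above), and
compares si_addr against the stack that uc_stack describes.  On Solaris 8 that
is the primary stack, so the comparison can detect overflow; on Linux,
uc_stack describes the alternate stack instead, which is why the heuristic
breaks down there.

#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>
#include <unistd.h>

/* Alternate stack for the handler; should be at least SIGSTKSZ bytes.  */
static char alt_stack[64 * 1024];

static void
segv_handler (int signo, siginfo_t *info, void *context0)
{
  const char *msg = "program error\n";
  (void) signo;

  /* A debugger or kill() can leave si_code <= 0; then si_addr is
     meaningless and the fault cannot be classified.  */
  if (info && 0 < info->si_code)
    {
      /* The POSIX-sanctioned cast of the third argument.  */
      ucontext_t *context = context0;
      const char *addr = info->si_addr;
      const char *base = context->uc_stack.ss_sp;
      size_t size = context->uc_stack.ss_size;

      /* Treat a fault inside the stack described by uc_stack, or within
         one page of it, as a stack overflow.  (A real implementation
         would also account for the direction of stack growth.)  */
      if (base - 4096 <= addr && addr < base + size + 4096)
        msg = "stack overflow\n";
    }

  /* write() is async-signal-safe; avoid stdio in the handler.  */
  write (STDERR_FILENO, msg, strlen (msg));
  _exit (EXIT_FAILURE);
}

int
main (void)
{
  stack_t ss;
  struct sigaction act;

  ss.ss_sp = alt_stack;
  ss.ss_size = sizeof alt_stack;
  ss.ss_flags = 0;
  sigaltstack (&ss, NULL);

  memset (&act, 0, sizeof act);
  sigemptyset (&act.sa_mask);
  act.sa_flags = SA_SIGINFO | SA_ONSTACK;   /* run the handler on alt_stack */
  act.sa_sigaction = segv_handler;
  sigaction (SIGSEGV, &act, NULL);

  /* ... recurse until the primary stack overflows ... */
  return 0;
}

Note the si_code guard at the top of the handler: with gdb 5.3 reporting a
si_code of 0, that branch is skipped and you get "program error", which
matches what I saw under the debugger.

--
Eric Blake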