Hi,

For the stack protector to be robust, the guard value against which the canary is compared must never be spilled to the stack. This is achieved by having dedicated insn patterns for setting the canary and for comparing it against the guard, patterns which do not expose at the RTL level what is actually happening, so the guard value never appears as a separate RTL temporary. However, the address of the guard is computed with a standard movsi pattern and can therefore be spilled (see PR85434). I'm reaching out to the community for ideas on how to avoid this.
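To make this concrete, here is roughly the shape of things today, written from memory as a simplified sketch rather than the exact arm.md text (predicates, constraints, alternatives and attributes are omitted):

  ;; 1) The guard's address is computed by an ordinary movsi; under -fPIC
  ;;    this is a load from the GOT, and the resulting pseudo can be
  ;;    spilled and reloaded like any other:
  ;;      (set (reg:SI 128) (mem:SI <GOT entry of __stack_chk_guard>))

  ;; 2) The canary set is a dedicated insn that loads the guard through
  ;;    that address and stores it into the canary slot in one go, so the
  ;;    guard value itself never lives across insns:
  (define_insn "*stack_protect_set_insn"
    [(set (match_operand:SI 0 "memory_operand" "=m")
          (unspec:SI [(mem:SI (match_operand:SI 1 "register_operand" "+&r"))]
                     UNSPEC_SP_SET))
     (set (match_dup 1) (const_int 0))]
    ""
    "ldr\t%1, [%1]\;str\t%1, %0\;mov\t%1, #0"
  )

The comparison against the guard in the epilogue follows the same scheme; only the movsi in step 1 is something register allocation is free to spill.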
Spilling is more likely in a PIC context, where the guard's address is loaded from the GOT and is therefore not rematerialized. Likewise, CSE makes things worse by reusing, for the check in the epilogue, the address computed in the prologue to set the canary; the address then stays live across the whole function and becomes sensitive to its peak register pressure. AArch64 and, I believe, x86 do not have the CSE issue for PIC because they represent the GOT access with an outer UNSPEC, whereas the Arm backend represents it as a MEM of an UNSPEC. In theory I think all targets are exposed, because the scheduler could move the GOT access away from the guard/canary comparison, which would make a spill possible.

So far my feeling is that we would need new patterns that also cover the computation of the guard's address, but such patterns would become quite complex in order to cover all the possible ways of forming that address.

Thoughts?

Best regards,

Thomas
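P.S. Purely as an illustration of that last paragraph, the rough shape I have in mind is the hypothetical pattern below. It is only a sketch: the predicate of operand 1 is deliberately left empty, the constraints are hand-waved, and output_stack_protect_combined_set is a made-up helper. The point is that the pattern receives the guard's address expression itself instead of a precomputed register, so the address computation, the load of the guard and the store to the canary slot all stay within one insn until after register allocation:

  ;; Hypothetical sketch only.  The output routine would have to
  ;; materialise the address of operand 1 into the scratch register
  ;; (GOT load, literal pool, movw/movt, ...), load the guard through
  ;; it, store it into the canary slot and clear the scratch -- which
  ;; is exactly the complexity mentioned above.
  (define_insn "stack_protect_combined_set"
    [(set (match_operand:SI 0 "memory_operand" "=m")
          (unspec:SI [(mem:SI (match_operand:SI 1 "" "X"))]
                     UNSPEC_SP_SET))
     (clobber (match_scratch:SI 2 "=&r"))]
    ""
    "* return output_stack_protect_combined_set (operands);"
  )

The equivalent combined test pattern for the epilogue would have the same problem, plus the conditional branch to the failure path.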