https://llvm.org/bugs/show_bug.cgi?id=28663
Bug ID: 28663
Summary: sspstrong and sspreq generate incorrect frame layout with alloca and VLAs
Product: libraries
Version: trunk
Hardware: All
OS: All
Status: NEW
Severity: normal
Priority: P
Component: Common Code Generator Code
Assignee: unassignedb...@nondot.org
Reporter: danielmi...@gmail.com
CC: llvm-bugs@lists.llvm.org
Classification: Unclassified

Created attachment 16793
  --> https://llvm.org/bugs/attachment.cgi?id=16793&action=edit
[PATCH] stop short-circuiting the SSP code for sspstrong

The StackProtector::RequiresStackProtector method is supposed to add layout
information for alloca instructions that need to be protected by the canary.
That layout is what protects normal local variables (including function
pointers, etc.) from linear overflows of the protected arrays. However, in the
code handling calls to alloca and variable-length arrays (not regular
fixed-size arrays, given the IR Clang generates), the method contains an early
return for sspstrong and sspreq:

    // SSP-Strong: Enable protectors for any call to alloca, regardless
    // of size.
    if (Strong)
      return true;

The method has special handling for sspstrong/sspreq after this point, but it
is never reached. Because of the early return, the function still gets a
canary, but the arrays the canary is meant to protect (not only the alloca/VLA
that triggered the issue) are never marked, so they are laid out like normal
local variables.

I've attached a patch removing this early return. Example of how the code
output changes (at -O0):

#include <string.h>
#include <alloca.h>

int foo(char *bar) {
    char *buf = alloca(20);
    strcpy(buf, bar);
    return strlen(buf);
}

--- old_x86.s   2016-07-22 08:44:37.534862251 -0400
+++ new_x86.s   2016-07-22 08:44:18.778486803 -0400
@@ -17,12 +17,12 @@
        subq    $48, %rsp
        movq    %fs:40, %rax
        movq    %rax, -8(%rbp)
-       movq    %rdi, -24(%rbp)
-       leaq    -44(%rbp), %rdi
-       movq    %rdi, -16(%rbp)
-       movq    -24(%rbp), %rsi
+       movq    %rdi, -48(%rbp)
+       leaq    -28(%rbp), %rdi
+       movq    %rdi, -40(%rbp)
+       movq    -48(%rbp), %rsi
        callq   strcpy
-       movq    -16(%rbp), %rdi
+       movq    -40(%rbp), %rdi
        callq   strlen
        movq    %fs:40, %rcx
        cmpq    -8(%rbp), %rcx
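For reference, here is a rough sketch of the surrounding alloca-handling code
with the early return removed, to make clearer which path the sspstrong/sspreq
cases fall through to. This is paraphrased from my reading of the trunk code
rather than copied from the attachment, so the names used below (Layout,
NeedsProtector, SSPLK_LargeArray, SSPLK_SmallArray, SSPBufferSize) may not
match the exact revision:

    if (const AllocaInst *AI = dyn_cast<AllocaInst>(&I)) {
      if (AI->isArrayAllocation()) {
        // Early return removed here: previously `if (Strong) return true;`
        // skipped the layout bookkeeping below for sspstrong/sspreq.
        if (const auto *CI = dyn_cast<ConstantInt>(AI->getArraySize())) {
          if (CI->getLimitedValue(SSPBufferSize) >= SSPBufferSize) {
            // Calls to alloca with size >= ssp-buffer-size are recorded as
            // large arrays so they are placed next to the canary.
            Layout.insert(std::make_pair(AI, SSPLK_LargeArray));
            NeedsProtector = true;
          } else if (Strong) {
            // Under sspstrong, even small constant-size alloca calls are
            // recorded so they participate in the protected layout.
            Layout.insert(std::make_pair(AI, SSPLK_SmallArray));
            NeedsProtector = true;
          }
        } else {
          // Variable-size alloca calls and VLAs are always recorded.
          Layout.insert(std::make_pair(AI, SSPLK_LargeArray));
          NeedsProtector = true;
        }
        continue;
      }
      // ... handling for ordinary allocas follows ...
    }

With the return gone, the alloca/VLA still forces a protector via
NeedsProtector, but it is now also recorded in Layout along with the function's
other arrays. As I read the diff above, that is exactly what changes: in
new_x86.s the 20-byte buffer moves up next to the canary at -8(%rbp) and the
pointer locals move below it, whereas in old_x86.s the pointers sat between the
buffer and the canary.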