On Thu, 2025-04-03 at 12:19 -0700, Nathan Chancellor wrote:
> 
> Thanks, I applied that change, which shows a slightly different crash
> message now:
Pretty sure it's all just a bug in my inline assembly, and clang
allocates registers differently:

#define ___backtrack_faulted(_faulted)                                  \
        asm volatile (                                                  \
                "mov $0, %0\n"                                          \
                "movq $__get_kernel_nofault_faulted_%=,%1\n"            \
                "jmp _end_%=\n"                                         \
                "__get_kernel_nofault_faulted_%=:\n"                    \
                "mov $1, %0;"                                           \
                "_end_%=:"                                              \
                : "=r" (_faulted),                                      \
                  "=m" (current->thread.segv_continue) ::               \
        )

It _looks_ as though both %0 and %1 are output only, but clang compiles
it to:

      51: 48 83 fb 08               cmp    $0x8,%rbx
      55: 72 44                     jb     9b <_end_0+0x2a>
      57: 48 8b 01                  mov    (%rcx),%rax
// start inline assembly ---vvv--- //
      5a: b8 00 00 00 00            mov    $0x0,%eax
      5f: 48 c7 80 90 07 00 00      movq   $0x0,0x790(%rax)   // crash
      66: 00 00 00 00
                        66: R_X86_64_32S   .text+0x6c
      6a: eb 05                     jmp    71 <_end_0>

000000000000006c <__get_kernel_nofault_faulted_0>:
      6c: b8 01 00 00 00            mov    $0x1,%eax
// end inline assembly ---^^^--- //

0000000000000071 <_end_0>:
      71: 85 c0                     test   %eax,%eax
      73: 75 56                     jne    cb <_end_1+0x10>

which clearly cannot work? I must be missing something. Switching the
first two instructions fixes it, of course, but right now I can't see
what I forgot in terms of constraints to make the compiler not do that.

Probably trivial to someone more familiar with inline assembly.
Modifying the _faulted to be +r instead of =r also fixes it.

johannes
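
A plausible explanation: with a plain "=r" output the compiler is allowed
to assume the asm consumes all of its operands (including the address
register behind the "=m" operand) before writing any output, so it may
give %0 the same register as the one holding current, which is what clang
did with %rax here. One way to express the real constraint is an
early-clobber output; the sketch below is untested and only illustrates
the idea, with the rest of the macro kept as quoted above:

/*
 * Sketch only, not a tested patch: "&" marks %0 as early-clobber,
 * i.e. written before the asm is done with its other operands, so the
 * register allocator can no longer overlap _faulted with the register
 * carrying the address of current->thread.segv_continue.
 */
#define ___backtrack_faulted(_faulted)                                  \
        asm volatile (                                                  \
                "mov $0, %0\n"                                          \
                "movq $__get_kernel_nofault_faulted_%=,%1\n"            \
                "jmp _end_%=\n"                                         \
                "__get_kernel_nofault_faulted_%=:\n"                    \
                "mov $1, %0;"                                           \
                "_end_%=:"                                              \
                : "=&r" (_faulted),     /* was "=r" */                  \
                  "=m" (current->thread.segv_continue) ::               \
        )

The "+r" change mentioned above has a similar effect in practice, since
an in/out operand also cannot share a register with the other operands.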