On Thu, Sep 04, 2025 at 02:44:03PM -0700, Andrii Nakryiko wrote:
> On Thu, Sep 4, 2025 at 1:52 PM Peter Zijlstra <pet...@infradead.org> wrote:
> >
> > On Thu, Sep 04, 2025 at 01:49:49PM -0700, Andrii Nakryiko wrote:
> > > On Thu, Sep 4, 2025 at 1:35 PM Peter Zijlstra <pet...@infradead.org>
> > > wrote:
> > > >
> > > > On Thu, Sep 04, 2025 at 11:27:45AM -0700, Andrii Nakryiko wrote:
> > > >
> > > > > > > So I've been thinking what's the simplest and most reliable way to
> > > > > > > feature-detect support for this sys_uprobe (e.g., for libbpf to
> > > > > > > know
> > > > > > > whether we should attach at nop5 vs nop1), and clearly that would
> > > > > > > be
> > > > > >
> > > > > > wrt nop5/nop1.. so the idea is to have USDT macro emit both
> > > > > > nop1,nop5
> > > > > > and store some info about that in the usdt's elf note, right?
> > > >
> > > > Wait, what? You're doing to emit 6 bytes and two nops? Why? Surely the
> > > > old kernel can INT3 on top of a NOP5?
> > > >
> > >
> > > Yes it can, but it's 2x slower in terms of uprobe triggering compared
> > > to nop1.
> >
> > Why? That doesn't really make sense.
> >
> Of course it's silly... It's because nop5 wasn't recognized as one of
> the emulated instructions, so was handled through single-stepping.
*groan*

> > I realize its probably to late to fix the old kernel not to be stupid --
> > this must be something stupid, right? But now I need to know.
>
> Jiri fixed this, but as you said, too late for old kernels. See [0]
> for the patch that landed not so long ago.
>
> [0] https://lore.kernel.org/all/20250414083647.1234007-1-jo...@kernel.org/

Ooh, that suggests we do something like so:

diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index 0a8c0a4a5423..223f8925097b 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -309,6 +309,29 @@ static int uprobe_init_insn(struct arch_uprobe *auprobe, struct insn *insn, bool
 	return -ENOTSUPP;
 }
 
+static bool insn_is_nop(struct insn *insn)
+{
+	return insn->opcode.nbytes == 1 && insn->opcode.bytes[0] == 0x90;
+}
+
+static bool insn_is_nopl(struct insn *insn)
+{
+	if (insn->opcode.nbytes != 2)
+		return false;
+
+	if (insn->opcode.bytes[0] != 0x0f || insn->opcode.bytes[1] != 0x1f)
+		return false;
+
+	if (!insn->modrm.nbytes)
+		return false;
+
+	if (X86_MODRM_REG(insn->modrm.bytes[0]) != 0)
+		return false;
+
+	/* 0f 1f /0 - NOPL */
+	return true;
+}
+
 #ifdef CONFIG_X86_64
 
 struct uretprobe_syscall_args {
@@ -1158,29 +1181,6 @@ void arch_uprobe_optimize(struct arch_uprobe *auprobe, unsigned long vaddr)
 	mmap_write_unlock(mm);
 }
 
-static bool insn_is_nop(struct insn *insn)
-{
-	return insn->opcode.nbytes == 1 && insn->opcode.bytes[0] == 0x90;
-}
-
-static bool insn_is_nopl(struct insn *insn)
-{
-	if (insn->opcode.nbytes != 2)
-		return false;
-
-	if (insn->opcode.bytes[0] != 0x0f || insn->opcode.bytes[1] != 0x1f)
-		return false;
-
-	if (!insn->modrm.nbytes)
-		return false;
-
-	if (X86_MODRM_REG(insn->modrm.bytes[0]) != 0)
-		return false;
-
-	/* 0f 1f /0 - NOPL */
-	return true;
-}
-
 static bool can_optimize(struct insn *insn, unsigned long vaddr)
 {
 	if (!insn->x86_64 || insn->length != 5)
@@ -1428,17 +1428,13 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
 	insn_byte_t p;
 	int i;
 
-	/* x86_nops[insn->length]; same as jmp with .offs = 0 */
-	if (insn->length <= ASM_NOP_MAX &&
-	    !memcmp(insn->kaddr, x86_nops[insn->length], insn->length))
+	if (insn_is_nop(insn) || insn_is_nopl(insn))
 		goto setup;
 
 	switch (opc1) {
 	case 0xeb:	/* jmp 8 */
 	case 0xe9:	/* jmp 32 */
 		break;
-	case 0x90:	/* prefix* + nop; same as jmp with .offs = 0 */
-		goto setup;
 
 	case 0xe8:	/* call relative */
 		branch_clear_offset(auprobe, insn);