On Tue, Apr 23, 2019 at 12:26:52AM +0800, Kairui Song wrote:
> Currently perf callchain doesn't work well with the ORC unwinder
> when sampling from a trace point: we get a useless in-kernel callchain
> like this:
> 
> perf  6429 [000]    22.498450:             kmem:mm_page_alloc: page=0x176a17 pfn=1534487 order=0 migratetype=0 gfp_flags=GFP_KERNEL
>     ffffffffbe23e32e __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
>       7efdf7f7d3e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
>       5651468729c1 [unknown] (/usr/bin/perf)
>       5651467ee82a main+0x69a (/usr/bin/perf)
>       7efdf7eaf413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
>     5541f689495641d7 [unknown] ([unknown])
> 
> The root cause is that trace point events don't provide a real
> snapshot of the hardware registers. Instead, perf tries to fetch the
> required caller registers and compose a fake register snapshot that
> is supposed to contain enough information to start unwinding.
> However, without CONFIG_FRAME_POINTER, fetching the caller's BP as
> the frame pointer fails, and the current frame pointer is returned
> instead. The result is an invalid register combination that confuses
> the unwinder and ends the stacktrace early.
> 
> So in such cases, don't try to dump BP at all; let the unwinder start
> directly when the registers are not a real snapshot. Use SP as the
> skip mark, so the unwinder skips all frames until it reaches the
> frame of the trace point caller.
> 
> Tested with both the frame pointer unwinder and the ORC unwinder;
> this makes perf callchain produce the full kernel-space stacktrace
> again, like this:
> 
> perf  6503 [000]  1567.570191:             kmem:mm_page_alloc: page=0x16c904 pfn=1493252 order=0 migratetype=0 gfp_flags=GFP_KERNEL
>     ffffffffb523e2ae __alloc_pages_nodemask+0x22e (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb52383bd __get_free_pages+0xd (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb52fd28a __pollwait+0x8a (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb521426f perf_poll+0x2f (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb52fe3e2 do_sys_poll+0x252 (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb52ff027 __x64_sys_poll+0x37 (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb500418b do_syscall_64+0x5b (/lib/modules/5.1.0-rc3+/build/vmlinux)
>     ffffffffb5a0008c entry_SYSCALL_64_after_hwframe+0x44 (/lib/modules/5.1.0-rc3+/build/vmlinux)
>       7f71e92d03e8 __poll+0x18 (/usr/lib64/libc-2.28.so)
>       55a22960d9c1 [unknown] (/usr/bin/perf)
>       55a22958982a main+0x69a (/usr/bin/perf)
>       7f71e9202413 __libc_start_main+0xf3 (/usr/lib64/libc-2.28.so)
>     5541f689495641d7 [unknown] ([unknown])
> 
> Co-developed-by: Josh Poimboeuf <jpoim...@redhat.com>
> Signed-off-by: Kairui Song <kas...@redhat.com>

Thanks!
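The skip-mark idea above can be sketched in plain user-space C. This is a simplified illustration, not the kernel's actual unwinder API: the struct names, the flat frame array, and `unwind_with_skip()` are all hypothetical stand-ins. The point it demonstrates is that frames whose SP lies below the recorded mark belong to perf's own sampling machinery and are dropped, so the reported callchain starts at the trace point caller:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for kernel types (not the real pt_regs/unwind API). */
struct frame {
	uint64_t ip;	/* return address for this frame */
	uint64_t sp;	/* stack pointer at this frame */
};

/*
 * Walk the frames and drop everything below the skip mark. On x86 the
 * stack grows down, so frames with SP < mark are deeper (more recent)
 * than the tracepoint caller and belong to the sampling path itself.
 */
static size_t unwind_with_skip(const struct frame *frames, size_t n,
			       uint64_t skip_sp,
			       uint64_t *out, size_t out_cap)
{
	size_t count = 0;

	for (size_t i = 0; i < n && count < out_cap; i++) {
		if (frames[i].sp < skip_sp)
			continue;	/* still inside perf's own helpers */
		out[count++] = frames[i].ip;
	}
	return count;
}
```

With a mark taken at the tracepoint caller's SP, the perf-internal frames fall away and only the caller's chain survives.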

> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
> index e47ef764f613..ab135abe62e0 100644
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -1059,7 +1059,7 @@ static inline void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned lo
>   * the nth caller. We only need a few of the regs:
>   * - ip for PERF_SAMPLE_IP
>   * - cs for user_mode() tests
> - * - bp for callchains
> + * - sp for callchains
>   * - eflags, for future purposes, just in case
>   */
>  static inline void perf_fetch_caller_regs(struct pt_regs *regs)

I've extended that like so:

--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1058,12 +1058,18 @@ static inline void perf_arch_fetch_calle
 #endif
 
 /*
- * Take a snapshot of the regs. Skip ip and frame pointer to
- * the nth caller. We only need a few of the regs:
+ * When generating a perf sample in-line, instead of from an interrupt /
+ * exception, we lack a pt_regs. This is typically used from software events
+ * like: SW_CONTEXT_SWITCHES, SW_MIGRATIONS and the tie-in with tracepoints.
+ *
+ * We typically don't need a full set, but (for x86) do require:
  * - ip for PERF_SAMPLE_IP
  * - cs for user_mode() tests
- * - sp for callchains
- * - eflags, for future purposes, just in case
+ * - sp for PERF_SAMPLE_CALLCHAIN
+ * - eflags for MISC bits and CALLCHAIN (see: perf_hw_regs())
+ *
+ * NOTE: assumes @regs is otherwise already 0 filled; this is important for
+ * things like PERF_SAMPLE_REGS_INTR.
  */
 static inline void perf_fetch_caller_regs(struct pt_regs *regs)
 {
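For context on the perf_hw_regs() mention in the comment: the trick is that a synthesized snapshot can be told apart from a real interrupt frame by its flags word, since bit 1 of a genuine EFLAGS value is architecturally always set (X86_EFLAGS_FIXED). A minimal user-space sketch of that distinction follows; the struct and function names are hypothetical, modeled loosely on pt_regs and the x86 perf code:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative stand-in for X86_EFLAGS_FIXED: bit 1 of EFLAGS, which is
 * always 1 in any flags value that came from real hardware.
 */
#define EFLAGS_FIXED (1UL << 1)

struct regs {
	uint64_t ip;
	uint64_t sp;
	uint64_t flags;
};

/* In-line (fake) snapshot: flags stays zeroed, so the FIXED bit is clear. */
static void fetch_caller_regs(struct regs *regs, uint64_t ip, uint64_t sp)
{
	regs->ip = ip;
	regs->sp = sp;
	regs->flags = 0;	/* a value no real EFLAGS can have */
}

/* Loosely mirrors perf_hw_regs(): a genuine snapshot has the FIXED bit set. */
static int hw_regs(const struct regs *regs)
{
	return (regs->flags & EFLAGS_FIXED) != 0;
}
```

So the callchain code can branch on this test: unwind from the regs directly when they are real, or fall back to the skip-mark walk when they were composed in-line.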
