Nicholas Piggin wrote:
> Excerpts from Naveen N. Rao's message of April 27, 2021 11:59 pm:
>> Nicholas Piggin wrote:
>>> + *
>>> + * H_CONFER from spin locks must be treated separately though and use _notrace
>>> + * plpar_hcall variants, see yield_to_preempted().
>>>   */
>>>  static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);
>>>
>>> @@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
>>>
>>>         depth = this_cpu_ptr(&hcall_trace_depth);
>>>
>>> -       if (*depth)
>>> +       if (WARN_ON_ONCE(*depth))
>>>                 goto out;

>> I don't think this will be helpful. The hcall trace depth tracking is for the tracepoint and I suspect that this warning will be triggered quite easily. Since we have recursion protection, I don't think we should warn here.

> What would trigger recursion?

The trace code that this protects: trace_hcall_entry(). The tracing code itself can end up doing an hcall, as we see in the first patch in this series:
 plpar_hcall_norets_trace+0x34/0x8c (unreliable)
 __pv_queued_spin_lock_slowpath+0x684/0x710
 trace_clock_global+0x148/0x150
 ring_buffer_lock_reserve+0x12c/0x630
 trace_event_buffer_lock_reserve+0x80/0x220
 trace_event_buffer_reserve+0x7c/0xd0
 trace_event_raw_event_hcall_entry+0x68/0x150
 __trace_hcall_entry+0x160/0x180
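
To make the cycle concrete, here is a minimal user-space model of the guard (a sketch, not the kernel code: the function names are made up, and hcall_trace_depth is a plain counter rather than a per-CPU variable since the model has only one "CPU"):

 #include <stdio.h>

 static unsigned int hcall_trace_depth;  /* per-CPU in the kernel */

 static void trace_hcall(unsigned long opcode);

 /* Stands in for H_CONFER issued from the paravirt spinlock slow path. */
 static void spinlock_slowpath_hcall(void)
 {
         trace_hcall(0xD8 /* illustrative opcode */);
 }

 /* Models __trace_hcall_entry(): check the guard, then trace. */
 static void trace_hcall(unsigned long opcode)
 {
         if (hcall_trace_depth)  /* recursion guard */
                 return;
         hcall_trace_depth++;

         printf("tracing hcall %#lx\n", opcode);
         /* The ring buffer path may contend a lock and do an hcall... */
         spinlock_slowpath_hcall();

         hcall_trace_depth--;
 }

 int main(void)
 {
         trace_hcall(0x108 /* illustrative opcode */);
         return 0;
 }

Without the guard, trace_hcall() and spinlock_slowpath_hcall() would recurse until the stack overflowed; with it, the inner hcall is simply not traced, which is exactly what the depth counter is there for.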


There is also a comment around hcall_trace_depth that mentions this:

 /*
  * Since the tracing code might execute hcalls we need to guard against
  * recursion. One example of this are spinlocks calling H_YIELD on
  * shared processor partitions.
  */
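
For completeness, the other half of the picture as a similar user-space sketch (again with made-up names): the comment added by this patch points at giving such callers a _notrace entry point, as yield_to_preempted() does with the plpar_hcall _notrace variants. Once every caller that can run inside the tracer is converted, a non-zero depth in __trace_hcall_entry() really would indicate a bug, which is presumably the argument for the WARN_ON_ONCE():

 #include <stdio.h>

 /* Stand-in for the real hypervisor trap; no tracing hooks at all. */
 static void raw_hcall(unsigned long opcode)
 {
         (void)opcode;   /* would trap to the hypervisor here */
 }

 /* Traced entry point: fires the (modelled) tracepoint first. */
 static void plpar_hcall_traced(unsigned long opcode)
 {
         printf("trace: hcall %#lx\n", opcode);
         raw_hcall(opcode);
 }

 /* _notrace entry point for callers that can run inside the tracer. */
 static void plpar_hcall_notrace(unsigned long opcode)
 {
         raw_hcall(opcode);
 }

 int main(void)
 {
         plpar_hcall_traced(0x108 /* ordinary hcall: traced */);
         plpar_hcall_notrace(0xD8 /* lock slow path: not traced */);
         return 0;
 }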


Thanks,
Naveen
