Steven Rostedt wrote:
On Tue, 18 Jun 2019 23:53:11 +0530
"Naveen N. Rao" <naveen.n....@linux.vnet.ibm.com> wrote:

Naveen N. Rao wrote:
> Steven Rostedt wrote:
>> On Tue, 18 Jun 2019 20:17:04 +0530
>> "Naveen N. Rao" <naveen.n....@linux.vnet.ibm.com> wrote:
>>
>>> @@ -1551,7 +1551,7 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
>>>  	key.flags = end;	/* overload flags, as it is unsigned long */
>>>
>>>  	for (pg = ftrace_pages_start; pg; pg = pg->next) {
>>> -		if (end < pg->records[0].ip ||
>>> +		if (end <= pg->records[0].ip ||
>>
>> This breaks the algorithm. "end" is inclusive. That is, if you look for
>> a single byte, where "start" and "end" are the same, and it happens to
>> be the first ip on the pg page, it will be skipped, and not found.
>
> Thanks. It looks like I should be overriding ftrace_location() instead.
> I will update this patch.

I think I will have ftrace own the two-instruction range, regardless of
whether the preceding instruction is a 'mflr r0' or not. This simplifies
things, and I don't see an issue with it as of now. I will do more
testing to confirm.

- Naveen


--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -951,6 +951,16 @@ void arch_ftrace_update_code(int command)
 }
 #ifdef CONFIG_MPROFILE_KERNEL
+/*
+ * We consider two instructions -- 'mflr r0', 'bl _mcount' -- to be part
+ * of ftrace. When checking for the first instruction, we want to include
+ * the next instruction in the range check.
+ */
+unsigned long ftrace_location(unsigned long ip)
+{
+       return ftrace_location_range(ip, ip + MCOUNT_INSN_SIZE);
+}
+
 /* Returns 1 if we patched in the mflr */
 static int __ftrace_make_call_prep(struct dyn_ftrace *rec)
 {
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 21d8e201ee80..122e2bb4a739 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1573,7 +1573,7 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
  * the function tracer. It checks the ftrace internal tables to
  * determine if the address belongs or not.
  */
-unsigned long ftrace_location(unsigned long ip)
+unsigned long __weak ftrace_location(unsigned long ip)
 {
        return ftrace_location_range(ip, ip);
 }

Actually, instead of making this a weak function, let's do this:


In include/ftrace.h:

#ifndef FTRACE_IP_EXTENSION
# define FTRACE_IP_EXTENSION    0
#endif


In arch/powerpc/include/asm/ftrace.h

#define FTRACE_IP_EXTENSION     MCOUNT_INSN_SIZE


Then we can just have:

unsigned long ftrace_location(unsigned long ip)
{
        return ftrace_location_range(ip, ip + FTRACE_IP_EXTENSION);
}

Thanks, that's indeed nice. I hope you don't mind me adding your SOB for that.

- Naveen

