On 12/9/2015 1:07 PM, Steven Rostedt wrote:
On Wed,  9 Dec 2015 09:29:20 -0800
Yang Shi <yang....@linaro.org> wrote:

Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: "H. Peter Anvin" <h...@zytor.com>
Cc: x...@kernel.org
Signed-off-by: Yang Shi <yang....@linaro.org>
---
  arch/x86/mm/gup.c | 7 +++++++
  1 file changed, 7 insertions(+)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index ae9a37b..a96bcb7 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -12,6 +12,9 @@

  #include <asm/pgtable.h>

+#define CREATE_TRACE_POINTS
+#include <trace/events/gup.h>>

First off, does the above even compile?

Second, you already created the tracepoints in mm/gup.c; why are you
creating them here again? CREATE_TRACE_POINTS must be defined only once
per events/*.h file.
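
For reference, the usual pattern is that exactly one .c file
instantiates the events and everything else only picks up the
declarations. Roughly (a sketch, assuming the trace/events/gup.h
header added earlier in this series):

  /* mm/gup.c: the ONE translation unit that instantiates the
   * tracepoints declared in trace/events/gup.h */
  #define CREATE_TRACE_POINTS
  #include <trace/events/gup.h>

  /* arch/x86/mm/gup.c: any other user just includes the header,
   * which declares trace_gup_get_user_pages_fast() etc. */
  #include <trace/events/gup.h>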

Sorry about that. The typo was introduced by a git amend without
running a test build, and the duplicate tracepoint definitions were not
caught by my shaky test script.

Will fix them soon.
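
Concretely, the fix is presumably just to drop the duplicate
CREATE_TRACE_POINTS define from arch/x86/mm/gup.c (mm/gup.c already
instantiates the events) and the stray '>', leaving only:

  #include <asm/pgtable.h>

  #include <trace/events/gup.h>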

Thanks,
Yang


-- Steve

+
  static inline pte_t gup_get_pte(pte_t *ptep)
  {
  #ifndef CONFIG_X86_PAE
@@ -270,6 +273,8 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
                                        (void __user *)start, len)))
                return 0;

+       trace_gup_get_user_pages_fast(start, nr_pages);
+
        /*
         * XXX: batch / limit 'nr', to avoid large irq off latency
         * needs some instrumenting to determine the common sizes used by
@@ -373,6 +378,8 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
        } while (pgdp++, addr = next, addr != end);
        local_irq_enable();

+       trace_gup_get_user_pages_fast(start, nr_pages);
+
        VM_BUG_ON(nr != (end - start) >> PAGE_SHIFT);
        return nr;
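
For context, the trace_gup_get_user_pages_fast() calls above rely on
the trace/events/gup.h header added elsewhere in this series (not
shown here). Such a header is built around the kernel's TRACE_EVENT()
macro; a sketch, with the exact event fields assumed:

  #undef TRACE_SYSTEM
  #define TRACE_SYSTEM gup

  #if !defined(_TRACE_GUP_H) || defined(TRACE_HEADER_MULTI_READ)
  #define _TRACE_GUP_H

  #include <linux/tracepoint.h>

  TRACE_EVENT(gup_get_user_pages_fast,
          TP_PROTO(unsigned long start, int nr_pages),
          TP_ARGS(start, nr_pages),
          TP_STRUCT__entry(
                  __field(unsigned long, start)
                  __field(int, nr_pages)
          ),
          TP_fast_assign(
                  __entry->start = start;
                  __entry->nr_pages = nr_pages;
          ),
          TP_printk("start=%lx nr_pages=%d",
                    __entry->start, __entry->nr_pages)
  );

  #endif /* _TRACE_GUP_H */

  /* This part must be outside the #ifdef guard */
  #include <trace/define_trace.h>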


