On 5/17/15 10:30 PM, He Kuang wrote:
Add a new structure, bpf_pt_regs, which contains both the original
'ctx' (pt_regs) and a trace_probe pointer, and pass this new pointer to
the bpf prog for variable fetching.

Signed-off-by: He Kuang <heku...@huawei.com>
---
  kernel/trace/trace_kprobe.c | 11 +++++++++--
  kernel/trace/trace_probe.h  |  5 +++++
  2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index d0ce590..cee0b28 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1141,8 +1141,15 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
        int size, __size, dsize;
        int rctx;

-       if (prog && !trace_call_bpf(prog, regs))
-               return;
+       if (prog) {
+               struct bpf_pt_regs bpf_pt_regs;
+
+               bpf_pt_regs.pt_regs = *regs;
+               bpf_pt_regs.tp = &tk->tp;
...
+struct bpf_pt_regs {
+       struct pt_regs pt_regs;
+       struct trace_probe *tp;
+};

that is massive overhead.
On x86-64 it means copying the 168-byte pt_regs for every call.
imo that's the wrong trade-off.
