From: Jesper Dangaard Brouer
Date: Thu, 12 Oct 2017 14:27:05 +0200
> @@ -355,7 +360,10 @@ struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu, int map_id)
> err = ptr_ring_init(rcpu->queue, qsize, gfp);
> if (err)
> goto free_queue;
> - rcpu->qsize
This adds two tracepoints to the cpumap: one for the enqueue side,
trace_xdp_cpumap_enqueue(), and one for the kthread dequeue side,
trace_xdp_cpumap_kthread().
To mitigate the tracepoint overhead, these are invoked during the
enqueue/dequeue bulking phases, thus amortizing the cost.
The obvious use-cases are for debugging and monitoring.