On 28/05/2025 21.46, Mina Almasry wrote:
> On Wed, May 28, 2025 at 2:28 AM Toke Høiland-Jørgensen <t...@redhat.com> wrote:
>> Mina Almasry <almasrym...@google.com> writes:
>>> On Mon, May 26, 2025 at 5:51 AM Toke Høiland-Jørgensen <t...@redhat.com> wrote:
>>>>> Fast path results:
>>>>> no-softirq-page_pool01 Per elem: 11 cycles(tsc) 4.368 ns
>>>>>
>>>>> ptr_ring results:
>>>>> no-softirq-page_pool02 Per elem: 527 cycles(tsc) 195.187 ns
>>>>>
>>>>> slow path results:
>>>>> no-softirq-page_pool03 Per elem: 549 cycles(tsc) 203.466 ns
>>>>> Cc: Jesper Dangaard Brouer <h...@kernel.org>
>>>>> Cc: Ilias Apalodimas <ilias.apalodi...@linaro.org>
>>>>> Cc: Jakub Kicinski <k...@kernel.org>
>>>>> Cc: Toke Høiland-Jørgensen <t...@toke.dk>
>>>>> Signed-off-by: Mina Almasry <almasrym...@google.com>

>>>> Back when you posted the first RFC, Jesper and I chatted about ways to
>>>> avoid the ugly "load module and read the output from dmesg" interface to
>>>> the test.

>>> I agree the existing interface is ugly.

>>>> One idea we came up with was to make the module include only the "inner"
>>>> functions for the benchmark, and expose those to BPF as kfuncs. Then the
>>>> test runner can be a BPF program that runs the tests, collects the data
>>>> and passes it to userspace via maps or a ringbuffer or something. That's
>>>> a nicer and more customisable interface than the printk output. And if
>>>> they're small enough, maybe we could even include the functions into the
>>>> page_pool code itself, instead of in a separate benchmark module?
>>>> WDYT of that idea? :)

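For concreteness, here is a rough sketch of what that split could look like.
Everything named below is hypothetical: the kfunc
bpf_page_pool_bench_fast_path(), the pp_bench_kfuncs id set, the `results`
ring buffer and the choice of BPF_PROG_TYPE_SYSCALL are made up for
illustration; only the page_pool calls themselves are existing API. The
kernel side would keep just the timed inner loop and expose it as a kfunc:

```c
/*
 * SKETCH ONLY: nothing below exists in the tree. The kfunc name, the
 * id-set and the prog type chosen for registration are illustrative;
 * only the page_pool calls are real API. The idea is that the timed
 * inner loop stays in the kernel while a BPF program drives it.
 */
#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/err.h>
#include <linux/module.h>
#include <asm/timex.h>
#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>

__bpf_kfunc_start_defs();

/* Time 'loops' alloc+recycle pairs on a private pool; returns TSC cycles. */
__bpf_kfunc u64 bpf_page_pool_bench_fast_path(u32 loops)
{
	struct page_pool_params params = { .pool_size = 1024 };
	struct page_pool *pp = page_pool_create(&params);
	u64 start, stop;
	u32 i;

	if (IS_ERR(pp))
		return 0;

	start = get_cycles();
	for (i = 0; i < loops; i++) {
		struct page *page = page_pool_alloc_pages(pp, GFP_ATOMIC);

		if (page) /* direct recycle, like the existing fast-path test */
			page_pool_put_full_page(pp, page, true);
	}
	stop = get_cycles();

	page_pool_destroy(pp);
	return stop - start;
}

__bpf_kfunc_end_defs();

BTF_KFUNCS_START(pp_bench_kfuncs)
BTF_ID_FLAGS(func, bpf_page_pool_bench_fast_path, KF_SLEEPABLE)
BTF_KFUNCS_END(pp_bench_kfuncs)

static const struct btf_kfunc_id_set pp_bench_kfunc_set = {
	.owner = THIS_MODULE,
	.set   = &pp_bench_kfuncs,
};

/* from module (or page_pool) init:
 *	register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL, &pp_bench_kfunc_set);
 */
```

A small BPF runner could then call it and hand the raw cycle counts to
userspace through a ring buffer instead of printk:

```c
/*
 * SKETCH ONLY: a matching BPF "test runner" that calls the hypothetical
 * kfunc above and publishes the result via a ring buffer.
 */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern u64 bpf_page_pool_bench_fast_path(u32 loops) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} results SEC(".maps");

SEC("syscall")
int run_pp_bench(void *ctx)
{
	u64 *cycles = bpf_ringbuf_reserve(&results, sizeof(*cycles), 0);

	if (!cycles)
		return 0;

	*cycles = bpf_page_pool_bench_fast_path(1000000);
	bpf_ringbuf_submit(cycles, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Userspace would load the skeleton, trigger run_pp_bench() via
bpf_prog_test_run_opts(), and read the cycle counts from the ring buffer,
which is what would replace parsing dmesg.
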
>>> ...but this sounds like an enormous amount of effort for something that
>>> is a bit ugly but isn't THAT bad. Especially for me: I'm not enough of an
>>> expert to know off the top of my head how to implement what you're
>>> referring to. I'm normally open to spending the time, but this is not
>>> that high on my todo list and I have limited bandwidth to resolve it :(
>>>
>>> I also feel this is something that could be improved post-merge. I think
>>> it's very beneficial to have it merged in some form that can be improved
>>> later. Byungchul is making a lot of changes to this mm code, and it would
>>> be nice to have an easy way to run the benchmark in-tree and maybe even
>>> get automated results from NIPA. If we could agree on an MVP that is
>>> appropriate to merge without too much scope creep, that would be ideal
>>> from my side at least.

>> Right, fair. I guess we can merge it as-is, and then investigate whether
>> we can move it to something BPF-based (or maybe 'perf bench' - Cc acme)
>> later :)

> Thanks for the flexibility. Reviewed-bys and comments welcome.
>
> Additionally, I think a Signed-off-by from Jesper is needed. Since most
> of this code is his, I retained his authorship. Jesper, whenever this
> looks good to you, a Signed-off-by would be great, and I would carry it
> forward to future versions. Changing the authorship to me is also fine
> by me, but I would think you want to retain the credit.

Okay, I think Ilias's comment [1] and ACK convinced me; let's merge this
as-is. We have been asking people to run this benchmark for several years
before accepting patches, and we shouldn't be pointing people to
out-of-tree tests for that.

It is not perfect, but it has served us well for benchmarking over the
last ~10 years (5 years for the page_pool test). It is isolated as a
selftest under tools/testing/selftests/net/bench/page_pool/.

Realistically, we are all too busy to invent a new "perfect" benchmark
for page_pool. That said, I do encourage others with free cycles to
integrate a better benchmark into `perf bench`. Then we can simply
remove this module again.

Signed-off-by: Jesper Dangaard Brouer <h...@kernel.org>

[1] https://lore.kernel.org/all/cac_iwjlmo4xz_+pbacnxpvctmgknbslgyeeks2ptrrepn1u...@mail.gmail.com/

Thanks, Mina, for pushing this forward,
--Jesper