On Fri, 30 Aug 2024 12:15:36 GMT, Per Minborg <pminb...@openjdk.org> wrote:
>> @minborg Hi! I didn't check the numbers with the benchmark I've written at
>> https://github.com/openjdk/jdk/pull/20712#discussion_r1732802685 which is
>> meant to stress the branch predictor (without enough `samples` i.e. past
>> 128K on my machine) - can you give it a shot with M1 🙏 ?
>
> @franz1981 Here is what I get if I run your performance test on my M1 Mac
> (unfortunately no -perf data):
>
> Base
> Benchmark                         (samples)  (shuffle)  Mode  Cnt        Score        Error  Units
> TestBranchFill.heap_segment_fill       1024      false  avgt   30    58597.625 ±   1871.313  ns/op
> TestBranchFill.heap_segment_fill       1024       true  avgt   30    64309.859 ±   1164.360  ns/op
> TestBranchFill.heap_segment_fill     128000      false  avgt   30  7136796.445 ± 152120.060  ns/op
> TestBranchFill.heap_segment_fill     128000       true  avgt   30  7908474.120 ±  49184.950  ns/op
>
> Patch
> Benchmark                         (samples)  (shuffle)  Mode  Cnt        Score        Error  Units
> TestBranchFill.heap_segment_fill       1024      false  avgt   30     3695.815 ±     24.615  ns/op
> TestBranchFill.heap_segment_fill       1024       true  avgt   30     3938.582 ±    124.510  ns/op
> TestBranchFill.heap_segment_fill     128000      false  avgt   30   420845.301 ±   1605.080  ns/op
> TestBranchFill.heap_segment_fill     128000       true  avgt   30  1778362.506 ±  39250.756  ns/op
>
> Thanks @minborg for running it - so it seems that 128K, despite the additional call
> (due to not inlining something), makes nuking the pipeline of M1 a severe
> affair:

If I understand correctly, this benchmark calls `fill` with different segment sizes (in a loop) - correct? It's understandable that, in this case, we can't optimize as well, because different branches get taken (or not) in a less predictable fashion.

The important question (for this PR) is: does the work proposed here cause a _regression_ in the case you have in mind? E.g., is the `setMemory` intrinsic better than the branchy logic we have here?

-------------

PR Comment: https://git.openjdk.org/jdk/pull/20712#issuecomment-2321596145
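For context, the shape of benchmark being discussed is roughly the following: fill a large array of heap segments whose sizes vary, optionally shuffling the sizes so that the size-dependent branches inside `fill` are taken in a hard-to-predict order. The sketch below is only illustrative - the class and field names, the 0-63 byte size range, and the shuffle seed are assumptions made here, not the actual `TestBranchFill` code linked in the quoted discussion:

```java
// Hypothetical sketch (NOT the actual TestBranchFill benchmark): fill many heap
// segments whose sizes differ, so the size-dependent branches inside
// MemorySegment::fill are exercised in a hard-to-predict order.
import java.lang.foreign.MemorySegment;
import java.util.Arrays;
import java.util.Collections;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
@Fork(1)
public class FillBranchSketch {

    @Param({"1024", "128000"})
    int samples;        // number of segments filled per benchmark invocation

    @Param({"false", "true"})
    boolean shuffle;    // shuffled sizes make the per-size branches unpredictable

    MemorySegment[] segments;

    @Setup
    public void setup() {
        // Sizes cycle through 0..63 bytes so the small-size paths in fill are hit.
        Integer[] sizes = new Integer[samples];
        for (int i = 0; i < samples; i++) {
            sizes[i] = i % 64;
        }
        if (shuffle) {
            // Randomize the order so consecutive fills land in different size classes.
            Collections.shuffle(Arrays.asList(sizes), new Random(42));
        }
        segments = new MemorySegment[samples];
        for (int i = 0; i < samples; i++) {
            segments[i] = MemorySegment.ofArray(new byte[sizes[i]]);
        }
    }

    @Benchmark
    public void heap_segment_fill() {
        for (MemorySegment segment : segments) {
            segment.fill((byte) 0);
        }
    }
}
```

With `shuffle=false` the sizes repeat in a regular order and the branch predictor copes; with `shuffle=true` consecutive fills take different paths, which is the case where a branchy `fill` and an intrinsified `setMemory` would be expected to diverge the most.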