ping
Testing shows that the "32:16" setting for jump alignment has a significant
code-size cost but makes no measurable difference in performance. So set
jump_align to 4, for a 1.6% code-size improvement (a short sketch of the
n:m alignment semantics follows the patch). OK for commit?

ChangeLog
2019-12-24  Wilco Dijkstra  <wdijk...@arm.com>

	* config/aarch64/aarch64.c (neoversen1_tunings): Set jump_align to 4.

--
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 1646ed1d9a3de8ee2f0abff385a1ea145e234475..209ed8ebbe81104d9d8cff0df31946ab7704fb33 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -1132,7 +1132,7 @@ static const struct tune_params neoversen1_tunings =
   3, /* issue_rate */
   (AARCH64_FUSE_AES_AESMC | AARCH64_FUSE_CMP_BRANCH), /* fusible_ops */
   "32:16", /* function_align.  */
-  "32:16", /* jump_align.  */
+  "4", /* jump_align.  */
   "32:16", /* loop_align.  */
   2, /* int_reassoc_width.  */
   4, /* fp_reassoc_width.  */
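
For reference, a minimal standalone sketch of the n:m alignment rule as the
GCC manual describes it for -falign-jumps: pad up to the next n-byte
boundary, but only when that takes at most m-1 bytes. The helper name and
the sample offset below are illustrative, not GCC internals; it just shows
why "4" inserts no padding at all on AArch64 (instructions are already
4-byte aligned), which is where the code-size saving comes from.

#include <stdio.h>

/* Illustrative model of the n:m alignment rule: pad the current offset
   up to an n-byte boundary, but only if that needs at most m-1 bytes;
   otherwise insert no padding.  Not a GCC function.  */
static unsigned
padding_for (unsigned offset, unsigned n, unsigned m)
{
  unsigned pad = (n - offset % n) % n;
  return pad <= m - 1 ? pad : 0;
}

int
main (void)
{
  /* A branch target at offset 20: "32:16" pads it out to offset 32,
     costing 12 bytes; "4" costs nothing, since AArch64 code is always
     4-byte aligned.  We pass m = n for the single-value form "4".  */
  printf ("32:16 -> %u padding bytes\n", padding_for (20, 32, 16));
  printf ("4     -> %u padding bytes\n", padding_for (20, 4, 4));
  return 0;
}

Compiled and run on its own, this prints 12 padding bytes for "32:16" and
0 for "4" at that sample offset.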