VLA has been a known issue for a long time.
GCC doesn't do much CSE optimization for VLA vectors, and it would be
a big piece of work to investigate what's going on.
I think most of the CSE opportunities for precomputed results come from VLS loops. So as long as we do a good job on the cost model and pick an appropriate VLS loop, it's not a big issue and not a high priority for me.
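
To illustrate the kind of case I mean (a hedged sketch, not the actual test from this patch): with a fixed-length VLS vectorization and a known trip count, GCC can fully unroll and precompute the result, so the loop disappears entirely. With VLA (length-agnostic) vectors the number of vector iterations isn't known at compile time, so the loop survives and there is no fixed sequence for CSE/folding to collapse.

/* Hypothetical example: with VLS vectorization and a known trip
   count, both loops can be unrolled and the whole function folded
   to a constant.  With VLA code the vector iteration count is
   unknown at compile time, so the loops remain.  */
int sum16 (void)
{
  int a[16], s = 0;
  for (int i = 0; i < 16; i++)
    a[i] = i;
  for (int i = 0; i < 16; i++)
    s += a[i];
  return s;  /* Ideally folded to the constant 120.  */
}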
---- Replied Message ----
From: Robin Dapp <rdapp....@gmail.com>
Date: 01/12/2024 18:10
To: Juzhe-Zhong <juzhe.zh...@rivai.ai>, gcc-patches@gcc.gnu.org
Cc: rdapp....@gmail.com, kito.ch...@gmail.com, kito.ch...@sifive.com, jeffreya...@gmail.com
Subject: Re: [PATCH V3] RISC-V: Adjust scalar_to_vec cost
> Tested on both RV32/RV64 no regression, Ok for trunk ?
Yes, thanks!
Btw out of curiosity, did you see why we actually fail to
optimize away the VLA loop? We should open a bug for that
I suppose.
Regards
Robin