https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113572
Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rsandifo at gcc dot gnu.org

--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
vect_all_same before r14-5880-g31e9074977bb7de83fa5d28d286323987d5d87f2
wouldn't ICE on this; it did

  int nelts = vector_cst_encoded_nelts (v);
  tree first_el = VECTOR_CST_ENCODED_ELT (v, 0);
  for (i = 0; i < nelts; i += step)
    if (!operand_equal_p (VECTOR_CST_ENCODED_ELT (v, i), first_el, 0))
      return false;

and so i was never higher than vector_cst_encoded_nelts.  But the current,
generalized vector_cst_all_same doesn't guarantee that:

  unsigned int lcm = least_common_multiple (step, VECTOR_CST_NPATTERNS (v));
  unsigned int nelts = lcm * VECTOR_CST_NELTS_PER_PATTERN (v);
  tree first_el = VECTOR_CST_ENCODED_ELT (v, 0);
  for (unsigned int i = 0; i < nelts; i += step)
    if (!operand_equal_p (VECTOR_CST_ENCODED_ELT (v, i), first_el, 0))
      return false;

because the least_common_multiple of step and VECTOR_CST_NPATTERNS will often
be higher than VECTOR_CST_NPATTERNS (unless VECTOR_CST_NPATTERNS is a multiple
of step), so nelts can exceed the number of encoded elements.  So, if that
part is right, I think we want to use VECTOR_CST_ELT instead of
VECTOR_CST_ENCODED_ELT, like:

--- gcc/config/aarch64/aarch64-sve-builtins.cc.jj	2024-01-12 13:47:20.815429012 +0100
+++ gcc/config/aarch64/aarch64-sve-builtins.cc	2024-01-24 20:58:33.720677634 +0100
@@ -3474,7 +3474,7 @@ vector_cst_all_same (tree v, unsigned in
   unsigned int nelts = lcm * VECTOR_CST_NELTS_PER_PATTERN (v);
   tree first_el = VECTOR_CST_ENCODED_ELT (v, 0);
   for (unsigned int i = 0; i < nelts; i += step)
-    if (!operand_equal_p (VECTOR_CST_ENCODED_ELT (v, i), first_el, 0))
+    if (!operand_equal_p (VECTOR_CST_ELT (v, i), first_el, 0))
       return false;
   return true;

which fixes the ICE.  On the testcase, VECTOR_CST_NELTS_PER_PATTERN is 2,
VECTOR_CST_NPATTERNS is 1 and step is 8, so lcm is 8 and nelts is 16, while
the vector has only 2 encoded elts.
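
For illustration, a minimal standalone sketch (not GCC code; the constants
just mirror the testcase values quoted above) of why the loop bound overshoots
the encoding: the encoding only stores
VECTOR_CST_NPATTERNS * VECTOR_CST_NELTS_PER_PATTERN elements, while the loop
runs up to lcm (step, NPATTERNS) * NELTS_PER_PATTERN.

  /* Standalone sketch, needs -std=c++17 for std::lcm.  Models the failing
     testcase: NPATTERNS = 1, NELTS_PER_PATTERN = 2, step = 8.  */
  #include <cstdio>
  #include <numeric>

  int
  main ()
  {
    unsigned int npatterns = 1, nelts_per_pattern = 2, step = 8;
    /* Number of elements actually stored in the encoding.  */
    unsigned int encoded_nelts = npatterns * nelts_per_pattern;   /* 2 */
    /* Loop bound used by the generalized vector_cst_all_same.  */
    unsigned int lcm = std::lcm (step, npatterns);                /* 8 */
    unsigned int nelts = lcm * nelts_per_pattern;                 /* 16 */
    for (unsigned int i = 0; i < nelts; i += step)
      std::printf ("i = %2u: %s\n", i,
                   i < encoded_nelts
                   ? "valid index for VECTOR_CST_ENCODED_ELT"
                   : "out of range for the encoding");
    return 0;
  }

So the second iteration asks for encoded element 8 of a constant that only
encodes 2 elements, which is what trips the ICE; VECTOR_CST_ELT instead
computes the element from the encoding, extrapolating the stepped pattern,
so it is fine with such an index.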