Jakub Jelinek <ja...@redhat.com> writes:
> +	  /* Can't use get_compute_type here, as supportable_convert_operation
> +	     doesn't necessarily use an optab and needs two arguments.  */
> +	  tree vector_compute_type
> +	    = type_for_widest_vector_mode (TREE_TYPE (arg_type), mov_optab);
> +	  unsigned HOST_WIDE_INT nelts;
> +	  if (vector_compute_type
> +	      && VECTOR_MODE_P (TYPE_MODE (vector_compute_type))
> +	      && subparts_gt (arg_type, vector_compute_type)
> +	      && TYPE_VECTOR_SUBPARTS (vector_compute_type).is_constant (&nelts))
> +	    {
> +	      while (nelts > 1)
> +		{
> +		  tree ret1_type = build_vector_type (TREE_TYPE (ret_type), nelts);
> +		  tree arg1_type = build_vector_type (TREE_TYPE (arg_type), nelts);
> +		  if (supportable_convert_operation (code, ret1_type, arg1_type,
> +						     &decl, &code1))
> +		    {
> +		      new_rhs = expand_vector_piecewise (gsi, do_vec_conversion,
> +							 ret_type, arg1_type, arg,
> +							 decl, code1);
> +		      g = gimple_build_assign (lhs, new_rhs);
> +		      gsi_replace (gsi, g, false);
> +		      return;
> +		    }
> +		  nelts = nelts / 2;
> +		}
> +	    }
I think for this it would be better to use:

  if (vector_compute_type
      && VECTOR_MODE_P (TYPE_MODE (vector_compute_type))
      && subparts_gt (arg_type, vector_compute_type))
    {
      unsigned HOST_WIDE_INT nelts
	= constant_lower_bound (TYPE_VECTOR_SUBPARTS (vector_compute_type));

since the loop is self-checking.  E.g. this will make the Advanced SIMD
handling on AArch64 the same regardless of whether SVE is also enabled.

Thanks,
Richard