Hi,

In function vectorizable_load, one hunk is dedicated to handling VMAT_INVARIANT and returns early, which means the code after that hunk should never encounter a case with memory_access_type VMAT_INVARIANT. This patch cleans up several useless checks on VMAT_INVARIANT in that following code. There should be no functional changes.
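To illustrate the reasoning behind the cleanup, here is a minimal standalone sketch (not GCC code; the enum and function names are made up) of the pattern: once one case is fully handled by an early return, later guards against that case are provably dead and can be dropped.

```cpp
#include <cassert>

// Hypothetical stand-in for GCC's memory access kinds; names are
// illustrative only, not the real vect_memory_access_type values.
enum memory_access_kind { MAT_INVARIANT, MAT_CONTIGUOUS };

// Sketch of the control flow the patch relies on: the invariant case
// is handled up front and the function returns, so code after the
// early return can never observe MAT_INVARIANT.
int process_load (memory_access_kind kind, bool has_mask)
{
  if (kind == MAT_INVARIANT)
    return 1;	/* invariant loads fully handled here; early return */

  /* From this point on, kind != MAT_INVARIANT always holds, so a
     guard like "if (has_mask && kind != MAT_INVARIANT)" simplifies
     to just "if (has_mask)".  */
  assert (kind != MAT_INVARIANT);
  return has_mask ? 2 : 3;
}
```

The two hunks below apply exactly this simplification to the loop_masks and final_len guards.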
Bootstrapped and regtested on x86_64-redhat-linux, aarch64-linux-gnu and powerpc64{,le}-linux-gnu.

gcc/ChangeLog:

	* tree-vect-stmts.cc (vectorizable_load): Remove some useless
	checks on VMAT_INVARIANT.
---
 gcc/tree-vect-stmts.cc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/gcc/tree-vect-stmts.cc b/gcc/tree-vect-stmts.cc
index 89607a98f99..d4e781531fd 100644
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
@@ -10499,7 +10499,7 @@ vectorizable_load (vec_info *vinfo,
       tree bias = NULL_TREE;
       if (!costing_p)
	{
-	  if (loop_masks && memory_access_type != VMAT_INVARIANT)
+	  if (loop_masks)
	    final_mask = vect_get_loop_mask (loop_vinfo, gsi, loop_masks,
					     vec_num * ncopies, vectype,
@@ -10729,7 +10729,7 @@ vectorizable_load (vec_info *vinfo,
	      bias = build_int_cst (intQI_type_node, biasval);
	    }

-	  if (final_len && memory_access_type != VMAT_INVARIANT)
+	  if (final_len)
	    {
	      tree ptr = build_int_cst (ref_type, align * BITS_PER_UNIT);
--
2.31.1