On Wed, Oct 19, 2022 at 5:18 AM Kewen.Lin <li...@linux.ibm.com> wrote:
>
> Hi,
>
> As PR107240 shows, when both the value to be shifted and the
> count used for shifting are constants, it doesn't actually
> require a target to support vector shift operations.
>
> This patch tries fold_build2 for the generation of the
> shifted value first; if it's folded, the shift is gone,
> otherwise it's the same as before.
>
> It can help make the failures of vect-bitfield-write-{2,3}.c
> go away on Power.
>
> Bootstrapped and regtested on x86_64-redhat-linux,
> aarch64-linux-gnu and powerpc64{,le}-linux-gnu.
>
> Is it ok for trunk?
>
> BR,
> Kewen
> -----
> 	PR tree-optimization/107240
>
> gcc/ChangeLog:
>
> 	* tree-vect-patterns.cc (vect_recog_bit_insert_pattern): Attempt to
> 	fold shifted value.
> ---
>  gcc/tree-vect-patterns.cc | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
> index 6afd57a50c4..3beda774ec3 100644
> --- a/gcc/tree-vect-patterns.cc
> +++ b/gcc/tree-vect-patterns.cc
> @@ -2098,9 +2098,11 @@ vect_recog_bit_insert_pattern (vec_info *vinfo, stmt_vec_info stmt_info,
>    tree shifted = value;
>    if (shift_n)
>      {
> +      tree shifted_tmp
> +	= fold_build2 (LSHIFT_EXPR, container_type, value, shift);
>        pattern_stmt
> 	= gimple_build_assign (vect_recog_temp_ssa_var (container_type),
> -			       LSHIFT_EXPR, value, shift);
> +			       shifted_tmp);
The canonical way would be to use

  gimple_seq stmts = NULL;
  shifted = gimple_build (&stmts, LSHIFT_EXPR, container_type, value, shift);
  if (!gimple_seq_empty_p (stmts))
    append_pattern_def_seq (vinfo, stmt_info,
			    gimple_seq_first_stmt (stmts));

That also avoids the spurious

  val = constant;

with your patch.

OK if that works.

thanks,
Richard.

>        append_pattern_def_seq (vinfo, stmt_info, pattern_stmt);
>        shifted = gimple_get_lhs (pattern_stmt);
>      }
> --
> 2.27.0