On Mon, Sep 23, 2024 at 3:41 PM Jason Merrill <ja...@redhat.com> wrote:
>
> On 9/23/24 9:05 AM, Richard Biener wrote:
> > On Sat, Sep 21, 2024 at 2:49 AM Jason Merrill <ja...@redhat.com> wrote:
> >>
> >> Tested x86_64-pc-linux-gnu. OK for trunk?
> >>
> >> -- 8< --
> >>
> >> We've been using -Wno-narrowing since gcc 4.7, but at this point narrowing
> >> diagnostics seem like a stable part of C++ and we should adjust.
> >>
> >> This patch changes -Wno-narrowing to -Wno-error=narrowing so that narrowing
> >> issues will still not break bootstrap, but we can see them.
> >>
> >> The rest of the patch fixes the narrowing warnings I see in an
> >> x86_64-pc-linux-gnu bootstrap.  In most cases this is done by adjusting
> >> the types of various declarations so that we store the values in the same
> >> types we compute them in, which seems worthwhile anyway.  This also
> >> allowed us to remove a few -Wsign-compare casts.
> >>
> >> The one place I didn't see how best to do this was in
> >> vect_prologue_cost_for_slp: changing const_nunits to unsigned int broke the
> >> call to TYPE_VECTOR_SUBPARTS (vectype).is_constant (&const_nunits), since
> >> poly_uint64.is_constant wants a pointer to unsigned HOST_WIDE_INT. So I
> >> added casts in that one place. Not too bad, I think.
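> >>
> >> For illustration only (a sketch, not code from the patch), the constraint
> >> is roughly this: is_constant writes through a pointer to the poly_uint64
> >> coefficient type, so the receiving variable has to be unsigned
> >> HOST_WIDE_INT:
> >>
> >>   /* Sketch only.  */
> >>   unsigned HOST_WIDE_INT const_nunits;
> >>   if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&const_nunits))
> >>     return;  /* (whatever the non-constant path does)  */
> >>   /* const_nunits stays unsigned HOST_WIDE_INT, so braced initializers
> >>      built from it need explicit (unsigned) casts.  */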
> >>
> >> + unsigned HOST_WIDE_INT foff = bitpos_of_field (field);
> >
> > Can you make bitpos_of_field return unsigned HOST_WIDE_INT then, and
> > adjust it accordingly?  It currently checks that the values fit a shwi,
> > but negative DECL_FIELD_OFFSET or DECL_FIELD_BIT_OFFSET are not a thing.
>
> So, like the attached?
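>
> (Illustrative sketch of the shape of that change -- not the actual
> attachment, and it glosses over fields whose offsets aren't constant:)
>
>   static unsigned HOST_WIDE_INT
>   bitpos_of_field (const tree fdecl)
>   {
>     /* Field offsets are never negative, so the uhwi variants can replace
>        the shwi ones and the return type can become unsigned.  */
>     gcc_checking_assert (tree_fits_uhwi_p (DECL_FIELD_OFFSET (fdecl))
>                          && tree_fits_uhwi_p (DECL_FIELD_BIT_OFFSET (fdecl)));
>     return (tree_to_uhwi (DECL_FIELD_OFFSET (fdecl)) * BITS_PER_UNIT
>             + tree_to_uhwi (DECL_FIELD_BIT_OFFSET (fdecl)));
>   }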
>
> >> @@ -7471,7 +7471,8 @@ vect_prologue_cost_for_slp (slp_tree node,
> >> nelt_limit = const_nunits;
> >> hash_set<vect_scalar_ops_slice_hash> vector_ops;
> >>        for (unsigned int i = 0; i < SLP_TREE_NUMBER_OF_VEC_STMTS (node); ++i)
> >> - if (!vector_ops.add ({ ops, i * const_nunits, const_nunits }))
> >
> > So why do we diagnose this case (unsigned int member) but not ...
> >
> >> + if (!vector_ops.add
> >> + ({ ops, i * (unsigned)const_nunits, (unsigned)const_nunits }))
> >> starts.quick_push (i * const_nunits);
> >
> > ... this one - unsigned int function argument?
>
> Because the former is in { }, and the latter isn't; narrowing
> conversions are only ill-formed within { }.
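>
> A stand-alone illustration of that rule (made-up names, nothing from the
> patch):
>
>   #include <cstdint>
>
>   struct slice { unsigned int off; unsigned int len; };
>
>   void
>   demo (uint64_t wide)
>   {
>     unsigned int a = wide;    // plain conversion: no narrowing diagnostic
>     unsigned int b { wide };  // narrowing inside { }: ill-formed, -Wnarrowing
>     slice s { b, wide };      // likewise in aggregate initialization
>     (void) a; (void) s;
>   }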
>
> > I think it would be slightly better to do
> >
> > {
> >      unsigned start = (unsigned) const_nunits * i;
> >      if (!vector_ops.add ({ ops, start, const_nunits }))
> > starts.quick_push (start);
> > }
> >
> > to avoid the non-obvious difference between the two.
>
> We'd still need the cast for the third element, but now I notice we can
> use nelt_limit instead since it just got the same value.
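>
> (For illustration, the result looks roughly like this -- a sketch rather
> than the literal supplemental patch, assuming nelt_limit is a plain
> unsigned int that was just assigned const_nunits:)
>
>       for (unsigned int i = 0; i < SLP_TREE_NUMBER_OF_VEC_STMTS (node); ++i)
>         {
>           unsigned int start = i * (unsigned) const_nunits;
>           /* start and nelt_limit are both unsigned int, so nothing narrows
>              inside the braces.  */
>           if (!vector_ops.add ({ ops, start, nelt_limit }))
>             starts.quick_push (start);
>         }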
>
> So, OK with this supplemental patch?
OK.
Thanks,
Richard.