On 9/23/24 9:05 AM, Richard Biener wrote:
On Sat, Sep 21, 2024 at 2:49 AM Jason Merrill <ja...@redhat.com> wrote:

Tested x86_64-pc-linux-gnu.  OK for trunk?

-- 8< --

We've been using -Wno-narrowing since gcc 4.7, but at this point narrowing
diagnostics seem like a stable part of C++ and we should adjust.

This patch changes -Wno-narrowing to -Wno-error=narrowing so that narrowing
issues will still not break bootstrap, but we can see them.

The rest of the patch fixes the narrowing warnings I see in an
x86_64-pc-linux-gnu bootstrap.  In most cases this is done by adjusting the
types of various declarations so that we store the values in the same types
we compute them in, which seems worthwhile anyway.  This also allowed us to
remove a few -Wsign-compare casts.
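For illustration (a self-contained sketch, not code from the patch), this is
the kind of change that removes such a cast: keep the index in the unsigned
type the bound is computed in rather than in int.

  #include <cstddef>
  #include <vector>

  unsigned int
  count_positive (const std::vector<int> &v)
  {
    unsigned int n = 0;
    /* If i were an int, comparing it against the unsigned v.size () would
       want an (int) cast to silence -Wsign-compare; keeping i in the type
       the bound is computed in makes the cast unnecessary.  */
    for (std::size_t i = 0; i < v.size (); ++i)
      if (v[i] > 0)
        ++n;
    return n;
  }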

The one place I didn't see how best to do this was in
vect_prologue_cost_for_slp: changing const_nunits to unsigned int broke the
call to TYPE_VECTOR_SUBPARTS (vectype).is_constant (&const_nunits), since
poly_uint64::is_constant wants a pointer to unsigned HOST_WIDE_INT.  So I
added casts in that one place.  Not too bad, I think.
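As a self-contained sketch of that mismatch (stand-in types and names, not
the actual GCC declarations): an is_constant-style out-parameter pins the
caller's variable to the wide type, so the narrower uses then need casts.

  typedef unsigned long long hwi_t;   /* stand-in for unsigned HOST_WIDE_INT */

  /* Stand-in for poly_uint64::is_constant, whose out-parameter fixes the
     caller's variable to the wide type.  */
  static bool
  is_constant_stub (hwi_t value, hwi_t *out)
  {
    *out = value;
    return true;
  }

  void
  example ()
  {
    hwi_t const_nunits;               /* must match the pointer parameter */
    if (!is_constant_stub (4, &const_nunits))
      return;
    /* Places that want unsigned int, such as the { } initializers in
       vect_prologue_cost_for_slp, then need an explicit cast.  */
    unsigned int lanes = (unsigned int) const_nunits;
    (void) lanes;
  }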

+       unsigned HOST_WIDE_INT foff = bitpos_of_field (field);

Can you make bitpos_of_field return unsigned HOST_WIDE_INT then and adjust it
accordingly?  It checks whether the offsets fit a shwi, but a negative
DECL_FIELD_OFFSET or DECL_FIELD_BIT_OFFSET is not a thing.

So, like the attached?

@@ -7471,7 +7471,8 @@ vect_prologue_cost_for_slp (slp_tree node,
        nelt_limit = const_nunits;
        hash_set<vect_scalar_ops_slice_hash> vector_ops;
        for (unsigned int i = 0; i < SLP_TREE_NUMBER_OF_VEC_STMTS (node); ++i)
-       if (!vector_ops.add ({ ops, i * const_nunits, const_nunits }))

So why do we diagnose this case (unsigned int member) but not ...

+       if (!vector_ops.add
+           ({ ops, i * (unsigned)const_nunits, (unsigned)const_nunits }))
           starts.quick_push (i * const_nunits);

... this one - unsigned int function argument?

Because the former is in { }, and the latter isn't; narrowing conversions are only ill-formed within { }.
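A minimal self-contained example of that rule (the types and names are made
up for illustration):

  struct slice { unsigned int offset; };

  static void take (unsigned int) {}

  void
  demo (unsigned long long n)
  {
    take (n);                        /* plain argument: implicit conversion,
                                        no narrowing diagnostic */
    slice a = { n };                 /* aggregate member inside { }: diagnosed
                                        as a narrowing conversion (-Wnarrowing) */
    slice b = { (unsigned int) n };  /* an explicit cast sidesteps the check */
    (void) a; (void) b;
  }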

I think it would be slightly better to do

         {
            unsigned start = (unsigned) const_nunits * i;
            if (!vector_ops.add ({ ops, start, const_nunits }))
              starts.quick_push (start);
         }

to avoid the non-obvious difference between both.

We'd still need the cast for the third element, but now I notice we can use nelt_limit instead, since it was just assigned the same value.

So, OK with this supplemental patch?
From 8c4b43c76d07b5e1638123588bda7196740c0211 Mon Sep 17 00:00:00 2001
From: Jason Merrill <ja...@redhat.com>
Date: Mon, 23 Sep 2024 08:49:13 -0400
Subject: [PATCH] gcc: narrowing tweaks
To: gcc-patches@gcc.gnu.org

gcc/ChangeLog:

	* tree-ssa-structalias.cc (bitpos_of_field): Return unsigned.
	* tree-vect-slp.cc (vect_prologue_cost_for_slp): Use nelt_limit
	instead of casts.
---
 gcc/tree-ssa-structalias.cc | 10 +++++-----
 gcc/tree-vect-slp.cc        |  5 ++---
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/gcc/tree-ssa-structalias.cc b/gcc/tree-ssa-structalias.cc
index 7adb50a896d..d6a53f801f0 100644
--- a/gcc/tree-ssa-structalias.cc
+++ b/gcc/tree-ssa-structalias.cc
@@ -3220,15 +3220,15 @@ process_constraint (constraint_t t)
 /* Return the position, in bits, of FIELD_DECL from the beginning of its
    structure.  */
 
-static HOST_WIDE_INT
+static unsigned HOST_WIDE_INT
 bitpos_of_field (const tree fdecl)
 {
-  if (!tree_fits_shwi_p (DECL_FIELD_OFFSET (fdecl))
-      || !tree_fits_shwi_p (DECL_FIELD_BIT_OFFSET (fdecl)))
+  if (!tree_fits_uhwi_p (DECL_FIELD_OFFSET (fdecl))
+      || !tree_fits_uhwi_p (DECL_FIELD_BIT_OFFSET (fdecl)))
     return -1;
 
-  return (tree_to_shwi (DECL_FIELD_OFFSET (fdecl)) * BITS_PER_UNIT
-	  + tree_to_shwi (DECL_FIELD_BIT_OFFSET (fdecl)));
+  return (tree_to_uhwi (DECL_FIELD_OFFSET (fdecl)) * BITS_PER_UNIT
+	  + tree_to_uhwi (DECL_FIELD_BIT_OFFSET (fdecl)));
 }
 
 
diff --git a/gcc/tree-vect-slp.cc b/gcc/tree-vect-slp.cc
index bfdc4f7dec9..af137577333 100644
--- a/gcc/tree-vect-slp.cc
+++ b/gcc/tree-vect-slp.cc
@@ -7471,9 +7471,8 @@ vect_prologue_cost_for_slp (slp_tree node,
       nelt_limit = const_nunits;
       hash_set<vect_scalar_ops_slice_hash> vector_ops;
       for (unsigned int i = 0; i < SLP_TREE_NUMBER_OF_VEC_STMTS (node); ++i)
-	if (!vector_ops.add
-	    ({ ops, i * (unsigned)const_nunits, (unsigned)const_nunits }))
-	  starts.quick_push (i * const_nunits);
+	if (!vector_ops.add ({ ops, i * nelt_limit, nelt_limit }))
+	  starts.quick_push (i * nelt_limit);
     }
   else
     {
-- 
2.46.0
