The current algorithm for performing bitwise AND operations predates multi-ranges.  As such, it has difficulty with complex signed operations when the range crosses the boundary between signed and unsigned.

In this case, we are looking at [-INF, 128] & -9, for which the current code basically gives up and returns varying.

We currently perform the operation on each subrange pair, and union the results.  This patch recognizes when a pair crosses the sign boundary, further splits it for the algorithm into the signed and unsigned parts, and unions the results.  In this case, we'll perform

[-INF, -1] & -9 and combine that with [0, 128] & -9 and get a better result without resorting to any other shenanigans.
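
For illustration only, here is a standalone sketch of the idea, not the range-op.cc wi_fold code.  It models the split with plain 32-bit ints: each single-sign half is folded with the classic unsigned interval-AND bounds (the min_and/max_and below follow the well-known Hacker's Delight style routines and stand in for the existing single-sign handling), and the two halves are then unioned.  A sign-crossing signed range does not map to an ordered unsigned interval, which is why the un-split case has to give up.

/* Illustrative sketch only -- not the GCC wide_int implementation.  */

#include <cstdint>
#include <cstdio>

/* Minimum of x & y over x in [a, b], y in [c, d] (unsigned intervals).  */
static uint32_t
min_and (uint32_t a, uint32_t b, uint32_t c, uint32_t d)
{
  for (uint32_t m = 0x80000000u; m != 0; m >>= 1)
    if (~a & ~c & m)
      {
        uint32_t t = (a | m) & -m;
        if (t <= b)
          { a = t; break; }
        t = (c | m) & -m;
        if (t <= d)
          { c = t; break; }
      }
  return a & c;
}

/* Maximum of x & y over x in [a, b], y in [c, d] (unsigned intervals).  */
static uint32_t
max_and (uint32_t a, uint32_t b, uint32_t c, uint32_t d)
{
  for (uint32_t m = 0x80000000u; m != 0; m >>= 1)
    if (b & ~d & m)
      {
        uint32_t t = (b & ~m) | (m - 1);
        if (t >= a)
          { b = t; break; }
      }
    else if (~b & d & m)
      {
        uint32_t t = (d & ~m) | (m - 1);
        if (t >= c)
          { d = t; break; }
      }
  return b & d;
}

struct srange { int32_t lo, hi; };

/* Fold [lo, hi] & mask when the range stays on one side of zero.  Such a
   range maps to an ordered unsigned interval, so the unsigned bounds above
   apply directly and the results can be reinterpreted as signed.  */
static srange
fold_one_sign (int32_t lo, int32_t hi, int32_t mask)
{
  uint32_t a = (uint32_t) lo, b = (uint32_t) hi;
  uint32_t c = (uint32_t) mask, d = (uint32_t) mask;
  return { (int32_t) min_and (a, b, c, d), (int32_t) max_and (a, b, c, d) };
}

int
main ()
{
  /* The PR's example: [-INF, 128] & -9, which crosses the sign boundary.  */
  srange neg = fold_one_sign (INT32_MIN, -1, -9);   /* [-INF, -1] & -9  */
  srange pos = fold_one_sign (0, 128, -9);          /* [0, 128] & -9    */

  /* Union the two halves (here just the hull of the endpoints).  */
  srange u = { neg.lo < pos.lo ? neg.lo : pos.lo,
               neg.hi > pos.hi ? neg.hi : pos.hi };

  /* Prints [-2147483648, 128]: tight enough to show (x & -9) != 258.  */
  printf ("[%d, %d]\n", (int) u.lo, (int) u.hi);
  return 0;
}

Compiled as C++ this prints [-2147483648, 128], and since 258 lies outside that range the comparison in the testcase below folds to 0.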

Bootstrapped on x86_64-pc-linux-gnu with no regressions.

Pushed.

Andrew

From 5b18ba140733146243b79346fca3de0b989a9d92 Mon Sep 17 00:00:00 2001
From: Andrew MacLeod <[email protected]>
Date: Thu, 23 Oct 2025 17:19:02 -0400
Subject: [PATCH 2/2] Split signed bitwise AND operations.

The algorithm for bitwise AND struggles with complex signed operations
which cross the signed/unsigned barrier.  When this happens, process it
as 2 separate ranges [LB, -1] and [0, UB], and combine the results.

	PR tree-optimization/114725
	gcc/
	* range-op.cc (operator_bitwise_and::wi_fold): Split signed
	operations crossing zero into 2 operations.

	gcc/testsuite/
	* gcc.dg/pr114725.c: New.
---
 gcc/range-op.cc                 | 16 ++++++++++++++++
 gcc/testsuite/gcc.dg/pr114725.c | 15 +++++++++++++++
 2 files changed, 31 insertions(+)
 create mode 100644 gcc/testsuite/gcc.dg/pr114725.c

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 762fd349e5f..6b6bf78cb2f 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -3522,6 +3522,22 @@ operator_bitwise_and::wi_fold (irange &r, tree type,
 			       const wide_int &rh_lb,
 			       const wide_int &rh_ub) const
 {
+  // The AND algorithm does not handle complex signed operations well.
+  // If a signed range crosses the boundary between signed and unsigned,
+  // process it as 2 ranges and union the results.
+  if (TYPE_SIGN (type) == SIGNED
+      && wi::neg_p (lh_lb, SIGNED) != wi::neg_p (lh_ub, SIGNED))
+    {
+      int prec = TYPE_PRECISION (type);
+      int_range_max tmp;
+      // Process [lh_lb, -1]
+      wi_fold (tmp, type, lh_lb, wi::minus_one (prec), rh_lb, rh_ub);
+      // Now process [0, lh_ub]
+      wi_fold (r, type, wi::zero (prec), lh_ub, rh_lb, rh_ub);
+      r.union_ (tmp);
+      return;
+    }
+
   if (wi_optimize_and_or (r, BIT_AND_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
     return;
 
diff --git a/gcc/testsuite/gcc.dg/pr114725.c b/gcc/testsuite/gcc.dg/pr114725.c
new file mode 100644
index 00000000000..01c31399ba8
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr114725.c
@@ -0,0 +1,15 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+/* This should fold to return 0.  */
+bool
+src(int offset)
+{
+  if (offset > 128)
+    return 0;
+  else
+    return (offset & -9) == 258;
+}
+
+/* { dg-final { scan-tree-dump "return 0"  "optimized" } } */
+/* { dg-final { scan-tree-dump-not "PHI"  "optimized" } } */
-- 
2.45.0
