Of course, one needs the patch...

On 10/28/20 10:04 PM, Andrew MacLeod wrote:
The ranger currently handles "inferred" non-nulls with a special cache.

Inferred ranges are things like:

a_2 = *b_1

The dereference of b_1 implies that any use of b_1 after that statement is non-null (or a trap would have happened). The problem is that most ranges are calculated as the result of a statement, not at a use. To deal with that, I created a mini cache for non-nulls.
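To make that concrete, here is a standalone C++ toy example (not from the patch; the function name is made up):

  // Once *p executes without trapping, every later use of p can
  // assume p != nullptr, so ranger can fold the explicit test away.
  int
  use_after_deref (int *p)
  {
    int a = *p;          // infers p is non-null from here on
    if (p == nullptr)    // foldable to false; branch becomes dead
      return -1;
    return a;
  }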

The first time a request is made for a pointer's range, a quick visit is made to each of its uses to see whether that use infers the non-null property, and a bit is set for the block the use occurs in. When calculating ranges, this bitmap for the name is consulted during range propagation to see whether a range is actually non-null.
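As a loose standalone model of that cache (made-up names and types, not the actual code in gimple-range-cache.cc):

  #include <string>
  #include <unordered_map>
  #include <vector>

  // One bit per basic block, per pointer name, recording the blocks
  // in which some use of the name infers non-null.
  struct toy_stmt
  {
    int bb_index;   // basic block holding the use
    bool derefs;    // stand-in for an infer_nonnull_range () check
  };

  class nonnull_cache
  {
    std::unordered_map<std::string, std::vector<bool>> m_nn;

  public:
    // Populate the bitmap for NAME on the first request.
    void process_name (const std::string &name,
                       const std::vector<toy_stmt> &uses, int nblocks)
    {
      std::vector<bool> b (nblocks, false);
      for (const toy_stmt &s : uses)
        if (s.derefs)
          b[s.bb_index] = true;
      m_nn[name] = b;
    }

    // Consulted during propagation: is NAME known non-null in BB?
    bool nonnull_in_block (const std::string &name, int bb) const
    {
      auto it = m_nn.find (name);
      return it != m_nn.end () && it->second[bb];
    }
  };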


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97609 is trapping because of an interesting sequence of events: a query is made in-flight against a stmt that is in the middle of being adjusted, and, well, boom, a PR happens.

The routine I was calling, infer_value_range(), was doing more work than the cache requires, and some of that extra work was looking closely at the stmt... and the stmt wasn't valid gimple at the time. It turns out the cache doesn't need the extra work, and it's actually easier to simply call the routine we really want directly.

On a side note, going into the next stage 1, I plan to replace this approach with a more generalized solution that has been briefly mentioned before. We can add another API entry point to range-ops and the ranger to deal with side effects. Currently we can ask for:
1) the range ON (before) a stmt is executed  (range_of_expr)
2) the range ON an edge
3) the range OF a stmt (the result of the stmt)

I want to add a more generalized version of 3, which would be the range AFTER a stmt and works for things other than the result of the stmt:

4) range_after_stmt, which would include things like the dereference:

a_2 = *b_1      // range after would provide  b_1 = [1, +INF]
d_4 = f_4 / g_8    // range after would provide g_8 = ~[0,0]
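A purely speculative sketch of that future entry point (only the name range_after_stmt comes from this mail; the types and shape are illustrative, not the ranger API):

  #include <cstdint>
  #include <optional>

  // A statement can impose a range on an *operand* once the
  // statement has executed without trapping or faulting.
  struct toy_range
  {
    int64_t lo, hi;
    bool excludes_zero;   // crude stand-in for the anti-range ~[0,0]
  };

  enum class stmt_kind { DEREF_RHS, DIVIDE_BY };

  std::optional<toy_range>
  range_after_stmt (stmt_kind k)
  {
    switch (k)
      {
      case stmt_kind::DEREF_RHS:  // a_2 = *b_1      => b_1 = [1, +INF]
        return toy_range { 1, INT64_MAX, true };
      case stmt_kind::DIVIDE_BY:  // d_4 = f_4 / g_8 => g_8 = ~[0,0]
        return toy_range { INT64_MIN, INT64_MAX, true };
      }
    return std::nullopt;
  }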

Regardless... that's months away. For now, this patch avoids calling the things that don't need to be called and bypasses the erroneous in-flight analysis of the stmt.

Bootstrapped on x86_64-pc-linux-gnu, no regressions, pushed.
Andrew




commit 0162d00d12be24ee3f02ce876adafeaa91c6f7f9
Author: Andrew MacLeod <amacl...@redhat.com>
Date:   Wed Oct 28 16:41:15 2020 -0400

    Call infer_non_null() directly when checking for non-null.
    
    Simply call infer_non_null directly and avoid unnecessary checks of
    the statement being modified.
    
            gcc/
            PR tree-optimization/97609
            * gimple-range-cache.cc (non_null_ref::process_name): Call
            infer_nonnull_range directly instead of infer_value_range.
            gcc/testsuite/
            * g++.dg/pr97609.C: New.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 13b9933cc01..bc9243c1279 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -91,19 +91,15 @@ non_null_ref::process_name (tree name)
     {
       gimple *s = USE_STMT (use_p);
       unsigned index = gimple_bb (s)->index;
-      tree value;
-      enum tree_code comp_code;
 
       // If bit is already set for this block, dont bother looking again.
       if (bitmap_bit_p (b, index))
 	continue;
 
-      // If we can infer a != 0 range, then set the bit for this BB
-      if (infer_value_range (s, name, &comp_code, &value))
-	{
-	  if (comp_code == NE_EXPR && integer_zerop (value))
-	    bitmap_set_bit (b, index);
-	}
+      // If we can infer a nonnull range, then set the bit for this BB
+      if (!SSA_NAME_OCCURS_IN_ABNORMAL_PHI (name)
+	  && infer_nonnull_range (s, name))
+	bitmap_set_bit (b, index);
     }
 
   m_nn[v] = b;
diff --git a/gcc/testsuite/g++.dg/pr97609.C b/gcc/testsuite/g++.dg/pr97609.C
new file mode 100644
index 00000000000..8e582c9ad49
--- /dev/null
+++ b/gcc/testsuite/g++.dg/pr97609.C
@@ -0,0 +1,46 @@
+// PR tree-optimization/97609
+// { dg-do compile { target c++11 } }
+// { dg-options "-O2 -fno-tree-fre -fnon-call-exceptions" }
+
+struct _Fwd_list_node_base {
+  _Fwd_list_node_base *_M_next;
+  void _M_transfer_after() { _Fwd_list_node_base *__keep = _M_next = __keep; }
+};
+struct _Fwd_list_const_iterator {
+  _Fwd_list_const_iterator(_Fwd_list_node_base *__n) : _M_node(__n) {}
+  _Fwd_list_const_iterator(int);
+  _Fwd_list_node_base *_M_node;
+};
+template <typename, typename> struct forward_list {
+  _Fwd_list_node_base _M_head;
+  template <typename _InputIterator>
+  forward_list(_InputIterator, _InputIterator);
+  forward_list(int);
+  _Fwd_list_const_iterator cbefore_begin() { return &_M_head; }
+  void splice_after(_Fwd_list_const_iterator) noexcept;
+  void splice_after(_Fwd_list_const_iterator __pos, forward_list &) {
+    splice_after(__pos, 0);
+  }
+  using __remove_return_type = void;
+  __remove_return_type unique() { unique(0); }
+  template <typename _BinPred> __remove_return_type unique(_BinPred);
+};
+template <typename _Tp, typename _Alloc>
+void forward_list<_Tp, _Alloc>::splice_after(_Fwd_list_const_iterator __pos)
+  noexcept {
+  __pos._M_node->_M_transfer_after();
+}
+template <typename _Tp, typename _Alloc>
+template <typename _BinPred>
+auto forward_list<_Tp, _Alloc>::unique(_BinPred) -> __remove_return_type {
+  forward_list __to_destroy(0);
+  splice_after(__to_destroy.cbefore_begin());
+}
+
+void
+foo ()
+{
+  forward_list<int, int> c1 (0, 0);
+  c1.unique ();
+}
+
