Bootstrapped and regtested on x86_64-pc-linux-gnu.  Does this look
OK for stage 1?

-- >8 --

The PR's testcase demonstrates that the cp_fully_fold calls in
cp_build_binary_op (for diagnosing arithmetic overflow) lead to
quadratic compile-time behavior when building up a large arithmetic
constant expression.  The problem is ultimately that
maybe_constant_value's caching doesn't reuse intermediate values,
unlike cp_fold's.  (And unfortunately we can't leverage the cp_fold
cache at this call site because we want to always evaluate constexpr
calls even at -O0, which cp_fold avoids.)
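
For reference, a hypothetical reduced example (not the PR's actual
testcase) with the shape that triggers the quadratic behavior:

  /* Each '+' goes through cp_build_binary_op, whose cp_fully_fold call
     re-walks the entire subexpression built up so far, so the total
     work is quadratic in the number of terms.  */
  #define T1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
  #define T2 T1 T1 T1 T1 T1 T1 T1 T1 T1 T1
  #define T3 T2 T2 T2 T2 T2 T2 T2 T2 T2 T2
  const int x = 0 T3 T3 T3;  /* ~3000 terms */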

This patch fixes the quadratic behavior by making maybe_constant_value
first look up each operand of the given expression to see if we've
previously reduced it, and if so, rebuild the expression using the
(presumably) reduced operands and evaluate that.  After this patch,
each version of the testcase from the PR compiles in ~0.2s on my
machine.
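
The mechanism is essentially memoization over the expression tree.  As
a minimal self-contained sketch of the idea (toy types, not GCC code;
the actual patch keys the lookups off cv_cache, as in the diff below):

  #include <memory>
  #include <unordered_map>

  struct Expr {
    int leaf_value = 0;
    std::shared_ptr<Expr> lhs, rhs;  /* both null for a leaf */
  };

  /* Maps a node to its reduced value; plays the role of cv_cache.  */
  static std::unordered_map<const Expr *, int> cache;

  int reduce (const std::shared_ptr<Expr> &e)
  {
    if (!e->lhs)
      return e->leaf_value;
    if (auto it = cache.find (e.get ()); it != cache.end ())
      return it->second;  /* previously reduced: reuse the result */
    int r = reduce (e->lhs) + reduce (e->rhs);
    cache.emplace (e.get (), r);
    return r;
  }

With this, reducing a newly built node whose operands were already
reduced costs O(1) lookups per operand rather than a re-walk of the
whole subtree, which is what flattens the quadratic blowup.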

        PR c++/118340

gcc/cp/ChangeLog:

        * constexpr.cc (maybe_constant_value): First try looking up each
        operand in the cv_cache and reusing the result.
---
 gcc/cp/constexpr.cc | 33 ++++++++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/gcc/cp/constexpr.cc b/gcc/cp/constexpr.cc
index 58bed6f8965..390a766da13 100644
--- a/gcc/cp/constexpr.cc
+++ b/gcc/cp/constexpr.cc
@@ -9271,8 +9271,35 @@ tree
 maybe_constant_value (tree t, tree decl /* = NULL_TREE */,
                      mce_value manifestly_const_eval /* = mce_unknown */)
 {
+  tree orig_t = t;
   tree r;
 
+  if (EXPR_P (t) && manifestly_const_eval == mce_unknown)
+    {
+      /* Look up each operand in the cv_cache first to see if we've already
+        reduced it, and reuse that result to avoid quadratic behavior if
+        we're called when building up a large expression.  */
+      int n = cp_tree_operand_length (t);
+      tree *ops = XALLOCAVEC (tree, n);
+      bool rebuild = false;
+      for (int i = 0; i < n; ++i)
+       {
+         ops[i] = TREE_OPERAND (t, i);
+         if (tree *cached = hash_map_safe_get (cv_cache, ops[i]))
+           if (*cached != ops[i])
+             {
+               ops[i] = *cached;
+               rebuild = true;
+             }
+       }
+      if (rebuild)
+       {
+         t = copy_node (t);
+         for (int i = 0; i < n; ++i)
+           TREE_OPERAND (t, i) = ops[i];
+       }
+    }
+
   if (!is_nondependent_constant_expression (t))
     {
       if (TREE_OVERFLOW_P (t)
@@ -9290,6 +9317,10 @@ maybe_constant_value (tree t, tree decl /* = NULL_TREE */,
     return fold_to_constant (t);
 
   if (manifestly_const_eval != mce_unknown)
+    /* TODO: Extend the cache to be mce_value aware.  And if we have a
+       previously cached mce_unknown result that's TREE_CONSTANT, it means
+       the reduced value is independent of mce_value and so we should
+       be able to reuse it in the mce_true/false case.  */
     return cxx_eval_outermost_constant_expr (t, true, true,
                                             manifestly_const_eval, false, decl);
 
@@ -9319,7 +9350,7 @@ maybe_constant_value (tree t, tree decl /* = NULL_TREE */,
                       || (TREE_CONSTANT (t) && !TREE_CONSTANT (r))
                       || !cp_tree_equal (r, t));
   if (!c.evaluation_restricted_p ())
-    cv_cache->put (t, r);
+    cv_cache->put (orig_t, r);
   return r;
 }
 
-- 
2.48.1.220.gbc204b7427
