On 19 Nov 12:02, Jeff Law wrote:
> On 11/18/13 14:03, Ilya Enkovich wrote:
> > 2013/11/19 Jeff Law <l...@redhat.com>:
> > > On 11/18/13 12:16, Ilya Enkovich wrote:
> > > >
> > > > With current recursion elimination we will have:
> > > >
> > > > test (int *param1)
> > > > {
> > > >   <bb1>:
> > > >
> > > >   <bb2>:
> > > >     _7 = PHI <param1(D) (bb1), _6 (bb2)>
> > > >     bounds2 = __builtin_arg_bounds (_7) -- WRONG
> > >
> > > I wouldn't say it's wrong.  It's conservatively correct if properly
> > > implemented.  Why precisely do you consider this wrong?  If your
> > > code can't handle it, it really should be fixed.
> >
> > It is wrong because __builtin_arg_bounds is used to get bounds of an
> > input parameter, and the PHI node here is not an input parameter.
> OK, now we're getting somewhere.  So we've got this odd little
> function which only works on parameters.  I can't help but feel this
> is a bit of mis-design coming back to haunt us, and I wonder if we're
> going to see other instances of this kind of problem.
>
> There's something just wrong with the semantics of
> __builtin_arg_bounds, but I'm having trouble putting my finger on it.
>
> > Correct handling of __builtin_arg_bounds in this 'wrong' example
> > requires giving it PHI node semantics based on its arg.  For me it
> > seems more complex than generating a regular PHI node for bounds,
> > and it makes GIMPLE less readable.
> But presumably you have code to merge bound information already,
> right?  You need that for PHI nodes.  Can't you use that to walk up
> the use-def chains and build the bounds information?
>
> Or if you want to fix the tailcall stuff, you can start down that
> direction.  I don't think that'll fix the nagging concerns about the
> overall semantics of builtin_arg_bounds, but it might be enough for
> you to go forward.
>
> jeff
Here are my thoughts on what the tail recursion elimination fix should
look like.  This patch moves all BUILT_IN_CHKP_ARG_BND calls before the
recursion target bb and creates PHI nodes for bounds.

I tried it on a simple test case only:

int *bar (int *);

void
foo (int *p)
{
  int *p1 = bar (p);
  if (p1)
    foo (p1);
}

Tail recursion results in a GIMPLE:

foo (int * p)
{
  <unnamed type> __bound_tmp.0;
  int * p1;
  void * _8;
  void * _10;

  <bb 2>:
  __bound_tmp.0_12 = __builtin_ia32_arg_bnd (p_11(D)); [return slot optimization]

  <bb 3>:
  # p_3(ab) = PHI <p_11(D)(2), _10(4)>
  # __bound_tmp.0_7 = PHI <__bound_tmp.0_12(2), __bound_tmp.0_9(4)>
  _8 = __builtin___chkp_bind_bounds (p_3(ab), __bound_tmp.0_7);
  p1_5 = bar (_8);
  __bound_tmp.0_9 = __builtin_ia32_bndret (p1_5); [return slot optimization]
  if (p1_5 != 0B)
    goto <bb 4>;
  else
    goto <bb 5>;

  <bb 4>:
  _10 = __builtin___chkp_bind_bounds (p1_5, __bound_tmp.0_9);
  goto <bb 3>;

  <bb 5>:
  return;
}

Here __bound_tmp.0_7 was created to merge the input bounds
__bound_tmp.0_12 and the bounds __bound_tmp.0_9 passed to the tail
call.

If you feel it is good to have such support now instead of disabling
tail recursion elimination, I'll continue working on this patch.

Thanks,
Ilya
--
2013-11-19  Ilya Enkovich  <ilya.enkov...@intel.com>

	* tree-tailcall.c (eliminate_tail_call): Fill PHI nodes created
	for bounds with bounds passed to the removed call.
	(tree_optimize_tail_calls_1): Move all BUILT_IN_CHKP_ARG_BND
	calls before entry point of recursive call.  Add PHI nodes for
	bounds.
diff --git a/gcc/tree-tailcall.c b/gcc/tree-tailcall.c
index 59845950..4e04199 100644
--- a/gcc/tree-tailcall.c
+++ b/gcc/tree-tailcall.c
@@ -814,11 +800,12 @@ eliminate_tail_call (struct tailcall *t)
   gimple stmt, call;
   tree arg;
   size_t idx;
-  basic_block bb, first;
+  basic_block bb, first, arg_bnd_bb;
   edge e;
   gimple phi;
   gimple_stmt_iterator gsi;
   gimple orig_stmt;
+  tree param;
+  tree default_bounds = NULL_TREE;
 
   stmt = orig_stmt = gsi_stmt (t->call_gsi);
   bb = gsi_bb (t->call_gsi);
@@ -834,6 +821,20 @@ eliminate_tail_call (struct tailcall *t)
   gcc_assert (is_gimple_call (stmt));
 
   first = single_succ (ENTRY_BLOCK_PTR);
+  /* Skip the bb created for arg_bnd calls.  */
+  for (param = DECL_ARGUMENTS (current_function_decl);
+       param;
+       param = DECL_CHAIN (param))
+    if (ssa_default_def (cfun, param))
+      {
+        tree bnd = chkp_argbnd_by_val (ssa_default_def (cfun, param));
+        if (bnd && gimple_bb (SSA_NAME_DEF_STMT (bnd)) == first)
+          {
+            arg_bnd_bb = first;
+            first = single_succ (first);
+            break;
+          }
+      }
 
   /* Remove the code after call_gsi that will become unreachable.  The
      possibly unreachable code in other blocks is removed later in
@@ -881,6 +882,39 @@ eliminate_tail_call (struct tailcall *t)
       add_phi_arg (phi, arg, e, gimple_location (stmt));
       gsi_next (&gsi);
+
+      /* Fill the bounds PHI node with bounds if it follows the PHI
+         for arg.  */
+      if (!gsi_end_p (gsi)
+          && POINTER_BOUNDS_P (gimple_phi_result (gsi_stmt (gsi))))
+        {
+          tree bounds = NULL;
+
+          phi = gsi_stmt (gsi);
+          if (TREE_CODE (arg) == SSA_NAME)
+            bounds = chkp_get_call_arg_bounds (arg);
+
+          /* If no bounds are passed to the call, use default ones.  */
+          if (!bounds)
+            {
+              if (!default_bounds)
+                {
+                  gimple_stmt_iterator iter = gsi_start_bb (arg_bnd_bb);
+                  gimple assign;
+
+                  default_bounds
+                    = make_temp_ssa_name (pointer_bounds_type_node,
+                                          gimple_build_nop (), "");
+                  assign = gimple_build_assign (default_bounds,
+                                                chkp_get_zero_bounds_var ());
+                  gsi_insert_before (&iter, assign, GSI_CONTINUE_LINKING);
+                }
+
+              bounds = default_bounds;
+            }
+
+          add_phi_arg (phi, bounds, e, gimple_location (stmt));
+          gsi_next (&gsi);
+        }
     }
 
   /* Update the values of accumulators.  */
@@ -961,6 +995,7 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
   struct tailcall *tailcalls = NULL, *act, *next;
   bool changed = false;
   basic_block first = single_succ (ENTRY_BLOCK_PTR);
+  basic_block bnd_arg_bb = NULL;
   tree param;
   gimple stmt;
   edge_iterator ei;
@@ -1003,6 +1038,7 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
         if (arg_needs_copy_p (param))
           {
             tree name = ssa_default_def (cfun, param);
+            tree bounds = chkp_argbnd_by_val (name);
             tree new_name = make_ssa_name (param, SSA_NAME_DEF_STMT (name));
             gimple phi;
@@ -1010,6 +1046,32 @@ tree_optimize_tail_calls_1 (bool opt_tailcalls)
             phi = create_phi_node (name, first);
             add_phi_arg (phi, new_name, single_pred_edge (first),
                          EXPR_LOCATION (param));
+
+            /* If bounds of param are used, it also requires a
+               PHI node to be created for them.  */
+            if (bounds)
+              {
+                gimple arg_bnd = SSA_NAME_DEF_STMT (bounds);
+                gimple_stmt_iterator iter = gsi_for_stmt (arg_bnd);
+
+                /* BUILT_IN_CHKP_ARG_BND is moved up and its arg
+                   should be replaced with the new default value
+                   of param.  */
+                if (!bnd_arg_bb)
+                  bnd_arg_bb
+                    = split_edge (single_succ_edge (ENTRY_BLOCK_PTR));
+                gimple_call_set_arg (arg_bnd, 0, new_name);
+                gsi_move_to_bb_end (&iter, bnd_arg_bb);
+
+                /* Change the SSA_NAME used for input bounds.  */
+                new_name = make_ssa_name (SSA_NAME_VAR (bounds), arg_bnd);
+                gimple_call_set_lhs (arg_bnd, new_name);
+
+                /* Now create a PHI node for bounds.  */
+                phi = create_phi_node (bounds, first);
+                add_phi_arg (phi, new_name, single_pred_edge (first),
+                             EXPR_LOCATION (param));
+              }
           }
       phis_constructed = true;
     }