On Wed, 11 Sep 2013, Wei Mi wrote:
> I tried that and it caused some regressions, so I chose to do
> chain_to_prev_insn another time in add_branch_dependences. There could
> be some dependence between those two functions.
(please don't top-post on this list)
In that case you can adjust 'last' in add_branch_dependences so that the
dependences pin the compare rather than the jump to the end, like this
(untested):
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 2c971e2..a774d5d 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -2443,6 +2443,9 @@ add_branch_dependences (rtx head, rtx tail)
    cc0 setters remain at the end because they can't be moved away from
    their cc0 user.
 
+   Predecessors of SCHED_GROUP_P instructions that remain at the end also
+   remain at the end.
+
    COND_EXEC insns cannot be moved past a branch (see e.g. PR17808).
 
    Insns setting TARGET_CLASS_LIKELY_SPILLED_P registers (usually return
@@ -2465,6 +2468,7 @@ add_branch_dependences (rtx head, rtx tail)
 #endif
 	     || (!reload_completed
 		 && sets_likely_spilled (PATTERN (insn)))))
+	 || (last != 0 && SCHED_GROUP_P (last))
 	 || NOTE_P (insn))
     {
       if (!NOTE_P (insn))
I'm also not a fan of adding two scheduler hooks and explicit handling in
sched-deps.c for this feature. You could probably handle that entirely in the
x86 backend with the sched_init hook (just loop over basic blocks and mark
suitable jumps with SCHED_GROUP_P), but on the other hand I can see an
argument that this might be useful in the future for other architectures.
Have you considered that? What do other maintainers say?
Thanks.
Alexander