On 09/11/2015 02:49 AM, Kyrill Tkachov wrote:

On 10/09/15 22:11, Jeff Law wrote:
On 09/10/2015 12:23 PM, Bernd Schmidt wrote:
  > No testcase provided, as currently I don't know of targets with a
  > high enough branch cost to actually trigger the optimisation.

Hmm, so the code would not actually be used right now? In that case I'll
leave it to others to decide whether we want to apply it. Other than the
points above it looks OK to me.
Some targets have -mbranch-cost to allow overriding the default costing.
visium has a branch cost of 10!  Several ports have a cost of 6, either
unconditionally or when the branch is not well predicted.

Presumably James is more interested in the ARM/AArch64 targets ;-)

I think that's probably where James is most interested in getting some
ideas -- around the cost model.

I think the fundamental problem is that BRANCH_COST isn't actually
relative to anything other than the default value of "1".  It doesn't
directly correspond to COSTS_N_INSNS or anything else.  So while using
COSTS_N_INSNS (BRANCH_COST (...)) would seem to make sense, it actually
doesn't.  It's not even clear how a value of 10 relates to a value of 1,
other than that it's more expensive.
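
To make that concrete, here's a minimal sketch of why the two scales
don't compose.  COSTS_N_INSNS matches the rtl.h definition; the
BRANCH_COST body below is a made-up target definition for illustration,
not any particular port's:

  /* rtl.h: rtx costs are in units of COSTS_N_INSNS, i.e. one
     "instruction" is 4 cost units.  */
  #define COSTS_N_INSNS(N) ((N) * 4)

  /* Hypothetical target definition.  The value is a bare integer on
     its own private scale -- not in COSTS_N_INSNS units, and not
     derived from any insn cost.  */
  #define BRANCH_COST(speed_p, predictable_p) \
    ((speed_p) && !(predictable_p) ? 6 : 1)

  /* This compiles and looks plausible, but it mixes the two scales:
     a port whose branch cost of 10 just means "much worse than 1"
     suddenly claims a branch costs exactly 10 instructions.  */
  int cost = COSTS_N_INSNS (BRANCH_COST (true, false));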

ifcvt (and others) comparing to magic #s is more than a bit lame.  But
with BRANCH_COST having no meaning relative to anything else, I can see
why Richard did things that way.

Out of interest, what was the intended original meaning
of branch costs if it was not to be relative to instructions?
I don't think it ever had one.  It's self-relative: a cost of 2 is greater than a cost of 1.  No more, no less, IIRC.  Lame?  Yes.  Short-sighted?  Yes.  Should we try to fix it?  Yes.

If you look at how BRANCH_COST actually gets used, AFAIK it's tested only against "magic constants", which are themselves lame and short-sighted, and need to be fixed.
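
For reference, this is the flavour of test I mean -- paraphrased from
memory rather than quoted exactly, but representative of what ifcvt.c
and friends do:

  /* Typical guard: the "2" is a magic constant with no derivation;
     it just encodes "branches are expensive enough to bother
     replacing with a branchless sequence".  */
  if (BRANCH_COST (optimize_insn_for_speed_p (), false) >= 2)
    {
      /* ... try the conditional-move / store-flag sequence ...  */
    }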

jeff
