Richard Sandiford wrote:
> I've been working on some patches to make insn_rtx_cost take account
> of the cost of SET_DESTs as well as SET_SRCs. But I'm slowly beginning
> to realise that I don't understand what rtx costs are supposed to represent.
> AIUI the rules have historically been:
> 1) Registers have zero cost.
> 2) Constants have a cost relative to that of registers. By extension,
>    constants have zero cost if they are as cheap as a register.
> 3) With an outer code of SET, actual operations have the cost
>    of the associated instruction. E.g. the cost of a PLUS
>    is the cost of an addition instruction.
> 4) With other outer codes, actual operations have the cost
>    of the combined instruction, if available, or the cost of
>    a separate instruction otherwise. E.g. the cost of a NEG
>    inside an AND might be zero on targets that support BIC-like
>    instructions, and COSTS_N_INSNS (1) on most others.
> [...]
> But that hardly seems clean either. Perhaps we should instead make
> the SET_SRC always include the cost of the SET, even for registers,
> constants and the like. Thoughts?
IMO a clean approach would be to query the cost of a whole insn (or
rather its pattern) instead of the cost of an RTX. COSTS_N_INSNS already
indicates that costs are measured relative to *insn* costs, i.e. the
cost of the whole pattern (modulo clobbers).
E.g. the cost of a CONST_INT is meaningless if you don't know what to do
with the constant: (set (reg:QI) (const_int 0)) might have a different
cost than (set (reg:SI) (const_int 0)). The same applies to an addition
if you don't know whether it is plain arithmetic or an address offset
computation inside a MEM.
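For instance, the same PLUS is a plain addition in the first set below
but an address computation in the second (the register numbers are made
up for illustration):

    (set (reg:SI 0)
         (plus:SI (reg:SI 1) (const_int 4)))

    (set (reg:SI 0)
         (mem:SI (plus:SI (reg:SI 1) (const_int 4))))

On many targets the second PLUS is effectively free because it folds
into the load's addressing mode, whereas the first one needs a real add
instruction.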
To distinguish a proper SET from a jump you have to analyze the RTX, or
guess that it's a jump because the rtx has VOIDmode. All of that
information is readily available, yet many parts of the compiler even
strip the SET and just pass the SET_DEST down to the backend, thus
hiding information the backend needs.
A backend mostly thinks in terms of insns, and passing the pattern down
to the backend (e.g. in insn combine, which has the newly synthesized
insn handy anyway) would enable the backend to use recog and insn
attributes to get the insn cost.
I use that in a port with a complex ISA and a bunch of combine patterns:
each such pattern has two attributes, "size" and "speed", that give the
rtx costs of that insn when optimizing for size and for speed,
respectively. That approach is much more legible than explicitly writing
down an XEXP orgy.
The difference between "size" and "length" is that "length" gives the
size (or at least the maximum possible size) in bytes of an insn,
whereas "size" gives an estimated value over all alternatives.
Look at the rtx_costs of the AVR backend: it's horrible code that looks
like hand-written insn-recog.c! It's annoying to write, hard to
maintain, and duplicates code that already exists in insn-recog.c,
insn-extract.c, as RTL in the md file, etc. Using cost attributes would
make it possible to describe the costs in the place where they
originate -- the insn responsible for them -- instead of in hundreds of
cryptic C lines...
Besides that, it appears to be unclear what a cost of 0 means: some
parts seem to treat it as "costs nothing", others as "don't know".
> (The patches I've been working on also tell rtx_costs -- and the target
> hook -- whether the operand is an lvalue. That means that stores can
> have a different cost from loads.)
I think IRA needs a better way to express costs besides the !, ? and *
constraint modifiers.
For example, it is not possible at the moment to tell that
(set (reg:SI 2) (mem:SI (reg:SI 0)))
could be cheaper than
(set (reg:SI 2) (mem:SI (reg:SI 1)))
if, e.g., R0 is an implicit register but R1 is not.
Johann
> Richard