"Linthicum, Tony" <[EMAIL PROTECTED]> writes:

> * Would using a tree-level API like estimate_num_insns be superior
>   to using a simple cost of 1 per statement?
For this sort of calculation, I would guess not.  I would imagine that
on most processors the cost of a single vector instruction is
comparable to the cost of the same operation on scalar operands.  So
simply counting instructions should give you a good indication of
whether vectorization is profitable, even though each instruction will
of course have a different cost.

If this turns out not to be the case, then I think the next step would
be to have a target-defined multiplier for vector instructions,
indicating that for that target vector instructions are more or less
expensive than scalar instructions according to some ratio.  I doubt
that using the full-fledged cost infrastructure would give you better
results than that in practice.

> * What is the best way to access target-level cost information?

I'm sure you know that the loop code does this by generating RTL and
asking for the cost of it (computation_cost in
tree-ssa-loop-ivopts.c).  That's a rather awkward approach, but we
don't have a better one.  At some point we may need target-specific
costs for tree nodes, but most likely not unless and until we move
many of the RTL passes to operate on trees.

Ian