On Wed, 17 Aug 2011, Richard Sandiford wrote:
> It also means that constants that are slightly more expensive than a
> register -- somewhere in the range [0, COSTS_N_INSNS (1)] -- end up
> seeming cheaper than registers.
Yes, perhaps some scale factor has to be applied to get reasonable cost
granularity in an improved cost model for the job...  Some constants
*are* more expensive (extra words and/or extra cycles), yet preferable
to registers for one (or maybe two or...) insns.  You don't want to
find that all insns except constant loads suddenly use register
arguments, with no port-specific metric to tweak it.  Mentioned for the
record.

> By default, the cost of a SET is:
>
>   COSTS_N_INSNS (1)
>     + rtx_cost (SET_DEST (x), SET, speed)
>     + rtx_cost (SET_SRC (x), SET, speed)

'k.

> But it seems like there's some double-counting for complex SET_SRCs
> here.

Bugs.

> As others have said, it would be nice if costs could be extracted
> from the .md file, but that's more work than I have time for now.

Yes, let's start by getting the semantics settled and implemented
first.

> Is it worth changing the costs anyway?

IMHO, it's worth making it consistent, linear, TRT...

> But that hardly seems clean either.  Perhaps we should instead make
> the SET_SRC always include the cost of the SET, even for registers,
> constants and the like.  Thoughts?

That seems more like maintaining a wart than an improvement for
consistency's sake.  I don't think you can get into trouble by trying
to improve this area for consistency: between releases a port usually
loses arbitrarily on some kinds of code anyway, and its costs have to
be re-tweaked unless performance is closely tracked.  Aiming for
traceability can only help (as in "read the added doc blurb on how to
define the port RTX costs" instead of gdb-stepping and code
inspection).

brgds, H-P