Hi!

On Sat, Jun 20, 2020 at 05:25:23PM +0100, Roger Sayle wrote:
> It's great to hear from you again.  It's been a while.
And from you!  Yes, wow, it is 2020 now, how did that happen!

> > On Fri, Jun 19, 2020 at 09:42:54PM +0100, Roger Sayle wrote:
> > > My recent patch to add scalar integer simplification unit tests to
> > > simplify_rtx_c_tests identified two "trivial" corner cases that
> > > could be improved in simplify-rtx.c.
> >
> > These two things are independent changes and should be independent
> > patches.
>
> I'm about to ask you (very nicely) to please commit these changes for
> me, so many thanks in advance if you're going to be making twice the
> effort.

Yes, I'll do it (tomorrow, tired now).

> > > Although it makes no sense to ever see a BImode ROTATE, the current
> > > ordering of transformations in simplify_binary_operation_1 converts
> > > (rotate:bi (reg:bi) 0) to (rotatert:bi (reg:bi) 1), which then
> > > doesn't get simplified away.  Rather than teach the middle-end that
> > > any hypothetical ROTATE or ROTATERT of a BImode value is a no-op, a
> > > more realistic invariant is that any rotate by const0_rtx is
> > > already canonical.
> >
> > I don't know if a rotate of BImode is defined at all, but that is
> > beside the point, the patch has nothing to do with BImode.
> >
> > A rotate by constant 0 should be simplified to just its argument, and
> > that should happen *before* where your patch makes changes.  What
> > goes wrong there?
>
> This approach becomes clearer from seeing the larger context of this
> change:
>
>     case ROTATERT:
>     case ROTATE:
>       /* Canonicalize rotates by constant amount.  If op1 is bitsize / 2,
>          prefer left rotation, if op1 is from bitsize / 2 + 1 to
>          bitsize - 1, use other direction of rotate with
>          1 .. bitsize / 2 - 1 amount instead.  */
> #if defined(HAVE_rotate) && defined(HAVE_rotatert)
>       if (CONST_INT_P (trueop1)
>           && IN_RANGE (INTVAL (trueop1),
>                        GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE),
>                        GET_MODE_UNIT_PRECISION (mode) - 1))
>         {
>           int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1);
>           rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount);
>           return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE,
>                                       mode, op0, new_amount_rtx);
>         }
> #endif
>       /* FALLTHRU */
>     case ASHIFTRT:
>       if (trueop1 == CONST0_RTX (mode))
>         return op0;
>
> Hence the handling of rotates and shifts by zero is the very next
> optimization, after the fall-thru.

Fall-throughs are evil.  I'm not kidding very much here :-)

> One option would be to duplicate this const0 optimization before the
> rotate canonicalization, then have it tested a second time later
> (perhaps twice sequentially if the backend doesn't support both
> rotates) or add a goto.  I believed the simplest fix is to correct the
> condition for canonicalizing, which shouldn't fire when op1 is
> const0_rtx.

That is the smallest change to make, certainly.  But it doesn't result
in the simplest (i.e. easiest to read) code.  I'll see what I can do.
Probably it turns out to be involved enough that I'll just do your
change, but :-)

> > This part is approved for trunk.  Thanks!
>
> Thanks to you too.  Alas, my credentials from the CVS days of GCC
> almost certainly don't work any more (in git), so I'm now reliant on
> the generosity of others to push changes into the tree.

It still accepts my 2000-era RSA key, apparently (but not DSA any
more :-( )

> I promise that if I make a habit of contributing regularly again, I'll
> learn the new ways of doing things, but it's a little awkward after
> being a maintainer.

Not too much changed!  Well, maybe all the details did ;-)


Segher