On 9/28/22 07:19, Tamar Christina wrote:
-----Original Message-----
From: Jeff Law <jeffreya...@gmail.com>
Sent: Saturday, September 24, 2022 8:38 PM
To: Tamar Christina <tamar.christ...@arm.com>; gcc-patches@gcc.gnu.org
Cc: nd <n...@arm.com>; rguent...@suse.de
Subject: Re: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into
BIT_FIELD_REFs alone
On 9/23/22 05:42, Tamar Christina wrote:
Hi All,
This adds a match.pd rule that folds a right shift of a bit_field_ref of an
integer into a single bit_field_ref, by adjusting the offset and size of the
extract and adding an extension back to the original size.
Concretely turns:
#include <arm_neon.h>
unsigned int foor (uint32x4_t x)
{
return x[1] >> 16;
}
which used to generate:
_1 = BIT_FIELD_REF <x_2(D), 32, 32>;
_3 = _1 >> 16;
into
_4 = BIT_FIELD_REF <x_1(D), 16, 48>;
_2 = (unsigned int) _4;
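The equivalence of the two GIMPLE sequences can be checked with plain scalar arithmetic. The sketch below models BIT_FIELD_REF on a uint32x4_t by hand; the helper and function names are illustrative, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of BIT_FIELD_REF <v, size, off> on a uint32x4_t:
   extract `size` bits starting at bit `off` of the 128-bit value,
   using the little-endian bit numbering shown in the GIMPLE dump.  */
static uint64_t bit_field_ref (const uint32_t v[4], unsigned size, unsigned off)
{
  uint64_t r = 0;
  for (unsigned i = 0; i < size; i++)
    {
      unsigned b = off + i;
      r |= (uint64_t) ((v[b / 32] >> (b % 32)) & 1u) << i;
    }
  return r;
}

/* Before the fold: _1 = BIT_FIELD_REF <x, 32, 32>; _3 = _1 >> 16;  */
static uint32_t foor_before (const uint32_t x[4])
{
  uint32_t _1 = (uint32_t) bit_field_ref (x, 32, 32);
  return _1 >> 16;
}

/* After the fold: _4 = BIT_FIELD_REF <x, 16, 48>; _2 = (unsigned int) _4;  */
static uint32_t foor_after (const uint32_t x[4])
{
  return (uint32_t) bit_field_ref (x, 16, 48);
}
```

Shifting right by 16 and then keeping 32 bits at offset 32 is the same as keeping 16 bits at offset 48, which is why only the offset/size bookkeeping plus a zero-extension is needed.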
I currently limit the rewrite to cases where the resulting extract is in a
mode the target supports, i.e. it won't rewrite the extract to, say, 13 bits,
because on targets without a bit-field extract instruction that may be a
de-optimization.
Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-pc-linux-gnu
with no issues.
Testcases are added in patch 2/2.
Ok for master?
Thanks,
Tamar
gcc/ChangeLog:
* match.pd: Add bitfield and shift folding.
Were you planning to handle left shifts as well? It looks like it, since
you've got iterators for the shift opcode and a corresponding adjustment to
the field, but they currently only handle rshift/plus.
Hmm, do left shifts work here? A left shift widens the resulting value by
appending zeros at the low end of the number, and you can't grow the
bit-field extract to produce those zeros.
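That asymmetry can be made concrete with the same hand-rolled BIT_FIELD_REF model as before (names are illustrative): the naive "move the extraction window" rewrite extracts the right bits but zero-extension places them at the bottom rather than the top, so it computes the wrong value.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical BIT_FIELD_REF model, little-endian bit numbering.  */
static uint64_t bit_field_ref (const uint32_t v[4], unsigned size, unsigned off)
{
  uint64_t r = 0;
  for (unsigned i = 0; i < size; i++)
    {
      unsigned b = off + i;
      r |= (uint64_t) ((v[b / 32] >> (b % 32)) & 1u) << i;
    }
  return r;
}

/* x[1] << 16 keeps the LOW 16 bits of the lane and appends 16 zeros.  */
static uint32_t fool_shift (const uint32_t x[4])
{
  return (uint32_t) bit_field_ref (x, 32, 32) << 16;
}

/* A rewrite analogous to the rshift one would extract those low 16 bits,
   but zero-extension puts them at the bottom, not above 16 zeros.  */
static uint32_t fool_bogus_fold (const uint32_t x[4])
{
  return (uint32_t) bit_field_ref (x, 16, 32);
}
```

With lane 1 equal to 0xABCD1234, the shift yields 0x12340000 while the narrowed extract yields 0x1234, so no adjusted bit_field_ref alone can replace a left shift.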
Dunno, I hadn't really thought about it. It just looked like you were
prepared to handle more cases with those iterators.
I did, however, realize that truncating casts have a similar effect to a
right shift (both narrow the extracted field), so I have added that now.
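A truncating cast narrows the field without moving its offset, so it folds the same way. The folded GIMPLE below is my reading of the rule, not taken from the patch, and the helper names are again illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical BIT_FIELD_REF model, little-endian bit numbering.  */
static uint64_t bit_field_ref (const uint32_t v[4], unsigned size, unsigned off)
{
  uint64_t r = 0;
  for (unsigned i = 0; i < size; i++)
    {
      unsigned b = off + i;
      r |= (uint64_t) ((v[b / 32] >> (b % 32)) & 1u) << i;
    }
  return r;
}

/* Before the fold, e.g. (unsigned short) x[1]:
   _1 = BIT_FIELD_REF <x, 32, 32>; _2 = (unsigned short) _1;  */
static uint16_t foot_before (const uint32_t x[4])
{
  return (uint16_t) bit_field_ref (x, 32, 32);
}

/* Presumed folded form (offset unchanged, size narrowed):
   _3 = BIT_FIELD_REF <x, 16, 32>;  */
static uint16_t foot_after (const uint32_t x[4])
{
  return (uint16_t) bit_field_ref (x, 16, 32);
}
```

Where a right shift raises the offset and shrinks the size, the truncation only shrinks the size; both land on a single narrower bit_field_ref.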
ACK.
jeff