On Thu, 2019-04-11 at 17:00 +0100, Richard Earnshaw (lists) wrote:
>
>
> Please add _alt at the end, to distinguish from the insn above.
>
> Otherwise OK.
I added _alt; I also had to move the "0" constraint and the "r"
constraint. I had "0" with operand 3 instead of operand 1 and
that caused a
On 11/04/2019 16:21, Steve Ellcey wrote:
> On Thu, 2019-04-11 at 14:58 +, Steve Ellcey wrote:
>>
>>> You've removed the ..._noshift_alt variant. That wasn't my intention,
>>> so perhaps you misunderstood what I was trying to say.
>>>
>>> The two versions are both needed, since the register tie is not
>>> orthogonal to the constant
On Thu, 2019-04-11 at 14:58 +, Steve Ellcey wrote:
>
> > You've removed the ..._noshift_alt variant. That wasn't my intention,
> > so perhaps you misunderstood what I was trying to say.
> >
> > The two versions are both needed, since the register tie is not
> > orthogonal to the constant
On Thu, 2019-04-11 at 09:59 +0100, Richard Earnshaw (lists) wrote:
>
> >
> > 2019-04-10 Steve Ellcey
> >
> > PR rtl-optimization/87763
> > * config/aarch64/aarch64-protos.h (aarch64_masks_and_shift_for_bfi_p):
> > New prototype.
> > * config/aarch64/aarch64.c (aarch64_mask
On 10/04/2019 21:31, Steve Ellcey wrote:
> On Wed, 2019-04-10 at 11:10 +0100, Richard Earnshaw (lists) wrote:
>>
>> OK with those changes.
>>
>> R.
>
> I made the changes you suggested and checked in the patch. Just to be
> complete, here is the final version of the patch that I checked in.
>
>
On Wed, 2019-04-10 at 11:10 +0100, Richard Earnshaw (lists) wrote:
>
> OK with those changes.
>
> R.
I made the changes you suggested and checked in the patch. Just to be
complete, here is the final version of the patch that I checked in.
2019-04-10 Steve Ellcey
PR rtl-optimization/87763
On 01/04/2019 18:23, Steve Ellcey wrote:
> This is a ping**3 for a patch to fix one of the test failures in PR 87763.
> It fixes the gcc.target/aarch64/combine_bfi_1.c failure, but not the other
> ones.
>
> Could one of the Aarch64 maintainers take a look at it? This version of
> the patch was o
This is a ping**3 for a patch to fix one of the test failures in PR 87763.
It fixes the gcc.target/aarch64/combine_bfi_1.c failure, but not the other
ones.
Could one of the Aarch64 maintainers take a look at it? This version of
the patch was originally submitted on February 11 after incorporatin
Double ping.
Steve Ellcey
sell...@marvell.com
On Tue, 2019-02-26 at 08:44 -0800, Steve Ellcey wrote:
> Ping.
>
> Steve Ellcey
> sell...@marvell.com
>
>
> On Mon, 2019-02-11 at 10:46 -0800, Steve Ellcey wrote:
> > On Thu, 2019-02-07 at 18:13 +, Wilco Dijkstra wrote
> > >
> > > Hi Steve,
>
Ping.
Steve Ellcey
sell...@marvell.com
On Mon, 2019-02-11 at 10:46 -0800, Steve Ellcey wrote:
> On Thu, 2019-02-07 at 18:13 +, Wilco Dijkstra wrote
> >
> > Hi Steve,
> >
> > > > After special cases you could do something like t = mask2 +
> > > > (HWI_1U << shift);
> > > > return t == (t & -t) to check for a valid bfi.
On Thu, 2019-02-07 at 18:13 +, Wilco Dijkstra wrote:
>
> Hi Steve,
>
> > > After special cases you could do something like t = mask2 +
> > > (HWI_1U << shift);
> > > return t == (t & -t) to check for a valid bfi.
> >
> > I am not sure I follow this logic and my attempts to use this did not
> > work so I kept my original code.
Hi Steve,
>> After special cases you could do something like t = mask2 + (HWI_1U << shift);
>> return t == (t & -t) to check for a valid bfi.
>
> I am not sure I follow this logic and my attempts to use this did not
> work so I kept my original code.
It's similar to the initial code in aarch6
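A minimal standalone C sketch of the check Wilco describes (the helper name
and the test values are illustrative only, not taken from the patch): adding
(1 << shift) to a mask that is a contiguous run of ones starting at bit
`shift` carries through the run and leaves at most one bit set, and
t == (t & -t) is true exactly for values with at most one bit set.

#include <stdio.h>
#include <stdint.h>

/* Returns nonzero iff mask2 is a contiguous run of ones starting exactly
   at bit `shift` (or zero, which the real code screens out earlier as one
   of the special cases).  */
static int
contiguous_field_at (uint64_t mask2, unsigned shift)
{
  uint64_t t = mask2 + ((uint64_t) 1 << shift);
  /* t & -t isolates the lowest set bit, so the test holds only when t
     has at most one bit set (a power of two, or zero on wraparound).  */
  return t == (t & -t);
}

int
main (void)
{
  printf ("%d\n", contiguous_field_at (0x0000ff00, 8));   /* 1: bits 8..15             */
  printf ("%d\n", contiguous_field_at (0x0000ff00, 4));   /* 0: run starts at 8, not 4 */
  printf ("%d\n", contiguous_field_at (0x00050000, 16));  /* 0: bits 16 and 18, a gap  */
  return 0;
}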
On Tue, 2019-02-05 at 21:12 +, Wilco Dijkstra wrote:
> +bool
> +aarch64_masks_and_shift_for_bfi_p (scalar_int_mode mode,
> + unsigned HOST_WIDE_INT mask1,
> + unsigned HOST_WIDE_INT shft_amnt,
> +
Hi Steve,
Thanks for looking at this. A few comments on the patch:
+bool
+aarch64_masks_and_shift_for_bfi_p (scalar_int_mode mode,
+ unsigned HOST_WIDE_INT mask1,
+ unsigned HOST_WIDE_INT shft_amnt,
+
Ping. And adding Aarch64 Maintainers.
On Mon, 2019-01-28 at 16:11 -0800, Steve Ellcey wrote:
> On Sat, 2019-01-26 at 00:00 +0100, Jakub Jelinek wrote:
> >
> > > +  /* Verify that there is no overlap in what bits are set in the
> > > +     two masks.  */
> > > +  if ((m1 + m2 + 1) != 0)
> > > +    return false;
On Tue, Jan 29, 2019 at 12:11:46AM +, Steve Ellcey wrote:
> > As mentioned in rs6000.md, I believe you also need a similar pattern where
> > the two ANDs are swapped, because they have the same priority.
>
> I fixed the long lines in aarch64.md and I added a second pattern for
> the *aarch64_b
On Sat, 2019-01-26 at 00:00 +0100, Jakub Jelinek wrote:
>
> > +  /* Verify that there is no overlap in what bits are set in the two
> > +     masks.  */
> > +  if ((m1 + m2 + 1) != 0)
> > +    return false;
>
> Wouldn't that be clearer to test
> if (m1 + m2 != HOST_WIDE_INT_M1U)
> return false;
>
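For reference, a small self-contained C illustration of why the two forms of
the test are equivalent (the function name and mask values are made up for
the example): two masks partition the register exactly when their sum is
all-ones, in which case m1 + m2 + 1 wraps to zero.

#include <stdio.h>
#include <stdint.h>

/* Returns nonzero iff m1 and m2 cover every bit exactly once, i.e. they
   neither overlap nor leave a gap.  Then m1 + m2 is all-ones, so adding 1
   wraps around to zero.  */
static int
masks_are_complementary (uint64_t m1, uint64_t m2)
{
  return m1 + m2 + 1 == 0;   /* equivalently: m1 + m2 == UINT64_MAX */
}

int
main (void)
{
  /* 1: the low and high halves partition the 64 bits.  */
  printf ("%d\n", masks_are_complementary (0x00000000ffffffffULL,
                                           0xffffffff00000000ULL));
  /* 0: the low byte is set in both masks.  */
  printf ("%d\n", masks_are_complementary (0x00000000ffffffffULL,
                                           0xffffffff000000ffULL));
  return 0;
}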
On Thu, Jan 24, 2019 at 11:17:45PM +, Steve Ellcey wrote:
> --- a/gcc/config/aarch64/aarch64.c
> +++ b/gcc/config/aarch64/aarch64.c
> @@ -9294,6 +9294,44 @@ aarch64_mask_and_shift_for_ubfiz_p (scalar_int_mode mode, rtx mask,
>     & ((HOST_WIDE_INT_1U << INTVAL (shft_amnt)) - 1)) == 0
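The truncated condition quoted above is the low-bits test used by the
existing ubfiz helper: ((HOST_WIDE_INT_1U << shift) - 1) selects the bits
below the shift amount, and ANDing it with the mask must give zero. A tiny
standalone sketch, with names and values that are illustrative only:

#include <assert.h>
#include <stdint.h>

/* ((1 << shift) - 1) is a mask of the `shift` low bits, so the AND is zero
   exactly when `mask` has no bits set in that range.  */
static int
no_bits_below_shift (uint64_t mask, unsigned shift)
{
  return (mask & (((uint64_t) 1 << shift) - 1)) == 0;
}

int
main (void)
{
  assert (no_bits_below_shift (0xff00, 8));    /* lowest set bit is bit 8  */
  assert (!no_bits_below_shift (0xff00, 12));  /* bits 8..11 sit below 12  */
  return 0;
}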
On Fri, 2019-01-25 at 10:32 +, Richard Earnshaw (lists) wrote:
>
> Do we need another variant pattern to handle the case where the
> insertion is into the top of the destination? In that case the
> immediate mask on the shifted operand is technically redundant as the
> bottom bits are known zero.
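A quick standalone demonstration of why the mask is redundant in that case
(the shift amount and values are chosen only for illustration): when the
inserted field reaches the top bit, mask2 is ~0 << shift, and the left shift
has already cleared every bit below `shift`, so the AND never changes the
result and the optimizers are free to drop it, which is why a separate
pattern without the explicit mask would be needed.

#include <assert.h>
#include <stdint.h>

int
main (void)
{
  unsigned shift = 48;
  /* The inserted field occupies bits 48..63, i.e. the top of the register,
     so the immediate mask is ~0 << shift.  */
  uint64_t mask2 = ~(uint64_t) 0 << shift;

  for (uint64_t x = 0; x < 1000; x++)
    /* The shift has already cleared bits 0..47, so the AND is a no-op.  */
    assert (((x << shift) & mask2) == (x << shift));
  return 0;
}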
On 24/01/2019 23:17, Steve Ellcey wrote:
> Here is my attempt at creating a couple of new instructions to
> generate more bfi instructions on aarch64. I haven't finished
> testing this but it helps with gcc.target/aarch64/combine_bfi_1.c.
>
> Before I went any further with it I wanted to see if a
Here is my attempt at creating a couple of new instructions to
generate more bfi instructions on aarch64. I haven't finished
testing this but it helps with gcc.target/aarch64/combine_bfi_1.c.
Before I went any further with it I wanted to see if anyone
else was working on something like this and i