On 22/03/2023 13:56, Richard Biener wrote:

Basically, the -ffast-math instructions will always be the fastest way,
but the goal is that the default optimization shouldn't just disable
vectorization entirely for any loop that has a divide in it.
We try to express division as multiplication, but [...]
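
As a minimal sketch of the two cases (illustration only, not taken from
the thread; the function names are invented): division by a suitable
compile-time constant is already expressed as a multiplication, but a
divisor only known at run time has no such rewrite at default
optimization, so vectorizing that loop needs an accurate vector divide
or a vectorized library routine.

  /* Sketch, not from the patch under discussion.  The first loop's
     divide is rewritten as an exact multiplication (8 is a power of
     two), so it vectorizes without any divide support.  The second
     loop's divisor is loop-invariant but unknown at compile time; the
     reciprocal rewrite is only allowed under -freciprocal-math or
     -ffast-math, so at default optimization this loop needs a genuine
     vector divide.  */
  void
  div_by_constant (float *restrict out, const float *restrict in, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = in[i] / 8.0f;      /* becomes in[i] * 0.125f */
  }

  void
  div_by_variable (float *restrict out, const float *restrict in,
                   float d, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = in[i] / d;         /* stays a divide at -O2 */
  }
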
On Tue, Mar 21, 2023 at 6:00 PM Andrew Stubbs wrote:

Hi all,

I want to be able to vectorize divide operators (softfp and integer),
but amdgcn only has hardware instructions suitable for -ffast-math.

We have recently implemented vector versions of all the libm functions,
but the libgcc functions aren't builtins and therefore don't use those
hooks [...]
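
A small sketch of the libm/libgcc contrast (illustration only, not from
the thread; the exact lowering of the divide, e.g. to a libgcc routine
such as __divsf3, depends on the target): the sinf call can be
redirected to a vector math routine through the vectorized-function
hooks, while the divide has no equivalent hook, so at default
optimization the loop stays scalar on a target whose only vector divide
instructions are the approximate, -ffast-math-quality ones.

  #include <math.h>

  /* Illustration only.  The sinf call has a vector variant the
     vectorizer can substitute via the usual hooks.  The divide does
     not: it must be exact at default optimization, and if the target's
     vector divide instructions are only suitable for -ffast-math, the
     whole loop is left scalar.  */
  void
  f (float *restrict out, const float *restrict a,
     const float *restrict b, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = sinf (a[i]) / b[i];
  }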