On 09/14/2016 02:24 AM, Richard Biener wrote:
On Tue, Sep 13, 2016 at 6:15 PM, Jeff Law <l...@redhat.com> wrote:
On 09/13/2016 02:41 AM, Jakub Jelinek wrote:

On Mon, Sep 12, 2016 at 04:19:32PM +0000, Tamar Christina wrote:

This patch adds an optimized route to the fpclassify builtin
for floating-point numbers whose format is similar to IEEE-754.

The goal is to make it faster by:
1. Testing for the most common case first
   (e.g. the float being a normal number) and only
   then the rest. The amount of code generated at
   -O2 is about the same, +/- 1 instruction, but
   the code performs much better.
2. Using integer operations in the optimized path
   (see the sketch below).
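A minimal sketch of that integer path (illustrative only, not the
patch itself), assuming an IEEE-754 binary32 layout; the helper name
fpclassify_sketch is made up:

#include <stdint.h>
#include <string.h>
#include <math.h>   /* FP_NORMAL, FP_ZERO, FP_SUBNORMAL, FP_INFINITE, FP_NAN */

/* Classify a binary32 value with integer operations, testing the
   most common case -- a normal number -- first.  */
static int
fpclassify_sketch (float x)
{
  uint32_t bits;
  memcpy (&bits, &x, sizeof bits);  /* type-pun without aliasing UB */
  bits &= 0x7fffffffu;              /* drop the sign bit */

  /* Normals have a biased exponent that is neither all-zeros nor
     all-ones, i.e. 0x00800000 <= bits < 0x7f800000; one
     subtract-and-compare tests that whole range.  */
  if (bits - 0x00800000u < 0x7f000000u)
    return FP_NORMAL;
  if (bits == 0)
    return FP_ZERO;
  if (bits < 0x00800000u)
    return FP_SUBNORMAL;
  if (bits == 0x7f800000u)
    return FP_INFINITE;
  return FP_NAN;
}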


Is it generally preferable to use integer operations for this instead
of floating-point operations?  Various targets have quite high costs
for moving data between the general-purpose and floating-point
register files; often it has to go through memory, etc.
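For contrast, the FP-only alternative the question has in mind would
look roughly like this (a sketch, keeping the value in the FP register
file throughout; the helper name is made up):

#include <float.h>  /* FLT_MIN */
#include <math.h>   /* fabsf, INFINITY and the FP_* class macros */

/* Classify using only FP compares, so the value never has to be
   moved into a general-purpose register.  */
static int
fpclassify_fp_sketch (float x)
{
  if (x != x)                  /* only a NaN compares unequal to itself */
    return FP_NAN;
  float ax = fabsf (x);
  if (ax == INFINITY)
    return FP_INFINITE;
  if (ax == 0.0f)
    return FP_ZERO;
  if (ax < FLT_MIN)            /* below the smallest normal */
    return FP_SUBNORMAL;
  return FP_NORMAL;
}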

Bit testing/twiddling is obviously a trade-off for a non-addressable object.
I don't think there's any reasonable way to always generate the most
efficient code as it's going to depend on (for example) register allocation
behavior.

So what we're stuck doing is relying on the target costing bits to guide
this kind of thing.

I think the reason for this patch is to provide a general optimized
integer version.
And just to be clear, that's fine with me. While there are cases where bit twiddling hurts, I think bit twiddling is generally better.


I think it asks for an FP (class) propagation pass somewhere (maybe as
part of complex lowering, which already has a similar "coarse" lattice
-- not that I like its implementation very much) and doing the
"lowering" there.
Not a bad idea -- I wonder how much later optimization a coarse tracking of the exceptional cases would enable.
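To make the "coarse" lattice idea concrete, one hypothetical shape for
it (names invented, purely illustrative of the kind of pass being
discussed):

/* Each SSA name carries a bitmask of the IEEE classes its value may
   belong to; an fpclassify call whose input mask has a single bit set
   can be folded to a constant.  */
enum fp_class_bits
{
  FPC_ZERO      = 1 << 0,
  FPC_SUBNORMAL = 1 << 1,
  FPC_NORMAL    = 1 << 2,
  FPC_INF       = 1 << 3,
  FPC_NAN       = 1 << 4,
  FPC_TOP       = FPC_ZERO | FPC_SUBNORMAL | FPC_NORMAL | FPC_INF | FPC_NAN
};

/* Merging lattice values at a control-flow join is a bitwise OR.  */
static enum fp_class_bits
fpc_merge (enum fp_class_bits a, enum fp_class_bits b)
{
  return (enum fp_class_bits) (a | b);
}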


Not something that should block this patch though.
Agreed.

jeff
