On Tue, 17 Dec 2024 16:39:58 GMT, Emanuel Peter <epe...@openjdk.org> wrote:

>> This is the core idealization logic which infers FP16 IR. Every test point
>> added in
>> test/hotspot/jtreg/compiler/c2/irTests/TestFloat16ScalarOperations.java
>> verifies this.
>
> Picking a random line from `testAddConstantFolding()`:
> 
>     assertResult(add(Float16.POSITIVE_INFINITY, Float16.POSITIVE_INFINITY).floatValue(),
>                  Float.POSITIVE_INFINITY, "testAddConstantFolding");
> 
> So this seems to do an FP16 -> FP16 add, then convert to float, so I don't 
> immediately see the FP16 -> Float -> FP16 conversion.
> 
> Ah, how do we intrinsify this?
> 
>     public static Float16 add(Float16 addend, Float16 augend) {
>         return valueOf(addend.floatValue() + augend.floatValue());
>     }
> 
> Is it not the `add` that is intrinsified, but the `valueOf`, `floatValue` and 
> Float `+`?
> 
> Why not intrinsify the `Float16.add` directly?

In the above case, we infer the FP16 addition through pattern matching:

    ConvF2HF (AddF (ConvHF2F addend) (ConvHF2F augend))
        => AddHF (ReinterpretS2HF addend) (ReinterpretS2HF augend)

The idea here is to catch frequently occurring patterns in the graph rather 
than intrinsifying at the function level.
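
To make the mapping concrete, here is a rough sketch (my annotation, not the 
actual compiler code) of how each inlined call in `Float16.add` lowers to a 
C2 node, assuming the incubating `jdk.incubator.vector.Float16` API:

    import jdk.incubator.vector.Float16;   // assumed package for the incubating API

    class Fp16PatternSketch {
        // Hedged sketch: the shape C2 sees after inlining Float16.add(a, b),
        // annotated with the node each intrinsified call lowers to.
        static Float16 add(Float16 a, Float16 b) {
            return Float16.valueOf(   // ConvF2HF   <- the node whose idealization fires
                   a.floatValue()     // ConvHF2F   (addend)
                 + b.floatValue());   // ConvHF2F   (augend); the float '+' is AddF
            // After the rewrite the expression is effectively
            // AddHF(ReinterpretS2HF(a), ReinterpretS2HF(b)), so the
            // FP16 -> float -> FP16 round trip disappears.
        }
    }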

-------------

PR Review Comment: https://git.openjdk.org/jdk/pull/22754#discussion_r1895315441
