On 18/05/16 02:06, Joseph Myers wrote:
> On Tue, 17 May 2016, Matthew Wahab wrote:
>
>> In some tests, there are unavoidable differences in precision when
>> calculating the actual and the expected results of an FP16 operation. A
>> new support function CHECK_FP_BIAS is used so that these tests can check
>> for an acceptable margin of error. In these tests, the tolerance is given
>> as the absolute integer difference between the bitvectors of the expected
>> and the actual results.
>
> As far as I can see, CHECK_FP_BIAS is only used in the following patch,
> but there is another bias test in vsqrth_f16_1.c in this patch.

This is my mistake: CHECK_FP_BIAS is used for the NEON tests and should have
gone into that patch. The VFP test can do a simpler check, so it doesn't need
the macro.
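
For reference, the kind of check CHECK_FP_BIAS does could be sketched roughly
as below; the function name, arguments and diagnostics are illustrative
rather than the exact code from the patch.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch, not the code from the patch: compare two FP16
   results by the absolute integer difference of their 16-bit patterns,
   accepting a difference of up to BIAS units.  */
static void
check_fp_bias (const char *test, uint16_t expected, uint16_t actual, int bias)
{
  int diff = (int) expected - (int) actual;

  if (diff < 0)
    diff = -diff;

  if (diff > bias)
    {
      fprintf (stderr, "%s: expected %#06x, got %#06x, diff %d > %d\n",
	       test, (unsigned) expected, (unsigned) actual, diff, bias);
      abort ();
    }
}

int
main (void)
{
  /* 0x3266 and 0x3267 are adjacent FP16 bit patterns near 0.2, so a
     tolerance of one unit treats them as matching.  */
  check_fp_bias ("vrecpe_f16 example", 0x3266, 0x3267, 1);
  return 0;
}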

> Could you clarify where the "unavoidable differences in precision" come
> from? Are the results of some of the new instructions not fully specified,
> only specified within a given precision?  (As far as I can tell the
> existing v8 instructions for reciprocal and reciprocal square root
> estimates do have fully defined results, despite being loosely described
> as estimates.)

The expected results in the new tests are represented as expressions whose
value is expected to be calculated at compile time. This makes the tests more
readable, but differences in precision between the compiler's calculation and
the hardware's mean that for vrecpe_f16, vrecps_f16, vrsqrts_f16 and
vsqrth_f16_1.c the expected and actual results are different.
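
To illustrate where the mismatch comes from, a check along the following
lines compares a compiler-folded expected value against the instruction's
estimate and can disagree even though both results are sensible; the values
and the strict comparison are illustrative, not taken from the testsuite.

#include <arm_neon.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch of the pattern in question, not a test from the
   patch: the expected value is a C expression the compiler folds at
   compile time, while the actual value comes from the VRECPE.F16
   estimate, which is defined by its own lookup algorithm.  Their FP16
   bit patterns need not match exactly.  */
int
main (void)
{
  float16x4_t in = vdup_n_f16 (3.0f);

  /* Expected: folded by the compiler in single precision, then
     narrowed to FP16.  */
  float16x4_t expected = vdup_n_f16 (1.0f / 3.0f);

  /* Actual: the hardware reciprocal estimate.  */
  float16x4_t actual = vrecpe_f16 (in);

  uint16_t exp_bits[4], act_bits[4];
  memcpy (exp_bits, &expected, sizeof (exp_bits));
  memcpy (act_bits, &actual, sizeof (act_bits));

  /* A strict equality check can fail here even though both values are
     reasonable approximations of 1/3.  */
  if (exp_bits[0] != act_bits[0])
    printf ("mismatch: expected %#06x, actual %#06x\n",
	    (unsigned) exp_bits[0], (unsigned) act_bits[0]);

  return 0;
}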

On reflection, it may be better to remove the CHECK_FP_BIAS macro and, for
the tests that needed it, to drop the compile-time calculation and just use
the expected hexadecimal value.
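
For example, dropping the compile-time expression in favour of a hard-coded
bit pattern could look something like the sketch below; vsqrth_f16 is used
with an input whose result is exact, purely for illustration, and the test in
the patch may be structured differently.

#include <arm_fp16.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
  /* The expected value is written directly as an FP16 bit pattern
     instead of as a compile-time expression: sqrt (0.25) == 0.5, which
     is 0x3800 in FP16.  */
  const uint16_t expected = 0x3800;

  float16_t actual = vsqrth_f16 (0.25f);

  uint16_t bits;
  memcpy (&bits, &actual, sizeof (bits));
  if (bits != expected)
    abort ();
  return 0;
}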

Other tests that depend on compile-time calculations involve relatively
simple arithmetic operations, and it's not clear whether they are susceptible
to the same rounding differences. My knowledge of FP arithmetic is limited
though, so I'll look into this.

Matthew
