I produce a mathematical modelling library on several platforms, including iOS, 
macOS and Android, all of which use Clang. Where the hardware can generate 
floating-point traps, I prefer to run testing with traps for Invalid Operation, 
Divide-by-Zero and Overflow turned on, since that finds me problem sites more 
quickly than working backwards from "this test case failed."

However, I had a problem with Apple Clang 8.x, which I believe was LLVM 3.9, 
targeting x86-64, in that the optimiser was assuming that floating-point traps 
were turned off. This showed up, for example, as floating-point divides being 
hoisted above the divisor tests that were meant to guard them.

After a long support case, Apple gave me some Clang command-line options, 
passed through to LLVM, that suppressed the problem:

-mllvm -speculate-one-expensive-inst=false
-mllvm -bonus-inst-threshold=0

I appreciate that this costs some performance, and I can accept that. These 
options worked fine for Apple Clang 9.x, whose various versions seem to have 
been based on LLVM 4.x and 5.x.
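For anyone searching the archives later, the options are forwarded through the 
clang driver like so (model.c is a stand-in for my sources, not a real file):

```
clang -O2 -mllvm -speculate-one-expensive-inst=false \
          -mllvm -bonus-inst-threshold=0 -c model.c
```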

Now I've come to Apple Clang 10.0, which seems to be based on LLVM 6.0.1, and I 
have lots of floating-point traps again in optimised x86-64 code. It seems 
possible that I need some further LLVM options: does this seem plausible?

I'm not familiar with the LLVM codebase, and while I can find the source files 
that list the options I can use with -mllvm, I'd be guessing at which options 
are worth trying. Can anyone make suggestions?

Thanks very much,

--
John Dallman

-----------------
Siemens Industry Software Limited is a limited company registered in England 
and Wales.
Registered number: 3476850.
Registered office: Faraday House, Sir William Siemens Square, Frimley, Surrey, 
GU16 8QD.