Hi, for context: other DL frameworks also allow floating-point floor mod, e.g. TensorFlow's `tf.math.floormod`:
https://www.tensorflow.org/api_docs/python/tf/math/floormod

For TVM, below is where float `floormod` is lowered to plain arithmetic as `a - floor(a/b) * b`:
https://github.com/apache/tvm/blob/7396be5645fa59cb10ae8ee14b718dbf7737390b/src/tir/transforms/lower_intrin.cc#L184-L186
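
As a quick illustration (a minimal sketch in plain Python, not TVM code), here is that lowering rule next to C-style truncated `fmod`; the two differ when the operands have opposite signs, which is why the lowering uses `floor` rather than truncation:

```
import math

def floormod(a, b):
    # The lowering rule from lower_intrin.cc: a - floor(a/b) * b.
    # The result takes the sign of b (floor-division semantics).
    return a - math.floor(a / b) * b

print(floormod(-5.0, 3.0))   # 1.0
print(math.fmod(-5.0, 3.0))  # -2.0 (truncated remainder takes the sign of a)
```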

Since the case in the test script computes `floor_mod(e^c1, c2)`, I think it would be great to check which sub-step actually causes the difference (`exp`, `div`, or `floor`). It is known that CUDA arithmetic does not have exactly the same precision as the C/C++ standard math libraries; see
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#mathematical-functions-appendix.
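
One way to do that (a sketch; `c1` and `c2` are taken from the raw-Python check below) is to print every intermediate on the CPU with full bit-level precision via `float.hex()`, and diff against the same intermediates `printf`'d from inside the CUDA kernel:

```
import math

c1, c2 = 415.748715, 787.644532   # constants from the test case below

a = math.exp(c1)                  # sub-step 1: exp
q = a / c2                        # sub-step 2: div
fq = float(math.floor(q))         # sub-step 3: floor
r = a - fq * c2                   # full floor_mod
for name, v in [("exp", a), ("div", q), ("floor", fq), ("floor_mod", r)]:
    print(f"{name:10s} {v!r}  {v.hex()}")  # hex shows the exact bit pattern
```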

In my environment, the CPU result matches what I get with raw Python:
```
>>> import math
>>> def f(a,b): return a - math.floor(a/b) * b
...
>>> f(math.exp(415.748715), 787.644532)
4.606887725612233e+164
```
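
Note that the result is far larger than `b`, which a true mathematical remainder could never be. That is expected here: at this magnitude the nearest double to `a/b` is already an integer, so `floor()` is a no-op and `a - floor(a/b) * b` just measures rounding error, which is on the order of one ulp of `a`. A quick sanity check (exact digits may vary with the platform's libm):

```
import math

a = math.exp(415.748715)
b = 787.644532

print(math.ulp(a))                # ~4.6e+164 (math.ulp needs Python >= 3.9)
print(a - math.floor(a / b) * b)  # same order of magnitude as above
```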
