Hi,
I created a quantized PyTorch model. After compiling it with TVM, I ran inference, but the result did not match PyTorch's. The strange thing is that the mismatch only occurs sometimes.
My code:
```
import torch
from torch import nn
from torch.quantization import QuantStub, DeQuantStub, get_default_qconfig
```
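Since the snippet above got cut off, here is a minimal sketch of the kind of flow I am describing. The model definition, the qconfig choice ("fbgemm"), the input name "input", and the shapes are all illustrative placeholders, not my exact script:

```
import numpy as np
import torch
from torch import nn
from torch.quantization import QuantStub, DeQuantStub, get_default_qconfig, prepare, convert

import tvm
from tvm import relay
from tvm.contrib import graph_executor


class QuantModel(nn.Module):
    """Toy quantized model with an adaptive average pool, for illustration only."""

    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.pool = nn.AdaptiveAvgPool2d((2, 2))
        self.dequant = DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.pool(x)
        return self.dequant(x)


# Eager-mode post-training quantization.
model = QuantModel().eval()
model.qconfig = get_default_qconfig("fbgemm")
prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))            # calibration pass with random data
convert(model, inplace=True)

# Trace the quantized model and import it into TVM.
inp = torch.randn(1, 3, 32, 32)
traced = torch.jit.trace(model, inp).eval()
mod, params = relay.frontend.from_pytorch(traced, [("input", (1, 3, 32, 32))])

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

rt = graph_executor.GraphModule(lib["default"](tvm.cpu()))
rt.set_input("input", inp.numpy())
rt.run()

tvm_out = rt.get_output(0).numpy()
torch_out = model(inp).detach().numpy()
print(np.abs(tvm_out - torch_out).max())    # sometimes 0, sometimes exactly one scale
```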
I analyzed the output values. Whenever the results disagree, the TVM output differs from the PyTorch output by exactly one quantization step (one scale), so I suspect that PyTorch's adaptive_avg_pool2d rounds the averaged value to the nearest integer, while TVM simply truncates the fractional part. When rounding comes into play, the two outputs therefore end up one scale apart.
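To make the "off by one scale" observation concrete, here is a toy calculation. The numbers and the scale are made up, and the rounding behaviour attributed to each framework is my suspicion, not something I have confirmed in either code base:

```
import numpy as np

scale = 0.05                                       # hypothetical quantization scale
window = np.array([10, 11, 11, 11], dtype=np.int32)  # one pooling window in the int8 domain

avg = window.sum() / window.size                   # 10.75 before re-quantizing
rounded = int(np.round(avg))                       # 11 -> round-to-nearest (what I think PyTorch does)
truncated = int(avg)                               # 10 -> truncation (what I think TVM does)

print((rounded - truncated) * scale)               # 0.05, i.e. exactly one scale step
```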