Hi @anijain2305, thanks for the reply. I should've made myself clearer. What I meant was: if the model (weights and biases) was quantized to uint8, does TVM have a way to convert the uint8 weights and biases to int8 weights and biases?
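For context, here is a minimal sketch (plain NumPy, not a TVM API) of the standard arithmetic I have in mind: under the usual asymmetric quantization scheme, shifting both the stored values and the zero point by 128 re-expresses uint8 weights as int8 without changing the real values they represent. The array and scale below are made-up illustrative values.

```python
import numpy as np

# Made-up uint8-quantized weights with an asymmetric zero point.
w_uint8 = np.array([0, 128, 255], dtype=np.uint8)
zp_uint8 = 128
scale = 0.05

# Standard uint8 -> int8 shift: subtract 128 from the values and
# from the zero point. The represented reals are unchanged because
#   real = scale * (q - zp) = scale * ((q - 128) - (zp - 128))
w_int8 = (w_uint8.astype(np.int16) - 128).astype(np.int8)
zp_int8 = zp_uint8 - 128

# Dequantized values are identical before and after the shift.
before = scale * (w_uint8.astype(np.float32) - zp_uint8)
after = scale * (w_int8.astype(np.float32) - zp_int8)
assert np.allclose(before, after)
```

Whether TVM applies (or can be asked to apply) this shift automatically when ingesting a pre-quantized model is exactly my question.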

I will certainly try what you suggested, thank you.





---
[Visit Topic](https://discuss.tvm.apache.org/t/support-for-pre-quantized-model-int8-uint8-conversion/8064/3) to respond.
