Hi all!

I'm using TVM for post-training quantization and noticed that, as of now, 
**conv2d_transpose** operations **cannot be quantized** and fall back to 
float32.

* Is there a technical limitation behind this, or is it simply a missing feature?
* If it's a missing feature, which parts of the code would I need to modify to 
add such support?

Perhaps the community experts could help clarify these questions? @vinx13 
@janimesh @ziheng @shoubhik, I would highly appreciate your response.

Thank you & Best regards,
Robert

---
[Visit Topic](https://discuss.tvm.ai/t/quantization-add-support-for-conv2d-transpose/6413/1) to respond.
