I fully agree with @tico! The WIP examination of the MISRA-C standalone runtime
would be a very interesting topic to delve into.
---
Great, thanks for the reply @vinx13! For now we will try to avoid using
conv2d_transpose operators where possible. If that doesn't work out for any
reason, I will have to look into adding this operator to the quantizer.
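In case it comes to that, my current understanding is that the annotation side
would follow the same pattern as the existing `nn.conv2d` rewrite in
`python/tvm/relay/quantize/_annotate.py`. A rough, untested sketch (it leans on
TVM-internal helpers such as `register_annotate_function`,
`attach_simulated_quantize` and `QAnnotateKind`, and assumes `conv2d_transpose`
can be annotated the same way as `conv2d`):

```python
# Hypothetical sketch only: annotate nn.conv2d_transpose for quantization,
# modeled on the existing nn.conv2d rewrite in
# python/tvm/relay/quantize/_annotate.py. These helpers are TVM-internal
# and may change between releases.
from tvm.relay.quantize._annotate import (
    QAnnotateExpr,
    _get_expr_kind,
    attach_simulated_quantize,
    register_annotate_function,
)
from tvm.relay.quantize.quantize import QAnnotateKind, _forward_op, quantize_context


@register_annotate_function("nn.conv2d_transpose")
def conv2d_transpose_rewrite(ref_call, new_args, ctx):
    """Annotate data as INPUT, weight as WEIGHT, output as ACTIVATION."""
    if quantize_context().check_to_skip(ref_call):
        return None

    data_expr, data_kind = _get_expr_kind(new_args[0])
    weight_expr, weight_kind = _get_expr_kind(new_args[1])

    # Insert a simulated quantize on the data input unless it is already
    # quantized to the INPUT field.
    if data_kind is None or data_kind == QAnnotateKind.ACTIVATION:
        data_expr = attach_simulated_quantize(data_expr, QAnnotateKind.INPUT)

    # Weights are always quantized to the WEIGHT field.
    assert weight_kind is None
    weight_expr = attach_simulated_quantize(weight_expr, QAnnotateKind.WEIGHT)

    expr = _forward_op(ref_call, [data_expr, weight_expr])
    return QAnnotateExpr(expr, QAnnotateKind.ACTIVATION)
```

From what I can tell, the annotate rewrite is only part of the work; a matching
`FQRealizeRewrite` in `src/relay/quantize/realize.cc` and the partition rules
would presumably need updating as well.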
---
Hi all!
I'm using TVM for post-training quantization and noticed that, as of now,
**conv2d_transpose** operations **cannot be quantized** and fall back to
float32.
* Is there a limitation behind this, or is it simply a missing feature?
* If it's a missing feature, which parts of the code would need to be changed to add support?
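For reference, a minimal sketch of how the fallback shows up with the public
quantization API (the toy network, shapes and qconfig values below are
illustrative placeholders, not my actual model):

```python
import numpy as np
import tvm
from tvm import relay

# Toy model: one conv2d followed by one conv2d_transpose (8 -> 8 channels,
# so the kernel layout of the transpose does not matter here).
data = relay.var("data", shape=(1, 8, 16, 16), dtype="float32")
w1 = relay.var("w1", shape=(8, 8, 3, 3), dtype="float32")
w2 = relay.var("w2", shape=(8, 8, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, w1, kernel_size=(3, 3), padding=(1, 1))
out = relay.nn.conv2d_transpose(out, w2, kernel_size=(3, 3), padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function(relay.analysis.free_vars(out), out))

params = {
    "w1": tvm.nd.array(np.random.uniform(size=(8, 8, 3, 3)).astype("float32")),
    "w2": tvm.nd.array(np.random.uniform(size=(8, 8, 3, 3)).astype("float32")),
}

# Global-scale post-training quantization (no calibration dataset needed).
# skip_conv_layers=[] quantizes even the first conv layer so the contrast
# with the transpose is visible.
with relay.quantize.qconfig(calibrate_mode="global_scale",
                            global_scale=8.0,
                            skip_conv_layers=[]):
    qmod = relay.quantize.quantize(mod, params)

# In the printed module the conv2d runs on int8/int32 data, while the
# conv2d_transpose still operates on float32.
print(qmod)
```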