Hi all!
I'm using TVM for post-training quantization and noticed that, as of now,
**conv2d_transpose** operations **cannot be quantized** and fall back to
float32.
* Is there a limitation behind this, or is it simply a missing feature?
* If it's a missing feature, which parts of the code would need to be modified to add support?
---
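For context, post-training quantization typically maps each float32 tensor to int8 through a per-tensor scale and zero point; an operator that cannot be quantized simply stays in float32 instead of going through a transform of this kind. A minimal, purely illustrative sketch of the affine scheme (none of these names are TVM APIs):

```python
# Illustrative per-tensor affine (asymmetric) int8 quantization.
# These helpers are a sketch of the general technique, not TVM code.

def quantize_params(xs, num_bits=8):
    """Pick a scale and zero point covering the observed range (incl. 0)."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(min(xs), 0.0), max(max(xs), 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, num_bits=8):
    """Map float values to clamped integers: q = round(x / scale + zp)."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return [max(qmin, min(qmax, round(x / scale + zero_point))) for x in xs]

def dequantize(qs, scale, zero_point):
    """Recover approximate floats: x ≈ (q - zp) * scale."""
    return [(q - zero_point) * scale for q in qs]

xs = [-1.5, 0.0, 0.75, 2.0]
s, z = quantize_params(xs)
qs = quantize(xs, s, z)
approx = dequantize(qs, s, z)
```

The reconstruction error of each value is bounded by about one scale step, which is the accuracy trade-off quantization makes in exchange for int8 arithmetic.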
Adding CoreML codegen with the BYOC feature enables us to offload subgraphs to
Apple’s Neural Engine on iOS devices. There are several approaches to building a
CoreML model in TVM.
- A0: Build with coremltools
I think this is the most intuitive way to construct CoreML models.
coremltools pr
---
So we will be adding support for ONNX codegen only.
I will work on adding a codegen for ONNX, then build an example ONNX runtime to
demonstrate end-to-end functionality. I will also improve operator coverage for
ONNX.
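For a sense of what "operator coverage" means here: a Relay-to-ONNX codegen is essentially a registry mapping Relay operator names to converter functions that emit ONNX nodes, and each uncovered operator is a missing registry entry. A toy sketch under that assumption (the names and the dict-based node representation are hypothetical, not the RFC's actual code):

```python
# Hypothetical sketch of a Relay -> ONNX converter registry.
# Relay op names ("nn.relu", "nn.conv2d") and ONNX op types ("Relu",
# "Conv") are real, but the registry and node format are illustrative.

RELAY_TO_ONNX = {}

def register(relay_op):
    """Decorator that records a converter for a Relay operator name."""
    def wrap(fn):
        RELAY_TO_ONNX[relay_op] = fn
        return fn
    return wrap

@register("nn.relu")
def convert_relu(inputs, attrs):
    return {"op_type": "Relu", "inputs": inputs, "attrs": {}}

@register("nn.conv2d")
def convert_conv2d(inputs, attrs):
    # ONNX Conv carries strides/pads/kernel_shape as node attributes.
    return {"op_type": "Conv", "inputs": inputs,
            "attrs": {"strides": attrs.get("strides", [1, 1])}}

def convert(relay_op, inputs, attrs):
    """Dispatch to the registered converter; unsupported ops raise."""
    if relay_op not in RELAY_TO_ONNX:
        raise NotImplementedError(f"no ONNX converter for {relay_op}")
    return RELAY_TO_ONNX[relay_op](inputs, attrs)

node = convert("nn.relu", ["x"], {})
```

Improving coverage then amounts to adding one converter per remaining Relay operator, which is also why landing a version with partial coverage and inviting contributions works well for this kind of codegen.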
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101) to respond.
Given that other folks are interested in the topic, e.g. @smallcoscat, perhaps
it makes sense to land a version with reasonable coverage, then invite others
to contribute and collaborate.
---
Sure. That makes sense.
---