> We need to add `in_dtype` in the dequantize op as the calculations will be
> different, especially the range to use.
I guess the input tensor already carries that information?
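To illustrate why the input dtype matters here, a hypothetical sketch (not TVM's actual API) of an affine dequantize: the representable quantized range differs per dtype, so the clamp applied before dequantizing does too. The `QRANGE` table and `dequantize` helper are illustrative names, not part of the proposal.

```python
# Hypothetical sketch of affine dequantization, showing the
# dtype-dependent range: clamp to the dtype's representable
# interval, then map back to real values via scale/zero_point.
QRANGE = {
    "uint8": (0, 255),
    "int8": (-128, 127),
    "int32": (-(2**31), 2**31 - 1),
}

def dequantize(values, in_dtype, scale, zero_point):
    qmin, qmax = QRANGE[in_dtype]  # range depends on in_dtype
    return [scale * (min(max(q, qmin), qmax) - zero_point) for q in values]

# uint8 example: zero_point 128 maps the stored value 128 back to 0.0
print(dequantize([0, 128, 255], "uint8", 0.5, 128))  # -> [-64.0, 0.0, 63.5]
```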
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
htt
> Thanks. Let's lay down the high-level API design for some of the quantized
> operators. A large portion of this is coming from the following relevant
> discussions. Thanks to @jackwish, @FrozenGene and @jnorwood for sharing their
> experiences with quantization, and also @shoubhik for helping
I'm working on a PR (#3391) that separates the VM `Object` from the interpreter `Value`. After the PR, the VM will return `Object` directly to Python instead of converting it to an interpreter `Value`. I haven't dealt with `ClosureObject` yet, since it won't appear in the return value.
--
Perhaps this is a good time to update the CI infra to keep up with LLVM mainline; see the steps in https://docs.tvm.ai/contribute/pull_request.html#ci-environment
--
Moving the discussion here.
> The things to be improved are
* Document the behavior independently of arg_binder.
- Maps buffer[i][j][k] -> buffer[i][0][k] when the corresponding dimension's extent is 1 (i.e. its stride is 0)
* Debate the name (auto broadcast?), and enum vs. string as the type key
* Discuss how the behavior would affect topi im
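As a toy illustration of the index mapping above (not TVM's actual arg_binder code, assuming the extent-1 reading):

```python
# Toy sketch of the auto-broadcast index mapping: a dimension whose
# extent is 1 is always read at index 0, which is equivalent to
# giving that dimension stride 0.
def broadcast_index(indices, shape):
    return tuple(0 if extent == 1 else i for i, extent in zip(indices, shape))

# buffer of logical shape (4, 1, 3): accesses along the middle
# dimension collapse to index 0
print(broadcast_index((2, 5, 1), (4, 1, 3)))  # -> (2, 0, 1)
```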
We are trying to use TVM to generate operator definitions for MXNet.
The gap is that, although TVM compute/schedule can deal with symbolic
shapes, some compute definitions strongly rely on a fixed shape. For
example, broadcast ops:
```python
A = tvm.placeholder(shape=(tvm.var("a1"), tvm.var("a2")), name="A")
```
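To see why broadcast ops are the problem case: computing a broadcast output shape has to compare extents against 1, which only works when they are concrete numbers. With symbolic extents like `tvm.var("a1")` that comparison is unknown at compute-definition time. A plain-Python sketch (not TVM code):

```python
# Toy broadcast-shape computation: each comparison below needs a
# concrete extent. Replace the ints with symbolic variables and
# neither branch can be decided when the compute is defined.
def broadcast_shape(s1, s2):
    out = []
    for a, b in zip(s1, s2):
        if a == b or b == 1:
            out.append(a)
        elif a == 1:
            out.append(b)
        else:
            raise ValueError("incompatible extents %r vs %r" % (a, b))
    return tuple(out)

print(broadcast_shape((4, 1, 3), (1, 5, 3)))  # -> (4, 5, 3)
```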
I'm using the TensorFlow tflite quantized model,
mobilenet_v1_1.0_224_quant.tflite, from
https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
I viewed it with Netron, which shows no relu6 nodes. It also shows no fused
relu6 nodes in the node properties. So
> > I think you may not have fully understood my previous comment. One
> > question I want to ask: do your quantized models have conv + relu / relu6
> > like our model? If not, the range is obviously 0 ~ 255, no matter how many
> > models there are. Please see:
> > https://github.com/tensorflow/tensorflow/blo
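One reason no separate relu6 node is visible: a relu6 fused into the preceding conv shows up only as a clamp on the quantized output. A rough sketch of how such clamp bounds could be derived (the scale/zero-point values below are illustrative, not read from the model):

```python
# Sketch: derive the quantized clamp implied by a fused relu6.
# relu bounds the real value below by 0, relu6 bounds it above by 6;
# both are translated into the quantized domain. With the typical
# scale = 6/255 and zero_point = 0, the clamp is [0, 255], i.e. the
# whole uint8 range, so the fused relu6 is invisible in the graph.
def fused_relu6_clamp(scale, zero_point, qmin=0, qmax=255):
    lo = max(qmin, zero_point)                       # relu: real >= 0
    hi = min(qmax, zero_point + round(6.0 / scale))  # relu6: real <= 6
    return lo, hi

print(fused_relu6_clamp(6.0 / 255, 0))  # -> (0, 255)
```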