Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-06-18 Thread 黎明灰烬
> We need to add `in_dtype` in the dequantize op, as the calculations will be different, especially the range to use.

Guess the input tensor has such information already?
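For context, a minimal NumPy sketch (an illustration, not the proposed Relay op) of why the input dtype matters to dequantize: the valid integer range comes from the dtype, which is exactly the information the input tensor's type already carries.

```python
import numpy as np

def dequantize(q, scale, zero_point, in_dtype="uint8"):
    """Illustrative dequantize: real = scale * (q - zero_point).

    The valid integer range depends on in_dtype:
    uint8 -> [0, 255], int8 -> [-128, 127].
    """
    info = np.iinfo(np.dtype(in_dtype))
    q = np.clip(np.asarray(q), info.min, info.max).astype(np.int32)
    return scale * (q - zero_point)

print(dequantize([0, 128, 255], scale=0.5, zero_point=128))  # [-64., 0., 63.5]
```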

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-06-18 Thread shoubhik
> Thanks. Let's lay down the high-level API design for some of the quantized operators. A large portion of this is coming from the following relevant discussions. Thanks to @jackwish, @FrozenGene and @jnorwood for sharing their experiences with quantization, and also @shoubhik for helping …
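The design itself is cut off in this digest. As a rough illustration of the shape such an API takes, here is a hypothetical Python signature for a quantized conv; all names (`quantized_conv2d`, the `*_zero_point` and `*_scale` parameters) are assumptions for this sketch, not the final Relay API.

```python
# Hypothetical sketch in the spirit of the RFC discussion, not real TVM API.
def quantized_conv2d(data, kernel,
                     input_zero_point, kernel_zero_point,
                     input_scale, kernel_scale,
                     out_dtype="int32", **conv2d_attrs):
    """Integer conv2d accumulating in int32.

    The scales and zero points describe how the int8/uint8 tensors map to
    real values; a later requantize step rescales the int32 accumulator to
    the output's quantized type.
    """
    raise NotImplementedError("signature sketch only")
```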

Re: [dmlc/tvm] [Relay][RFC] VM Object and Interpreter value (#3209)

2019-06-18 Thread Haichen Shen
I'm working on a PR that separates VM Object from Interpreter value: #3391. After the PR, the VM will directly return an Object to Python instead of converting it to an Interpreter value. I haven't dealt with `ClosureObject` yet, since it won't appear in the return value.

Re: [dmlc/tvm] [RFC] Add AVX512VNNI support for TVM (#3388)

2019-06-18 Thread Tianqi Chen
Perhaps a good time to update the CI infra to keep up with LLVM mainline; see the steps in https://docs.tvm.ai/contribute/pull_request.html#ci-environment

Re: [dmlc/tvm] [RFC][Codegen] Broadcast ops with symbolic shape (#3390)

2019-06-18 Thread Yizhi Liu
Moving the discussion here.

> The things to be improved are:
> * Document the behavior independent of arg_binder.
>   - Maps buffer[i][j][k] -> buffer[i][0][k] if that dimension's shape equals 1 (see the sketch below).
> * Debate on the name (auto broadcast?), and enum vs. string as the type key.
> * Discuss how the behavior would affect topi implementations …
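A minimal NumPy sketch of the mapping in the first item (an assumed illustration of the stride-0 trick, not the arg_binder code): a dimension whose shape is 1 can serve every index j by giving it stride 0, so buffer[i][j][k] reads buffer[i][0][k].

```python
import numpy as np

a = np.arange(6, dtype="float32").reshape(3, 1, 2)   # middle dimension has shape 1

# View the (3, 1, 2) buffer as (3, 4, 2) by setting the middle stride to 0:
# every index j now maps to the single stored row, i.e. a[i][0][k].
view = np.lib.stride_tricks.as_strided(
    a, shape=(3, 4, 2), strides=(a.strides[0], 0, a.strides[2]))

assert (view[:, 0, :] == view[:, 3, :]).all()
```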

[dmlc/tvm] [RFC][Codegen] Broadcast ops with symbolic shape (#3390)

2019-06-18 Thread Yizhi Liu
We are trying to use TVM to generate operator definitions for MXNet. The gap is that, although TVM compute/schedule can deal with symbolic shapes, some compute definitions strongly rely on a fixed shape. For example, broadcast ops:

```python
A = tvm.placeholder(shape=(tvm.var("a1"), tvm.var("a2")))
```
…
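The example is cut off in this digest; a minimal sketch of the gap it describes, with the second operand's shape assumed for illustration: with free symbolic vars, a broadcast compute cannot decide at definition time which extents equal 1.

```python
import tvm

# Symbolic shapes, in the 0.x-era API used by the thread.
a1, a2 = tvm.var("a1"), tvm.var("a2")
b1 = tvm.var("b1")

A = tvm.placeholder((a1, a2), name="A")
B = tvm.placeholder((b1, a2), name="B")

# To broadcast A + B along axis 0, the compute definition has to know whether
# b1 == 1 or b1 == a1. That is undecidable for free vars, which is why current
# broadcast compute definitions rely on fixed shapes.
```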

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-06-18 Thread ds-jnorwood
I'm using the TensorFlow tflite quantized model mobilenet_v1_1.0_224_quant.tflite, from `https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md`. I view it with Netron, which shows no relu6 nodes; it also shows no fused relu6 nodes in the node properties. So …
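A plausible reading discussed in the thread: relu6 is folded into the output quantization parameters, so it leaves no node behind. A small NumPy sketch of the idea, with the scale and zero point assumed for illustration: with scale = 6/255 and zero_point = 0, the ordinary saturation to uint8 already implements relu6.

```python
import numpy as np

scale, zero_point = 6.0 / 255.0, 0                    # assumed output quant params
real = np.array([-1.0, 0.0, 3.0, 6.0, 9.0], dtype="float32")

# Quantize with the usual saturation to the uint8 range.
q = np.clip(np.round(real / scale) + zero_point, 0, 255).astype("uint8")

# Dequantizing shows the values clamped to [0, 6] -- exactly relu6,
# with no explicit relu6 node needed.
print(scale * (q.astype("float32") - zero_point))     # ~[0, 0, 3, 6, 6]
```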

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-06-18 Thread Zhao Wu
> > I think you may not have fully understood my previous comment. One question I want to ask: do your quantized models have conv + relu / relu6 like our model? If not, the range is obviously 0 ~ 255, no matter how many models there are. Please see: https://github.com/tensorflow/tensorflow/blo …
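To make the range argument concrete, a small sketch (the helper name and activation convention are assumptions for illustration) of how the uint8 clamp bounds follow from the fused activation: with no activation the full 0 ~ 255 range is used, while relu/relu6 tighten the bounds via the output scale and zero point.

```python
def activation_range(scale, zero_point, activation=None, qmin=0, qmax=255):
    """Illustrative clamp bounds for a uint8 output under a fused activation."""
    if activation is None:
        return qmin, qmax                              # no activation: 0 ~ 255
    if activation == "relu":
        return max(qmin, zero_point), qmax             # clamp at real 0
    if activation == "relu6":
        return (max(qmin, zero_point),
                min(qmax, zero_point + int(round(6.0 / scale))))
    raise ValueError(activation)

# With scale = 6/255 and zero_point = 0, relu6 yields (0, 255): the clamp
# coincides with the full uint8 range, matching the folded-relu6 observation.
print(activation_range(6.0 / 255.0, 0, "relu6"))       # (0, 255)
```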