[TVM Discuss] [Development] Missing tflite operators

2019-07-01 Thread Ramana Radhakrishnan via TVM Discuss
Hi Alexander, thanks for your response. Ah, I just saw the support for REDUCE_MAX; let me investigate why this is still failing for us with an "operator unsupported" error. Sorry, no, our models aren't open sourced. Would you know of any tools, like creduce, to create smaller models that could be used as test…
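(One way to build a minimal single-operator test model is to convert a tiny graph with the TensorFlow 1.x TFLite converter; the shapes and names below are illustrative assumptions, not anything from this thread:)

~~~python
import tensorflow as tf  # TF 1.x API, matching the 2019 timeframe

# Build a graph containing only the operator under test (REDUCE_MAX).
inp = tf.placeholder(tf.float32, shape=[1, 8, 8, 3], name="input")
out = tf.reduce_max(inp, axis=[1, 2], keepdims=True, name="output")

with tf.Session() as sess:
    converter = tf.lite.TFLiteConverter.from_session(sess, [inp], [out])
    with open("reduce_max.tflite", "wb") as f:
        f.write(converter.convert())
~~~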

[TVM Discuss] [Development] [ONNX] Missing and outdated operators

2019-07-01 Thread Loïc Cordone via TVM Discuss
Hello everyone, I'm trying to compile the official [ONNX SSD model](https://github.com/onnx/models/tree/master/ssd), but I get missing-operator errors for NonMaxSuppression and TopK (even though they exist in Relay). I tried adding them in tvm/python/tvm/relay/frontend/onnx.py, but now I get an error…
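(For context, a simplified sketch of the kind of converter that file expects; the real frontend routes converters through an OnnxOpConverter class and a convert map, so the function name and signature below are illustrative assumptions only:)

~~~python
from tvm import relay

def _topk(inputs, attr):
    # Illustrative converter: map ONNX TopK onto relay.topk, which
    # returns (values, indices) just like the ONNX op does.
    k = int(attr.get("k", 1))
    axis = int(attr.get("axis", -1))
    return relay.topk(inputs[0], k=k, axis=axis, ret_type="both")
~~~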

[DISCUSS][LEGAL] Licensing Question from Qualcomm

2019-07-01 Thread Tianqi Chen
Dear Legal: I am writing this email to ask a question about the licensing situation in the Apache TVM (incubating) community. Background: Qualcomm wants to contribute to the TVM repo. However, because the TVM repo's subfolder vta contains software declarations about an open-source accelerator design (which t…

Re: [dmlc/tvm] [Relay][RFC] Garbage Collection (#3423)

2019-07-01 Thread Jared Roesch
I talked with Zach DeVito from the PyTorch team for a while about refcounting; there are quite a few benefits to using reference counting. We should probably just use weak refs. Solutions from Counting Immutable Beans (a recent paper by my MSR collaborator, where they do much better than GC'd languages…
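(A minimal Python sketch of the weak-reference idea for the usual parent/child back-edge case; the class and field names are invented for illustration:)

~~~python
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []   # strong refs: a node keeps its children alive
        self.parent = None   # weak ref: the back-edge must not form a cycle

root = Node("root")
child = Node("child")
root.children.append(child)
child.parent = weakref.ref(root)  # weak back-edge breaks the cycle

del root
# With the weak back-edge, the root is reclaimed by refcounting alone;
# a strong parent pointer would have made a cycle needing a tracing GC.
assert child.parent() is None
~~~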

Re: [DISCUSS][LEGAL] Licensing Question from Qualcomm

2019-07-01 Thread Hen
On Mon, Jul 1, 2019 at 10:08 AM Tianqi Chen wrote:
> Dear Legal:
>
> I am writing this email to ask a question about the licensing situation in the Apache TVM (incubating) community.
>
> Background: Qualcomm wants to contribute to the TVM repo.
>
> However, because TVM repo's subfolder vta contains…

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-01 Thread Tianqi Chen
OK, it seems we are converging on qnn. Perhaps we could propose the list of op names. -- View it on GitHub: https://github.com/dmlc/tvm/issues/2351#issuecomment-507454173

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-01 Thread Animesh Jain
Finally, we are starting to converge :) I am proposing them on the basis of the ResNet network for now: `relay.op.qnn.conv2d`, `relay.op.qnn.dense`, `relay.op.qnn.relu`, `relay.op.qnn.max_pool2d`, `relay.op.qnn.avg_pool2d`, `relay.op.qnn.concat` (used in Inception), `relay.op.qnn.quantize`, `relay.op.qnn.d…
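(For reference, the affine quantize/dequantize semantics the last two ops would presumably carry, sketched in numpy following the usual TFLite convention; this is not a proposed Relay API:)

~~~python
import numpy as np

def quantize(x, scale, zero_point, dtype=np.uint8):
    # real = scale * (q - zero_point)  =>  q = round(x / scale) + zero_point
    info = np.iinfo(dtype)
    q = np.round(x / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)
~~~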

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-01 Thread Animesh Jain
All of the above `qnn` ops will be lowered to existing Relay primitive ops by a Relay pass (for example, using the ForwardRewrite infrastructure). For example, `relay.op.qnn.conv2d` can be lowered to ~~~ fn (%quantized_data: Tensor[(2, 1, 2, 4), uint8], %weight: Tensor[(3, 1, 2, 2), uint8]) -> Tensor[(2…
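(A hypothetical sketch of such a rewrite using the Relay builder API, following the standard decomposition real = scale * (q - zero_point); the helper name and attribute handling are assumptions, not the proposal's actual code:)

~~~python
from tvm import relay

def lower_qnn_conv2d(q_data, q_weight, data_zp, weight_zp, **conv_attrs):
    # Shift both operands by their zero points in int32, then reuse the
    # existing relay.nn.conv2d; scale handling would follow in a
    # separate requantize step.
    data = relay.subtract(relay.cast(q_data, "int32"),
                          relay.const(data_zp, "int32"))
    weight = relay.subtract(relay.cast(q_weight, "int32"),
                            relay.const(weight_zp, "int32"))
    return relay.nn.conv2d(data, weight, out_dtype="int32", **conv_attrs)
~~~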

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-01 Thread 黎明灰烬
> I have yet to understand what needs to be done with softmax.

Maybe compute softmax in float, since it seems we are not expecting everything in integer (just like your conv2d lowering proposal)?
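(A numpy sketch of that float fallback, dequantize then softmax then requantize; the scale and zero-point parameters are illustrative, not anything specified in the thread:)

~~~python
import numpy as np

def quantized_softmax(q, in_scale, in_zp, out_scale, out_zp):
    # Dequantize to float, run an ordinary (numerically stable) softmax,
    # then requantize the result back to uint8.
    x = in_scale * (q.astype(np.float32) - in_zp)
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    s = e / e.sum(axis=-1, keepdims=True)
    out = np.round(s / out_scale) + out_zp
    return np.clip(out, 0, 255).astype(np.uint8)
~~~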

[TVM Discuss] [RFC] Introducing Hexagon backend

2019-07-01 Thread Lan Tn via TVM Discuss
My current problem is that I don't see any special x86 or ARM support in the runtime's code. --- [Visit Topic](https://discuss.tvm.ai/t/introducing-hexagon-backend/2421/17) to respond.