Hi Alexander,
Thanks for your response. Ah, I just saw the support for REDUCE_MAX. Let me
investigate why this is still failing for us with an unsupported-operator error.
Sorry, no, our models aren't open-sourced. Would you know of any tools like
creduce that could create smaller models to be used as test cases?
Hello everyone,
I'm trying to compile the official [ONNX SSD
model](https://github.com/onnx/models/tree/master/ssd), but I get missing-operator
errors for NonMaxSuppression and TopK (even though they exist in Relay).
I tried to add them in tvm/python/tvm/relay/frontend/onnx.py, but now I get an
error
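For what it's worth, the conversion itself is small; below is a rough, hedged sketch of what a TopK mapping would compute. The real frontend wires converters through its own machinery in onnx.py (which should be checked for your TVM version), and NonMaxSuppression would need its own mapping onto relay.vision.non_max_suppression; only relay.topk below is a known API.
~~~
from tvm import relay

def convert_onnx_topk_v1(data, attr):
    """Sketch: map an ONNX TopK-1 node onto relay.topk.

    data : relay.Expr -- the already-converted input tensor
    attr : dict       -- the ONNX node's attributes
    """
    # TopK-1 carries k and axis as attributes; opset 10+ passes k as a
    # second input tensor instead and would need separate handling.
    return relay.topk(data,
                      k=int(attr["k"]),
                      axis=int(attr.get("axis", -1)),
                      ret_type="both")  # (values, indices), matching ONNX's two outputs
~~~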
Dear Legal:
I am writing this email to ask a question about the licensing situation in the
Apache TVM (incubating) community.
Background: Qualcomm wants to contribute to the TVM repo.
However, because the TVM repo's subfolder vta contains software declarations
about open-source accelerator design (which t
I talked with Zach DeVito from the PyTorch team for a while about reference
counting; there are quite a few benefits to using it. We should probably
just use weak refs, plus solutions from Counting Immutable Beans (a recent paper by
my MSR collaborator, where they do much better than GC languages
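As a side note, here is a tiny illustration (plain Python, nothing TVM-specific) of the weak-ref idea: under pure reference counting, holding back-edges as weak references keeps parent/child cycles from keeping themselves alive.
~~~
import weakref

class Node:
    """Toy IR node: children are owned (strong refs); the parent back-edge
    is weak, so a parent/child pair forms no strong cycle and plain
    reference counting can reclaim it."""
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        # Keep only a weak reference back to the parent.
        self._parent = weakref.ref(parent) if parent is not None else None

    @property
    def parent(self):
        # Dereferencing the weak ref yields None once the parent is freed.
        return self._parent() if self._parent is not None else None

root = Node("root")
child = Node("child", parent=root)
root.children.append(child)
print(child.parent.name)  # "root" while the parent is alive
del root                  # drop the only strong reference to the parent
print(child.parent)       # None: the weak back-edge did not keep it alive
~~~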
OK, it seems we are converging on qnn. Perhaps we could propose the list of op
names.
Finally, we are starting to converge :)
I am proposing them on the basis of the ResNet network for now.
`relay.op.qnn.conv2d`
`relay.op.qnn.dense`
`relay.op.qnn.relu`
`relay.op.qnn.max_pool2d`
`relay.op.qnn.avg_pool2d`
`relay.op.qnn.concat` (used in Inception)
`relay.op.qnn.quantize`
`relay.op.qnn.d
All of the above `qnn` ops will be lowered to existing Relay primitive ops by
some Relay pass (for example, using the ForwardRewrite infra). For instance,
`relay.op.qnn.conv2d` can be lowered to
~~~
fn (%quantized_data: Tensor[(2, 1, 2, 4), uint8], %weight: Tensor[(3, 1, 2, 2), uint8]) -> Tensor[(2
~~~
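The quoted snippet above is cut off; as a rough, hypothetical sketch of the kind of rewrite such a pass could perform for a conv2d-like qnn op (the function name is invented and the scale/requantize handling is left out, so this is not the exact lowering being proposed):
~~~
from tvm import relay

def lower_qnn_conv2d_sketch(data, weight, input_zero_point, kernel_zero_point,
                            **conv2d_attrs):
    """Hypothetical lowering of a qnn.conv2d-like op to stock Relay ops.

    data, weight   : uint8 relay.Expr
    *_zero_point   : Python ints (per-tensor zero points)
    conv2d_attrs   : whatever nn.conv2d attributes the original op carried
    """
    # Widen before subtracting zero points so uint8 values cannot underflow.
    data = relay.subtract(relay.cast(data, "int16"),
                          relay.const(input_zero_point, "int16"))
    weight = relay.subtract(relay.cast(weight, "int16"),
                            relay.const(kernel_zero_point, "int16"))
    # Reuse the ordinary conv2d primitive with an int32 accumulator.
    return relay.nn.conv2d(data, weight, out_dtype="int32", **conv2d_attrs)
~~~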
> I have yet to understand what needs to be done with softmax.
Maybe compute softmax in float, since it seems we are not expecting everything
to stay in integer (just like your conv2d lowering proposal)?
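Concretely, the float-softmax option would amount to something like this sketch (the scale and zero-point choices are made up, not part of the proposal): dequantize the uint8 logits, run the ordinary float softmax, then quantize the result back.
~~~
import numpy as np

def softmax_in_float(q_x, scale, zero_point,
                     out_scale=1.0 / 256, out_zero_point=0):
    """Sketch: run softmax in float inside an otherwise-quantized graph.

    q_x                       : uint8 array of quantized logits
    scale, zero_point         : quantization params of the input
    out_scale, out_zero_point : made-up output params; softmax outputs lie in
                                [0, 1), so 1/256 with zero point 0 is a common pick.
    """
    x = scale * (q_x.astype(np.float32) - zero_point)   # dequantize
    e = np.exp(x - x.max(axis=-1, keepdims=True))       # numerically stable softmax
    p = e / e.sum(axis=-1, keepdims=True)
    q_out = np.round(p / out_scale) + out_zero_point    # requantize
    return np.clip(q_out, 0, 255).astype(np.uint8)
~~~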
My current problem is that I don't see any x86- or ARM-specific support in the
runtime's code.