Thanks for hosting and the friendly time zone converter.
---
[Visit
Topic](https://discuss.tvm.ai/t/utvm-embedded-focus-online-meetup/6908/11) to
respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click
here](https://discuss.tvm.ai/
@anijain2305 , thanks for the review! About getting rid of the legalization, I
would not do that for now. It is in my backlog to go back to this issue and try
to retrieve the strategy from the legalization pass. This should give us more
optimization options. If that turns out not to be possible,
Hi @hjiang,
I'm working to deploy a pre-quantized ResNet network with VTA in which the
first conv layer supports int8 input/weights. I think it would be an
interesting feature even though most quantization approaches avoid quantizing the
first layer. Both ideas are valid but it would be interesti
Hi Kevin,
Thanks for your reply.
I am trying to adapt the model to tf1, e.g., converting 'NonMaxSuppressionV5' in
tf2 to 'NonMaxSuppressionV3', which is supported by the tensorflow frontend.
The computational graph .pb file seems fine now.
However, when running
mod, params = relay.frontend.from_ten
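The op-renaming step described above (rewriting tf2-only ops such as 'NonMaxSuppressionV5' to 'NonMaxSuppressionV3' so the frontend can parse the graph) could be sketched as follows. This is a hypothetical illustration over a plain dict-based node list, not the actual `tf.GraphDef` protobuf editing code; the `OP_REMAP` table and `remap_ops` helper are assumptions for the sketch:

```python
# Hypothetical sketch: rewrite op names that the TVM TensorFlow frontend
# does not support into the closest supported variant. In practice this
# would be applied to the `node.op` fields of a frozen tf.GraphDef.
# Caveat: NonMaxSuppressionV5 has extra inputs/outputs (e.g. soft-NMS
# sigma, selected scores), so a bare rename may also require adjusting
# the surrounding graph, as discussed in this thread.

OP_REMAP = {
    "NonMaxSuppressionV5": "NonMaxSuppressionV3",  # assumed mapping
    "NonMaxSuppressionV4": "NonMaxSuppressionV3",  # assumed mapping
}

def remap_ops(graph_nodes):
    """Return a copy of the node list with unsupported op names rewritten."""
    rewritten = []
    for node in graph_nodes:
        new_node = dict(node)  # shallow copy; leave the input untouched
        new_node["op"] = OP_REMAP.get(node["op"], node["op"])
        rewritten.append(new_node)
    return rewritten

nodes = [
    {"name": "boxes", "op": "Placeholder"},
    {"name": "nms", "op": "NonMaxSuppressionV5"},
]
print([n["op"] for n in remap_ops(nodes)])
# -> ['Placeholder', 'NonMaxSuppressionV3']
```

After a rewrite like this, the resulting `.pb` would be imported through the frontend as usual.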
Please join us to welcome @siju-samuel as a new committer of the TVM
community. He has been actively contributing to various frontends of TVM,
including TFLite, darknet and QNN.
- [Commits](https://github.com/apache/incubator-tvm/commits?author=siju-samuel)
- [Code
Review](https://github.co
Congratulations Siju!!! :+1:
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/pull/5817#issuecomment-644264092
Looks like some op names have changed across tf 1.x and 2.x.
---
[Visit
Topic](https://discuss.tvm.ai/t/tensorflow-frontend-support-for-tensor-frontend-new-operators/6971/4)
to respond.
Congratulations Siju !
https://github.com/apache/incubator-tvm/pull/5817#issuecomment-644281681
Merged #5817 into master.
https://github.com/apache/incubator-tvm/pull/5817#event-3445991608
Different operators:
[non_max_suppression/NonMaxSuppressionV3: NonMaxSuppressionV3] appears in tf1
and is supported by the current version of TVM. I looked into the source code
of the tensorflow frontend and there are two NMS operators, NonMaxSuppressionV2
and NonMaxSuppressionV3.
Look forward to seeing V5. :wink:
OK, to summarize, the actionable items include:
* convert default to fp32
* fix the float occurrences to use fp32
@t-vi thanks for bringing up the topic, perhaps we can reopen your PR about
fp32 default change?
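The two actionable items above amount to treating a bare "float" dtype string as an explicit "float32". A minimal sketch of that normalization, assuming a simple string-mapping helper (the name `normalize_dtype` is hypothetical and not a TVM API; TVM's actual handling lives in its dtype parsing):

```python
def normalize_dtype(dtype: str) -> str:
    """Map an ambiguous 'float' dtype string to an explicit 'float32'.

    Hypothetical helper illustrating the proposed fp32 default; any
    already-explicit dtype string passes through unchanged.
    """
    if dtype == "float":
        return "float32"  # proposed default: bare float means fp32
    return dtype

# Ambiguous 'float' resolves to fp32; explicit dtypes are untouched.
assert normalize_dtype("float") == "float32"
assert normalize_dtype("float64") == "float64"
```

The design point is that an explicit default avoids silent precision mismatches between frontends that read "float" differently.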
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-rel
Hi @acapone13,
Thanks for following up on this post, and nice to know you are interested in
VTA performance optimization. About the resnet18 pretrained model,
could I know which framework you used to generate
the model, and how much accuracy is lost after quantization?
Rega
This RFC outlines a high-level roadmap towards what we might consider a
standalone µTVM (Micro TVM). In saying "standalone," we are referring to a
cohesive set of features that will enable a few end-user goals, one of them
being standalone execution of optimized TVM models on-device.
In th