I have found the solution in here:
https://github.com/google/mediapipe/issues/245
---
[Visit Topic](https://discuss.tvm.ai/t/missing-tflite-operators/3150/22) to
respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click
here](https:/
@apivovarov Have you successfully compiled the palm_detect.tflite model? I am
trying the same thing and hit the `Custom operators are currently not
supported` error. I would really appreciate it if you could help.
---
[Visit Topic](https://discuss.tvm.ai/t/missing-tflite-operators/3150/21) to
respond.
Yes, that is what I mean: bringing a mechanism to Ansor so that we can allow
users to take full or partial control of the search space when necessary.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/11) to respond.
We do support generating OpenCL, so we can run on Mali GPUs. However, we did
not test on a Mali GPU when we completed Ansor. There are some differences we
can see compared with an Nvidia GPU; for example, on a Mali GPU we should not
use `cache_read("shared")` because a Mali GPU does not have separate shared memory.
IMO, AutoTVM plus the schedule template system represents a methodology by
which developers can create and fully control their own kernel optimizations,
which is functionally disjoint from Ansor. If deprecating AutoTVM means we will
not discard any core functionalities but just unify them under a larger p
While using Ansor to deprecate AutoTVM is a strong statement, I think it is a
goal we should strive to achieve. I do not think it will replace the op
strategy, though, since we need the strategy for graph-level selection and
dispatching.
In particular, I would encourage us to think toward that
I think framing Ansor as AutoTVM v2.0 is somewhat misleading, as Ansor takes a
totally different approach from AutoTVM. Also, I feel that planning to have
Ansor deprecate AutoTVM is too strong a statement. I would expect Ansor and
AutoTVM (along with the existing schedules in TOPI) to co-exist
Thanks for this RFC, it looks awesome! I've had a quick read through the paper
but I think it will take me some time to understand the details. Just a few
initial questions:
* Do you see this replacing most or all of the scheduling that's currently in
TOPI eventually?
* Will there be a way t
Also looking forward to seeing performance on quantized models and comparison
against TFLite, FBGEMM and MKLDNN.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/5) to respond.
Also, I think a benchmark covering more models on more platforms would be
necessary if we want to replace a major part of the system. In addition, we can
probably consider different codegen methods in TVM as baselines. One example
is that we currently use TVM+MKLDNN for BERT on x86 CPU since x8
Thanks @merrymercy for this work! I have several questions regarding this
plan:
1. The Ansor paper gives relative performance numbers comparing the current
AutoTVM and Ansor; it would be great to also have some benchmark data in terms
of absolute latency. For example, for resnet50_v1 we can achieve 2.2 ms on
Let's do it in this thread.
I am working on the target id registry, but was curious about people's
opinions on one naming question: "add_attr_option" vs "add_config_option".
In the RFC, to configure the schema of a target id, we allow the syntax
below:
```
TVM_REGISTER_TARGET_ID("llvm")
.add_a
```
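For illustration only, here is a toy Python model of what such a chained schema registration could enable. This is not TVM's actual implementation; `TargetId`, `register`, and `validate` below are hypothetical stand-ins for the C++ registry in the RFC.

```python
class TargetId:
    """Toy model of the target-id schema idea (hypothetical, not TVM's real API)."""

    _registry = {}

    def __init__(self, name):
        self.name = name
        self.attr_options = {}  # option key -> expected Python type

    @classmethod
    def register(cls, name):
        target_id = cls(name)
        cls._registry[name] = target_id
        return target_id

    def add_attr_option(self, key, type_):
        # Record the schema entry and return self so calls can be chained,
        # mirroring the C++ builder syntax in the RFC.
        self.attr_options[key] = type_
        return self

    def validate(self, config):
        # Check a user-supplied target config against the registered schema.
        for key, value in config.items():
            if key not in self.attr_options:
                raise ValueError(f"unknown option '{key}' for target '{self.name}'")
            if not isinstance(value, self.attr_options[key]):
                raise TypeError(f"option '{key}' expects {self.attr_options[key].__name__}")


# Chained registration, analogous to TVM_REGISTER_TARGET_ID("llvm").add_attr_option(...)
llvm = TargetId.register("llvm") \
    .add_attr_option("mcpu", str) \
    .add_attr_option("system-lib", bool)

llvm.validate({"mcpu": "skylake", "system-lib": True})  # passes silently
```

Either name (`add_attr_option` or `add_config_option`) would fit this shape; the question is purely about which reads better.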
Thanks @merrymercy! Can you also post a rough system diagram of the
components, as well as an example of the API usage?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/2) to respond.
@junrushao1994 how about we list the proposal options and see what everyone
thinks? We can do it in this thread or in a separate thread.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/35) to
respond.
@tqchen Just a minor naming question:
Which do you prefer, `.add_attr_option` or `.add_config_option`?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/34) to
respond.
# Motivation
The current autotvm requires pre-defined schedule templates. This makes autotvm
only semi-automated: the search is automated, but the search space has to be
manually defined by developers using the schedule templates. This approach has
several drawbacks:
1. The templates are har
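The "automated search, manual space" split described above can be caricatured in a few lines of Python. This is a hypothetical toy, not AutoTVM's API; the cost function and knob values below are invented purely for illustration.

```python
# A hand-written "schedule template": the developer fixes the structure
# (a single tiling knob here) and exposes only the knob values to the search.
def template_cost(n, tile):
    # Invented cost model: penalize tiles that do not divide n evenly,
    # and small tiles that would underuse the hardware.
    waste = (tile - n % tile) % tile
    return waste * 10 + max(0, 16 - tile)


def autotvm_style_search(n, knob_space):
    # The search over the space is fully automatic...
    return min(knob_space, key=lambda tile: template_cost(n, tile))


# ...but the space itself (which knobs exist, and their candidate values)
# is written by hand, which is the limitation this RFC points at.
best_tile = autotvm_style_search(64, knob_space=[4, 8, 16, 32])
```

Ansor's pitch, in these terms, is to generate the equivalent of `template_cost` and `knob_space` automatically instead of requiring the developer to write them.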
> @FrozenGene Can you please review when you get time?
Yep. I can review it tomorrow.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/incubator-tvm/pull/5754#issuecomment-645406262
Sure, that sounds good. Since params are referenced by their numeric node ids
in Torch IR and we translate line by line, we still need the association from
numeric ID to state_dict key name.
The state_dict key name is available here as "full_attr", so you can use this
name when creating the Var.
https://github
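The association being discussed amounts to a small lookup table built during translation: numeric node id to `state_dict` key, so the created Var can carry the readable name. A hypothetical sketch (the real frontend walks Torch IR; the key list below is made up, and `full_attr` simply plays the role of the state_dict key name mentioned above):

```python
# Hypothetical sketch: map numeric param ids to state_dict key names so the
# translated Vars get readable names instead of bare numbers.
state_dict_keys = ["conv1.weight", "conv1.bias", "fc.weight"]  # made-up example

# Suppose translation visits the params in order and assigns numeric node ids.
id_to_name = {node_id: full_attr for node_id, full_attr in enumerate(state_dict_keys)}


def var_name_for(node_id):
    # Fall back to a bare numeric name when no state_dict key is known.
    return id_to_name.get(node_id, f"p{node_id}")
```

With such a table in place, the Vars would be named `conv1.weight` rather than `p0`, which is exactly the prettier-naming suggestion in the next message.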
While we're on the topic of names: the params are currently just numbered. I
must admit I'd find it prettier if we used the state_dict names instead.
What do you think?
---
[Visit Topic](https://discuss.tvm.ai/t/pytorch-frontend-graph-input-names-can-change-using-loaded-torchscript/6
Hi @hjiang,
I use Sony's framework [NNabla](https://github.com/sony/nnabla) to train the
networks, but I then convert them to ONNX or TensorFlow in order to use them
with TVM. The accuracy loss is around 4%.
Regards,
Augusto
---
[Visit Topic](https://discuss.tvm.ai/t/vta-first-conv-layer-op