Hi, I am completely new to TVM.
1.) One way of optimisation is to convert the whole pre-trained model's graph
into an optimised one through TVM ([like here in the TVM tutorial
](https://tvm.apache.org/docs/how_to/compile_models/from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py)).
2.
Hey everyone,
I am currently working on a project where I need to optimize conv1D layers on
ARM CPUs.
I followed
[this](https://tvm.apache.org/docs/how_to/tune_with_autotvm/tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py)
guide for autotuning a generic model on ARM CPUs.
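When tuning conv1D kernels it helps to have a framework-independent reference implementation to check the tuned output against. A minimal NumPy sketch, assuming NCW layout (the default for `relay.nn.conv1d`), unit dilation, and no padding:

```python
import numpy as np

def conv1d_ref(x, w, stride=1):
    """Reference conv1D: x is (batch, in_ch, width), w is (out_ch, in_ch, kw)."""
    batch, in_ch, width = x.shape
    out_ch, _, kw = w.shape
    out_w = (width - kw) // stride + 1
    y = np.zeros((batch, out_ch, out_w), dtype=x.dtype)
    for ow in range(out_w):
        # Slide the kernel window and contract over in_ch and kernel width.
        window = x[:, :, ow * stride : ow * stride + kw]  # (batch, in_ch, kw)
        y[:, :, ow] = np.einsum("bck,ock->bo", window, w)
    return y
```

Comparing a tuned kernel's output against this with `np.testing.assert_allclose` catches schedule bugs early.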
Hey all,
I've been working with the TVM stack lately, and love it!
Does the TVM stack support a concept of hierarchy? That is, when compiling a
model with repeating operations (e.g., BERT), is there any way to extract the
fact that there are 12 identical layers, and which operators belong to each
layer?
This is an interesting question, and I've been looking into it recently too.
It depends on how the model was implemented. If the model was implemented in
another framework (e.g., TensorFlow, PyTorch), then there's no way for TVM
to keep this information, because this hierarchy isn't part of the exported
graph.
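That said, even when the hierarchy is lost, the repetition itself can sometimes be recovered from the flattened operator sequence. A toy sketch of that idea; the operator names are illustrative, and real graphs would need structural hashing rather than exact string equality:

```python
def find_repeated_block(ops):
    """Return the shortest block of ops that tiles the whole sequence,
    plus its repeat count (falls back to the full sequence, count 1)."""
    n = len(ops)
    for size in range(1, n + 1):
        if n % size == 0 and ops == ops[:size] * (n // size):
            return ops[:size], n // size
    return ops, 1

# e.g. a BERT-like flat sequence of 12 identical encoder layers
layers = ["dense", "add", "layer_norm"] * 12
block, count = find_repeated_block(layers)  # 3-op block, repeated 12 times
```

This only detects exact repetition; skip connections and shape changes between layers would require matching on operator attributes too.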
Hi,
I copied part of a Relay dump from MobileNet and reproduced it with the Python
script below:
> from tvm import relay
>
> data = relay.var("56", shape=[1, 16, 16, 512], dtype="uint8")
> kernel = relay.var("_param_39", shape=[3, 3, 512, 1], dtype="uint8")
> bias = relay.var("_param_40", shape=[512], dtype="int32")
>
> #defin