[Apache TVM Discuss] [Questions] Optimising pre-trained models with custom layer implementation

2022-03-14 Thread Akshay via Apache TVM Discuss
Hi, I am completely new to TVM. 1.) One way of optimisation is to convert the whole pre-trained model's graph into an optimised one through TVM ([like in this TVM tutorial](https://tvm.apache.org/docs/how_to/compile_models/from_pytorch.html#sphx-glr-how-to-compile-models-from-pytorch-py)). 2.

[Apache TVM Discuss] [Questions] Creating own template for Conv1D on ARM CPUs

2022-03-14 Thread bvoelker via Apache TVM Discuss
Hey everyone, I am currently working on a project where I need to optimize Conv1D layers on ARM CPUs. I followed [this](https://tvm.apache.org/docs/how_to/tune_with_autotvm/tune_relay_arm.html#sphx-glr-how-to-tune-with-autotvm-tune-relay-arm-py) guide for autotuning a generic model on ARM C

[Apache TVM Discuss] [Questions] Hierarchy in TVM

2022-03-14 Thread Aleks Knezevic via Apache TVM Discuss
Hey all, I've been working with the TVM stack lately, and love it! Does the TVM stack support a concept of hierarchy? That is, when compiling a model with repeating operations (e.g., BERT), is there any way to extract the fact that there are 12 identical layers, and which operators belong to

[Apache TVM Discuss] [Questions] Hierarchy in TVM

2022-03-14 Thread Cody H. Yu via Apache TVM Discuss
This is an interesting question and I've been looking into this recently too. It depends on how the model was implemented. If the model was implemented in other frameworks (e.g., TensorFlow, PyTorch, etc.), then there's no way for TVM to keep this information, because this hierarchy isn't a part of

[Apache TVM Discuss] [Questions] [Auto-schedule] extract_tasks does not extract any task from a function written in Relay

2022-03-14 Thread Seriushwa via Apache TVM Discuss
Hi, I copied part of a Relay dump from MobileNet and reproduced it with the Python script below:

> data = relay.var("56", shape=[1, 16, 16, 512], dtype="uint8")
> kernel = relay.var("_param_39", shape=[3, 3, 512, 1], dtype="uint8")
> bias = relay.var("_param_40", shape=[512], dtype="int32")
>
> #defin