[TVM Discuss] [Development] Missing tflite operators

2020-06-17 Thread Michael Ng via TVM Discuss
I have found the solution here: https://github.com/google/mediapipe/issues/245 --- [Visit Topic](https://discuss.tvm.ai/t/missing-tflite-operators/3150/22) to respond.

[TVM Discuss] [Development] Missing tflite operators

2020-06-17 Thread Michael Ng via TVM Discuss
@apivovarov Have you successfully compiled the palm_detect.tflite model? I am trying the same thing and hit a "Custom operators are currently not supported" error. I would really appreciate it if you could help me. --- [Visit Topic](https://discuss.tvm.ai/t/missing-tflite-operators/3150/21) to respond.

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread tqchen via TVM Discuss
Yes, that is what I mean: bringing a mechanism to Ansor so that users can take full or partial control of the search space when necessary. --- [Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/11) to respond.

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Zhao Wu via TVM Discuss
We do support generating OpenCL, so we can run on Mali GPUs. However, we did not test on Mali GPU when we completed Ansor. There are some differences compared with Nvidia GPUs that we can anticipate: for example, on Mali GPU we shouldn't use `cache_read("shared")`, because Mali GPUs don't have separate shared memory.

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Yao Wang via TVM Discuss
IMO, AutoTVM plus the schedule template system represents a methodology by which developers can create and fully control their own kernel optimizations, which is functionally disjoint from Ansor. If deprecating AutoTVM means we will not discard any core functionality but just unify it under a larger…

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread tqchen via TVM Discuss
While using Ansor to deprecate AutoTVM is a strong statement, I think it is a goal we should strive to achieve. I do not think it will replace the op strategy, though, since we need the strategy for graph-level selection and dispatching. In particular, I would encourage us to think toward that…

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread yidawang via TVM Discuss
I think framing Ansor as AutoTVM v2.0 is somewhat misleading, as Ansor takes a totally different approach from AutoTVM. Also, I feel that planning to have Ansor deprecate AutoTVM is too strong a statement. I expect Ansor and AutoTVM (along with the existing schedules in TOPI) to co-exist…

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Matt Barrett via TVM Discuss
Thanks for this RFC, it looks awesome! I've had a quick read through the paper, but I think it will take me some time to understand the details. Just a few initial questions:
* Do you see this eventually replacing most or all of the scheduling that's currently in TOPI?
* Will there be a way to…

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread masahi via TVM Discuss
Also looking forward to seeing performance on quantized models and a comparison against TFLite, FBGEMM, and MKL-DNN. --- [Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/5) to respond.

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Yao Wang via TVM Discuss
Also, I think a benchmark covering more models on more platforms would be necessary if we want to replace a major part of the system. In addition, we can probably consider different codegen methods in TVM as baselines. One example is that we currently use TVM+MKL-DNN for BERT on x86 CPU, since x86…

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Yao Wang via TVM Discuss
Thanks @merrymercy for this work! I have several questions regarding this plan:
1. The Ansor paper gives relative performance numbers between the current AutoTVM and Ansor; it would be great to have some benchmark data in terms of latency. For example, for resnet50_v1 we can achieve 2.2 ms on…
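As a framework-free illustration of the kind of end-to-end latency measurement being requested here, the sketch below is a generic timing harness, not TVM's own benchmarking API; `run_model` stands in for any compiled model's inference call:

```python
import time

def measure_latency_ms(run_model, warmup=10, repeat=100):
    """Return the mean wall-clock latency of run_model() in milliseconds."""
    for _ in range(warmup):          # warm up caches / lazy initialization before timing
        run_model()
    start = time.perf_counter()
    for _ in range(repeat):
        run_model()
    elapsed = time.perf_counter() - start
    return elapsed / repeat * 1000.0

# Example with a trivial stand-in workload:
latency = measure_latency_ms(lambda: sum(range(10000)))
print(f"{latency:.3f} ms")
```

In practice one would report the mean over many repeats (as above) alongside the variance, since GPU and multi-core CPU timings can be noisy.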

[TVM Discuss] [Development/RFC] [RFC] TVM Target Specification

2020-06-17 Thread Junru Shao via TVM Discuss
Let's do it in this thread. I am working on the target id registry, but was curious about people's opinion on one naming choice: `add_attr_option` vs `add_config_option`. In the RFC, to configure the schema of a target id, we allow using the syntax below:
```
TVM_REGISTER_TARGET_ID("llvm")
.add_attr_option(…
```

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread tqchen via TVM Discuss
Thanks @merrymercy. Can you also post a rough system diagram of the components, as well as some example API usage? --- [Visit Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/2) to respond.

[TVM Discuss] [Development/RFC] [RFC] TVM Target Specification

2020-06-17 Thread tqchen via TVM Discuss
@junrushao1994 how about we list the proposed options and see what everyone thinks? We can do it in this thread or in a separate one. --- [Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/35) to respond.

[TVM Discuss] [Development/RFC] [RFC] TVM Target Specification

2020-06-17 Thread Junru Shao via TVM Discuss
@tqchen Just a minor naming question: which do you prefer, `.add_attr_option` or `.add_config_option`? --- [Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844/34) to respond.

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Lianmin Zheng via TVM Discuss
# Motivation
The current AutoTVM requires pre-defined schedule templates. This makes AutoTVM only semi-automated: the search is automated, but the search space has to be manually defined by developers via schedule templates. This approach has several drawbacks:
1. The templates are hard…
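To make the "manually defined search space" point concrete, here is a framework-free toy sketch (not AutoTVM's actual template API; the knob names are hypothetical) of what a template boils down to: the developer enumerates the tuning knobs and their legal values by hand, and the tuner merely picks among the resulting configurations:

```python
import itertools

# A hand-written "template": the developer decides which knobs exist
# and which values each knob may take (hypothetical knob names).
search_space = {
    "tile_x": [1, 2, 4, 8],
    "tile_y": [1, 2, 4, 8],
    "unroll": [0, 1],
}

def candidates(space):
    """Enumerate every configuration in the template's search space."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(candidates(search_space))
print(len(configs))  # 4 * 4 * 2 = 32 configurations
```

Anything the developer forgot to encode as a knob is simply outside the search space, which is exactly the limitation a fully automatic search-space generator aims to remove.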

Re: [apache/incubator-tvm] [RFC] Improve quantized convolution performance for armv8 architectures (#5754)

2020-06-17 Thread Zhao Wu
> @FrozenGene Can you please review when you get time?

Yep, I can review it tomorrow. -- View it on GitHub: https://github.com/apache/incubator-tvm/pull/5754#issuecomment-645406262

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-06-17 Thread masahi via TVM Discuss
Sure, that sounds good. Since params are referenced by their numeric node ids in Torch IR and we translate line by line, we still need an association from numeric ID to state_dict key name. The state_dict key name is available here as "full_attr", so you can use this name when creating the Var. https://github…
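A minimal, framework-free sketch of the mapping being discussed (the ids and names below are hypothetical; the real frontend derives them from the Torch IR node and `full_attr`):

```python
# Hypothetical translator state: as each parameter access is translated,
# record which numeric Torch IR node id corresponds to which state_dict key,
# so the Relay Var can be named after the state_dict entry instead of a number.
id_to_name = {}

def record_param(node_id, full_attr):
    """Associate a numeric IR node id with its state_dict key name."""
    id_to_name[node_id] = full_attr
    return full_attr  # the name to use when creating the Var

record_param(7, "conv1.weight")
record_param(12, "fc.bias")
print(id_to_name[7])  # conv1.weight
```

With such a table, later references to the same numeric id resolve to the human-readable state_dict name, which is what makes the "use state_dict names instead of numbers" suggestion workable.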

[TVM Discuss] [Development] [PyTorch] [Frontend] graph input names can change using loaded torchscript

2020-06-17 Thread Thomas V via TVM Discuss
While we're on the topic of names: the params are currently just numbered. I must admit I think it would be prettier if we used the state_dict names instead. What do you think?

[TVM Discuss] [Development] VTA First Conv Layer Optimize

2020-06-17 Thread Augusto Capone via TVM Discuss
Hi @hjiang, I use Sony's framework [NNabla](https://github.com/sony/nnabla) to train the networks, but I then convert them to ONNX or TensorFlow in order to use them with TVM. The accuracy loss is around 4%. Regards, Augusto