[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss
This is the right way to go. However, I have two concerns: 1) How do we fuse ops as much as possible? Fusion is essentially copy propagation in classical compilers, which is based on data-flow analysis, but TVM still lacks this kind of program analysis. 2) TE tensorize cannot handle some complex p
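For intuition, the analogy between op fusion and copy propagation can be sketched in plain Python (an illustrative toy, not TVM code): fusing two elementwise stages eliminates the intermediate buffer, just as copy propagation eliminates a redundant temporary.

```python
# Illustrative sketch (not TVM code): fusing two elementwise ops
# removes the intermediate buffer, much like copy propagation
# removes a redundant temporary in a classical compiler.

def unfused(xs):
    # Stage 1: produce an intermediate buffer.
    tmp = [x * 2 for x in xs]
    # Stage 2: consume it.
    return [t + 1 for t in tmp]

def fused(xs):
    # Both stages in one loop; no intermediate buffer is materialized.
    return [x * 2 + 1 for x in xs]
```

Both versions compute the same result; the fused one avoids one full pass over memory.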

[Apache TVM Discuss] [Development] Strassen Algorithm for Dense

2020-09-21 Thread zj via Apache TVM Discuss
Thank you for your reply. Regarding the timing fluctuations, I didn't make it clear. After the autotvm tune completed, I picked the best record for timing tests, and its measured time fluctuates significantly. I calculate the time difference between the start and the end to get t

[Apache TVM Discuss] [Development/RFC] [RFC] Differentiable tensor expression (Create and verify backward op automatically)

2020-09-21 Thread wrongtest via Apache TVM Discuss
As there are more and more demands on TVM's training support, one of the most tedious but important tasks is writing backward implementations for operators. It would be of great benefit if we could provide automation tools to help with this process. Such a tool could serve two functions: - Automati
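The verification half of such a tool can be sketched with a finite-difference check: compare a backward implementation against a numerical gradient at sample points. This is a generic pure-Python sketch with hypothetical helper names, not the RFC's proposed API.

```python
def numeric_grad(f, x, eps=1e-6):
    # Central finite difference: approximates df/dx at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def check_backward(f, backward, xs, tol=1e-4):
    # Compare a hand-written or auto-generated backward against
    # the numerical gradient at several sample points.
    return all(abs(backward(x) - numeric_grad(f, x)) < tol for x in xs)

# Example: f(x) = x^2 has backward(x) = 2x.
assert check_backward(lambda x: x * x, lambda x: 2 * x, [0.5, 1.0, -2.0])
```

A real tool would do the same over tensors (perturbing one element at a time), but the principle is identical.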

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Junru Shao via Apache TVM Discuss
@xqdan Thank you for the valuable feedback! Fusion can be done automatically with some analysis provided in Ansor. Do you have any other kind of analysis in mind that might be potentially useful? --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872

[Apache TVM Discuss] [Development/RFC] [RFC] Differentiable tensor expression (Create and verify backward op automatically)

2020-09-21 Thread Junru Shao via Apache TVM Discuss
Hey @wrongtest, Thank you for the RFC! Just wondering how it compares with the previous AD RFC (https://discuss.tvm.apache.org/t/rfc-bring-in-tensor-expression-autodiff/5987) ? Thanks! --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-differentiable-tensor-expression-create-and-veri

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-21 Thread Tristan Konolige via Apache TVM Discuss
I've put up an initial PR here: https://github.com/apache/incubator-tvm/pull/6522. An issue has come up: what do we name the Python module? ## Option 1 We name the module `tvm.tvmscript`. Example usage: ```python import tvm # Can still use this though @tvm.script # or tvm.script.tir def my_fu

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-21 Thread Bohan Hou via Apache TVM Discuss
No matter which option we take, do we have to distinguish between functions and classes when annotating with the decorator? --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-hybrid-script/7915/12) to respond. You are receiving this because you enabled mailing list mode. To unsubsc

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-21 Thread Tristan Konolige via Apache TVM Discuss
Yes and no. Right now we do not need to differentiate, but in the future, functions in a module may be for either TIR or Relay. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-hybrid-script/7915/13) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss
Is fusion in Ansor based on TIR? For other transforms, you may check out the code here; that's what we've done in AKG. I can explain some of it if you are interested. https://github.com/mindspore-ai/akg/blob/master/src/codegen/build_module.cc#L439 --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-t

[Apache TVM Discuss] [Development/RFC] [RFC] Differentiable tensor expression (Create and verify backward op automatically)

2020-09-21 Thread wrongtest via Apache TVM Discuss
Glad to see autodiff is already in progress! I think this RFC can be withdrawn since this is exactly what autodiff is doing. Now I am very curious about the current progress of autodiff, with some questions: - If I have some common neural network structure such as resnet50 at hand, can I just use a

[Apache TVM Discuss] [Development] Strassen Algorithm for Dense

2020-09-21 Thread Zhao Wu via Apache TVM Discuss
If you want to measure it more robustly, you should run it more times and calculate the average time. For example, you could run it 1000 times. --- [Visit Topic](https://discuss.tvm.apache.org/t/strassen-algorithm-for-dense/2661/16) to respond.
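The averaging advice can be sketched in plain Python (a generic micro-benchmark harness, not TVM's built-in `time_evaluator`, which additionally handles device synchronization):

```python
import time

def measure(fn, warmup=10, repeat=1000):
    # Discard warmup runs (cold caches, frequency scaling), then
    # average many timed runs to smooth out fluctuations.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(repeat):
        fn()
    total = time.perf_counter() - start
    return total / repeat  # mean seconds per run
```

For a tuned TVM kernel, the same idea is exposed via the module's `time_evaluator`, whose `number` and `repeat` parameters control how runs are batched and averaged.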

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Xqdan via Apache TVM Discuss
@junrushao1994 It's better to know whether loops are vectorizable, permutable, or distributable; ISL can provide this information, so we can do loop optimization and tensorization/vectorization automatically. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7
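For intuition, the kind of permutability question ISL answers can be sketched on explicit dependence distance vectors (a toy illustration of the classical legality test, not ISL itself):

```python
def lex_nonnegative(vec):
    # A distance vector is legal if its first nonzero entry is positive
    # (the dependence flows forward in the loop nest).
    for d in vec:
        if d != 0:
            return d > 0
    return True  # all-zero vector: dependence within one iteration

def interchange_legal(distances, perm):
    # A loop permutation `perm` is legal iff every permuted distance
    # vector remains lexicographically non-negative.
    return all(lex_nonnegative([v[i] for i in perm]) for v in distances)

# a[i][j] = f(a[i-1][j+1]) has distance vector (1, -1).
# Interchanging the loops would permute it to (-1, 1): illegal.
assert interchange_legal([(1, -1)], [0, 1]) is True
assert interchange_legal([(1, -1)], [1, 0]) is False
```

ISL works on full affine iteration domains rather than precomputed distance vectors, but the legality criterion it enforces is this one.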

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Junru Shao via Apache TVM Discuss
@xqdan In Ansor, fusion analysis is handled in TE with some straightforward heuristics, which I believe have covered our use cases. CC: @merrymercy @jcf94 I agree that ISL provides effective information about vectorization, and I believe there might be other competitive heuristics too. Tensorizat

[Apache TVM Discuss] [Development/RFC] [RFC] Differentiable tensor expression (Create and verify backward op automatically)

2020-09-21 Thread Junru Shao via Apache TVM Discuss
CC: @yzhliu, the major contributor of this feature --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-differentiable-tensor-expression-create-and-verify-backward-op-automatically/7960/4) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Lianmin Zheng via Apache TVM Discuss
How does the compilation speed compare to the original TE? In Ansor/AutoTVM, we have to compile a lot of schedules for feature extraction, so the speed of schedule transformation matters. Do you have any benchmark results? Intuitively, I think the original TE will be faster because it can do a

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Junru Shao via Apache TVM Discuss
@merrymercy I didn't quite get the point about batched bound inference; doesn't Ansor use a pool of threads for massive bound inference? --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/35) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Chenfan via Apache TVM Discuss
Hmm... @junrushao1994 I guess @merrymercy's opinion is that doing the analysis in TE is quicker than using ISL. ISL is surely a powerful tool for loop analysis, but in my understanding we would have to lower the schedule to C code first before using ISL, which I think is more time-consuming. Currently,