This is the right way to go. However, I have two concerns:
1) How do we fuse ops as much as possible? Fusion is basically the copy-propagation optimization in compilers, which is based on data-flow analysis, but TVM still lacks this kind of program analysis (see the sketch after this post).
2) TE tensorize cannot handle some complex p
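To make the copy-propagation analogy concrete, here is a minimal TE sketch (shapes and names are illustrative, not from this thread): inlining an intermediate stage substitutes its defining expression into its consumer, just like propagating a copy.

```python
import tvm
from tvm import te

# Illustrative pipeline: B is a pure elementwise intermediate.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
C = te.compute((n,), lambda i: B[i] * 2.0, name="C")

s = te.create_schedule(C.op)
# Fuse B into C, like copy propagation: C[i] becomes (A[i] + 1.0) * 2.0
# and the intermediate buffer for B disappears.
s[B].compute_inline()
print(tvm.lower(s, [A, C], simple_mode=True))
```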
---
Thank you for your reply.
Regarding the latency fluctuations, I didn't make it clear. After AutoTVM tuning completed, I picked the best record for timing tests, and its latency fluctuates significantly. I calculate the time difference between the start and the end to get t
---
As there are more and more demands on TVM's training support, one of the most tedious but important tasks is writing backward implementations for operators. It would be of great benefit if we could provide automation tools to help with this process. Such a tool can serve two functions:
- Automati
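For reference, the tensor-expression autodiff work discussed later in this thread already provides this kind of automation via `te.gradient`. A minimal sketch with illustrative shapes (since no adjoint `head` is passed, the result is the full Jacobian dY/dW):

```python
import tvm
from tvm import te

# Forward definition: Y[i] = sum_k A[i, k] * W[k] (illustrative shapes).
n, m = 32, 16
A = te.placeholder((n, m), name="A")
W = te.placeholder((m,), name="W")
k = te.reduce_axis((0, m), name="k")
Y = te.compute((n,), lambda i: te.sum(A[i, k] * W[k], axis=k), name="Y")

# Derive the backward tensor automatically; with no head supplied the
# result has shape (n, m), i.e. the Jacobian of Y with respect to W.
[dW] = te.gradient(Y, [W])
s = te.create_schedule(dW.op)
print(tvm.lower(s, [A, W, dW], simple_mode=True))
```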
---
@xqdan Thank you for the valuable feedback! Fusion can be done automatically
with some analysis provided in Ansor.
Do you have any other kind of analysis in mind that might be potentially useful?
---
Hey @wrongtest,
Thank you for the RFC! Just wondering how it compares with the previous AD RFC
(https://discuss.tvm.apache.org/t/rfc-bring-in-tensor-expression-autodiff/5987)
?
Thanks!
---
I've put up an initial PR here:
https://github.com/apache/incubator-tvm/pull/6522.
An issue has come up: what do we name the Python module?
## Option 1
We name the module `tvm.tvmscript`.
Example usage:
```python
import tvm
# Can still use this though
@tvm.script # or tvm.script.tir
def my_fu
```
---
No matter which option we take, do we have to discriminate between functions and classes when annotating with the decorator?
---
Yes and no. Right now we do not need to differentiate. But in the future, functions in a module may be either for TIR or for Relay.
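For illustration only (this is not the implementation in the PR), a single decorator can already tell the two cases apart by inspecting its argument; a minimal sketch with hypothetical return values:

```python
import inspect

def script(obj):
    # Dispatch on what the decorator was applied to: a class could be
    # parsed as a module of functions, a function as a single one.
    if inspect.isclass(obj):
        return {"kind": "module", "name": obj.__name__}
    if inspect.isfunction(obj):
        return {"kind": "function", "name": obj.__name__}
    raise TypeError("@script expects a function or a class")

@script
def my_func():
    pass

@script
class MyModule:
    pass

print(my_func)   # {'kind': 'function', 'name': 'my_func'}
print(MyModule)  # {'kind': 'module', 'name': 'MyModule'}
```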
---
Is fusion in Ansor based on TIR?
For other transforms, you may check out the link below; that's what we've done in AKG. I can explain some of it if you are interested.
https://github.com/mindspore-ai/akg/blob/master/src/codegen/build_module.cc#L439
---
Glad to see autodiff is already in progress! I think this RFC can be withdrawn, since this is exactly what autodiff is doing.
Now I am very curious about the current progress of autodiff, with some questions:
- If I have a common neural network structure such as ResNet-50 at hand, can I just use a
---
If you want a more robust measurement, you should run it multiple times and calculate the average time. For example, you could run it 1000 times.
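TVM's built-in `time_evaluator` automates exactly this repeat-and-average scheme; a minimal self-contained sketch (the trivial kernel is only illustrative, substitute your tuned one):

```python
import numpy as np
import tvm
from tvm import te

# Build a trivial kernel to time.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
func = tvm.build(s, [A, B], target="llvm")

dev = tvm.cpu(0)
a = tvm.nd.array(np.random.rand(n).astype("float32"), dev)
b = tvm.nd.array(np.zeros(n, dtype="float32"), dev)

# `number` runs are averaged into one measurement; `repeat` gives
# independent measurements, so we can report both mean and spread.
evaluator = func.time_evaluator(func.entry_name, dev, number=10, repeat=100)
results = np.array(evaluator(a, b).results)  # seconds per measurement
print("mean %.4f ms, std %.4f ms" % (results.mean() * 1e3, results.std() * 1e3))
```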
---
@junrushao1994 It's better to know whether loops are vectorizable, permutable, or distributable; ISL can provide this information, so we can do loop optimization and tensorization/vectorization automatically.
---
@xqdan In Ansor, fusion analysis is handled in TE with some straightforward heuristics, which I believe have covered our use cases. CC: @merrymercy @jcf94
Agreed that ISL provides effective information about vectorization, and I believe there might be other competitive heuristics too. Tensorizat
---
CC: @yzhliu, the major contributor of this feature
---
How does the compilation speed compare to the original TE?
In Ansor/AutoTVM, we have to compile a lot of schedules for feature extraction, so the speed of schedule transformation matters.
Do you have any benchmark results? Intuitively, I think the original TE will be faster because it can do a
---
@merrymercy I didn't get the point about batched bound inference; doesn't Ansor use a pool of threads for massive bound inference?
---
Er... @junrushao1994 I guess @merrymercy's opinion is that doing analysis in TE is quicker than using ISL.
ISL is surely a powerful tool for loop analysis, but in my understanding we should lower the schedule to C code first before using ISL, which I think is more time-consuming.
Currently,