Yeah, I think your explanation is a good summary, and I see what you mean about the TensorIR blocks.

My understanding, though, is that the user doesn't actually write TensorIR (except perhaps as a starting point); they still schedule with a separate language? The blocks in TIR seem really nice, but I still worry that the scheduling code itself also needs some ability to abstract. Take, for instance, the example here:
https://tvm.d2l.ai/chapter_gpu_schedules/matmul.html#blocked-matrix-multiplication-on-gpu
It doesn't seem like this changes that very much? There are so many axes in scope in that one function at once, and it seems very hard to keep them separate from each other. (A rough sketch of what I mean is below.)
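To make the concern concrete, here is a minimal sketch written against the classic `te.create_schedule` API (my own condensed version, not the d2l-tvm tutorial code; the tile factors and names are made up). Even this trimmed-down blocked matmul accumulates ten-plus axis handles in a single function scope:

```python
import tvm
from tvm import te

def blocked_matmul_schedule(n=1024, block=32, tx=8, ty=8):
    A = te.placeholder((n, n), name="A")
    B = te.placeholder((n, n), name="B")
    k = te.reduce_axis((0, n), name="k")
    C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

    s = te.create_schedule(C.op)
    # Each split below introduces two fresh axis handles into this one scope.
    x, y = s[C].op.axis
    xb, xi = s[C].split(x, factor=block)   # block-level tiling of rows
    yb, yi = s[C].split(y, factor=block)   # block-level tiling of cols
    xio, xii = s[C].split(xi, nparts=ty)   # thread-level tiling within a block
    yio, yii = s[C].split(yi, nparts=tx)
    s[C].reorder(xb, yb, xio, yio, xii, yii)
    s[C].bind(xb, te.thread_axis("blockIdx.x"))
    s[C].bind(yb, te.thread_axis("blockIdx.y"))
    s[C].bind(xio, te.thread_axis("threadIdx.y"))
    s[C].bind(yio, te.thread_axis("threadIdx.x"))
    # Caching A and B into shared memory (as the tutorial does) would add
    # ko, ki, and several more handles, all live in this same function.
    return s, [A, B, C]

s, args = blocked_matmul_schedule()
print(tvm.lower(s, args, simple_mode=True))
```

Nothing in the language itself groups `xb, xi, xio, xii` together or stops you from reordering/binding the wrong one; the programmer has to track by naming convention alone.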
