Thanks for the discussion. To provide more context, the A0 approach we 
discussed is TIR-Relax layout rewriting 
(https://github.com/tlc-pack/relax/issues/162). The general idea is to lift 
such transformations from TIR scheduling into the graph, and then cancel out 
redundant intermediate transformations, either by proving that fusing a pair 
of post-compute and pre-compute transformations produces an identity TIR 
function, or by using high-level operator semantics. I think this is very 
similar to the [graph-level 
solution](https://discuss.tvm.apache.org/t/introducing-ty-nnp-backend-with-end2end-tensorir-integration/11807/4) 
mentioned by @wrongtest.
In general, both A0 and A1 are valid approaches. The main question is how we 
would like to handle the complexity of the simplifications.
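To make the cancellation idea concrete, here is a minimal sketch (not the actual TVM/Relax API; the representation and function names are my own for illustration) that models layout transforms as axis permutations and cancels an adjacent post-compute/pre-compute pair when their composition is the identity:

```python
def compose(outer, inner):
    """Permutation for applying `inner` first, then `outer`.

    Each transform maps output axis i to input axis perm[i], so the
    composed transform maps output axis i to inner[outer[i]].
    """
    return tuple(inner[outer[i]] for i in range(len(outer)))


def is_identity(perm):
    """A transform cancels out when it is the identity permutation."""
    return perm == tuple(range(len(perm)))


def simplify(chain):
    """Drop adjacent transform pairs that compose to the identity.

    `chain` is applied left to right; this mirrors proving that fusing a
    post-compute transform with the following pre-compute transform yields
    an identity function.
    """
    out = []
    for t in chain:
        if out and is_identity(compose(t, out[-1])):
            out.pop()  # the pair cancels
        else:
            out.append(t)
    return out


# Example: NCHW -> NHWC followed by NHWC -> NCHW cancels entirely.
nchw_to_nhwc = (0, 2, 3, 1)
nhwc_to_nchw = (0, 3, 1, 2)
print(simplify([nchw_to_nhwc, nhwc_to_nchw]))  # []
```

In the real setting the transforms are TIR index maps rather than bare permutations, so the identity proof goes through arithmetic simplification (or high-level operator semantics) instead of a tuple comparison, but the graph-level rewrite has the same shape.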

-- 
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm-rfcs/pull/77#issuecomment-1152992143