And I think the just-another-pass approach implies that some of the private machinery in te_compiler (e.g. LowerTensorExprMutator) needs to be hoisted so it can be reused by all lowering-like passes. So instead of monolithic lowering plus target-specific callbacks, we'd have target-specific lowering passes plus a built-in lowering pass, sharing their implementation via, hopefully, conventional subclassing — roughly along the lines of the sketch below.
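
For concreteness, here is a rough C++ sketch of what that subclassing arrangement could look like. The names `BaseTensorExprLowerer`, `LowerCall`, and `MyAcceleratorLowerer` are hypothetical, not the existing te_compiler internals; the only real pieces assumed here are Relay's `ExprMutator` and `Call`/`CallNode` types.

```cpp
#include <tvm/relay/expr.h>
#include <tvm/relay/expr_functor.h>

namespace tvm {
namespace relay {

// Hypothetical hoisted base class: the traversal/rewriting machinery that
// today lives privately inside te_compiler, exposed so that any
// lowering-like pass can reuse it.
class BaseTensorExprLowerer : public ExprMutator {
 public:
  Expr VisitExpr_(const CallNode* call) final {
    // Recurse first, then hand the (possibly rewritten) call to the hook.
    Expr post = ExprMutator::VisitExpr_(call);
    return LowerCall(Downcast<Call>(post));
  }

 protected:
  // Hook: how a primitive call gets lowered. The built-in lowering pass
  // would implement the default (TE/TIR) behaviour here.
  virtual Expr LowerCall(Call call) { return call; }
};

// A target-specific lowering pass reuses the shared traversal and overrides
// only the per-call lowering decision.
class MyAcceleratorLowerer : public BaseTensorExprLowerer {
 protected:
  Expr LowerCall(Call call) override {
    // Offload calls this target supports; otherwise fall back to the
    // built-in behaviour inherited from the base class.
    return BaseTensorExprLowerer::LowerCall(call);
  }
};

}  // namespace relay
}  // namespace tvm
```

The point of the sketch is just the shape of the split: the traversal and bookkeeping live once in the base class, and each target (plus the built-in path) only supplies its own `LowerCall`-style decision, rather than registering callbacks into a single monolithic lowering step.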