> I'm still a bit confused with this approach, specifically how one would avoid
> having a separate compute definition for each workload on a new target
Indeed it is important to avoid having a separate compute definition for each workload on a new target. In this particular case, all compute definitions would start with the original layout. A "schedule transformation" such as `transform_layout` then generates the new stage as part of the scheduling process. That stage can be marked, and the mark carries effectively the same information as `BufferConstraint`, except that it does not introduce a new data structure. During global layout reflowing, this information can be used to guide the reflowing: we reconstruct a data structure like `BufferConstraint` (or another layout mapping) and use it to serve the same purpose.

> Is there an existing annotation to indicate that a stage should be removed
> entirely during lowering?

Ideally we should not introduce an annotation saying a stage should be removed, as that breaks the interface of the code itself (ideally the computation should remain the same). However, we can hint to the compiler that this particular stage is a layout transformation that should be lifted and resolved through the global constraint reflowing. Additionally, such an annotation can be used to guide benchmarking, so that tuning only looks at the non-rewriting part (and we can leverage the transform block to generate input examples correctly).

As a high-level summary, the main message is to keep enough information in the TIR (as part of the transform block) that we can reconstruct a `BufferConstraint`-like auxiliary data structure during global reflowing, while keeping the TIR self-contained enough that it alone suffices to construct that data structure. This also helps in cases where other graph-level layout rewrites (e.g. transpose) can be fused with these additional transformation stages.
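To make the idea concrete, here is a minimal Python sketch of the mark-then-reconstruct flow. All names here (`Stage`, `BufferConstraint`, `reflow`, `is_layout_transform`) are illustrative stand-ins, not the actual TVM/TIR data structures: a schedule's transform stages carry a mark, and a reflowing pass lifts them out and rebuilds a `BufferConstraint`-like record from each.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

# Hypothetical stand-in for a TIR stage: either real compute, or a
# layout-transform step whose mark plays the role of the block
# annotation discussed above.
@dataclass
class Stage:
    name: str
    buffer: str
    # Index map from old to new layout, e.g. (i, j) -> (i // 4, j, i % 4).
    index_map: Optional[Callable[[int, int], Tuple[int, ...]]] = None
    is_layout_transform: bool = False  # the "mark"

@dataclass
class BufferConstraint:
    # Reconstructed auxiliary structure: which buffer, and its layout map.
    buffer: str
    index_map: Callable[[int, int], Tuple[int, ...]]

def reflow(stages: List[Stage]) -> Tuple[List[Stage], List[BufferConstraint]]:
    """Global reflowing sketch: lift marked transform stages out of the
    schedule and turn each one into a BufferConstraint-like record."""
    constraints = [BufferConstraint(s.buffer, s.index_map)
                   for s in stages if s.is_layout_transform]
    remaining = [s for s in stages if not s.is_layout_transform]
    return remaining, constraints

# A toy schedule: one compute stage plus one marked transform that packs
# buffer "W" into a tiled layout (an NCHW -> NCHW4c-style rewrite).
stages = [
    Stage(name="conv2d", buffer="W"),
    Stage(name="W_layout", buffer="W",
          index_map=lambda i, j: (i // 4, j, i % 4),
          is_layout_transform=True),
]

remaining, constraints = reflow(stages)
print([s.name for s in remaining])     # -> ['conv2d']
print(constraints[0].buffer)           # -> W
print(constraints[0].index_map(5, 7))  # -> (1, 7, 1)
```

In the actual proposal the mark would live on the transform block inside the TIR itself, and the reconstruction would happen inside the graph-level reflowing pass; the point of the sketch is only that the marked stage is self-contained enough to rebuild the constraint without a separate side data structure.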
-- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/77#issuecomment-1163019805