To elaborate further, my specific comment is in the context of TVM's unified IR infrastructure.

TVM's new IR infrastructure incorporates multiple variants of functions in the same IRModule: `tir::PrimFunc` and `relay::Function` today, plus an additional `ExternFunc` for the MLIR case, representing functions in an MLIR dialect. All variants use the same type system for function signatures, so that we can freely enable calls among the functions.
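
As a concrete illustration, here is a minimal runnable sketch using today's relay Python API. Since the proposed `ExternFunc` variants do not exist yet, an ordinary `relay.Function` stands in for the extern fragment; the point is that functions registered under the same `tvm.IRModule` call each other through typed GlobalVars, with the shared type system checking the call sites.

```python
import tvm
from tvm import relay

mod = tvm.IRModule()

# Stand-in for the proposed tvm::ExternGraphDefFunc: in the proposal this
# body would be an opaque GraphDef fragment; here it is an ordinary relay
# function so the example runs as-is.
x = relay.var("x", shape=(4,), dtype="float32")
extern_fragment = relay.Function([x], relay.nn.relu(x))
extern_gv = relay.GlobalVar("extern_fragment")
mod[extern_gv] = extern_fragment

# The natively translated part calls into the fragment; because both
# functions share the same type system, the call site is checked like any
# other relay call.
y = relay.var("y", shape=(4,), dtype="float32")
main = relay.Function([y], relay.add(extern_gv(y), y))
mod["main"] = main

print(mod)
```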

To enable a hybrid combo translation, here is the ultimate TVM ingestion path that we can envision, starting from a TF GraphDef (a code sketch of the whole pipeline follows the list):

- Step 1: GraphDef -> tvm::IRModule: translate as much of the GraphDef as we can, including high-level control flow; the remaining parts are collapsed into several `tvm::ExternGraphDefFunc`s. The fragments become relay calls into these `tvm::ExternGraphDefFunc`s.
- Step 2: Lower the remaining GraphDef fragments (in `tvm::ExternGraphDefFunc`) to HLO (as `tvm::ExternHLOFunc`)
- Step 3: Translate the remaining HLO functions to relay
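
Put together, a driver for this pipeline could look roughly like the sketch below. Everything here is hypothetical: `from_graphdef_partial`, `lower_extern_graphdef_to_hlo`, `hlo_to_relay`, and the `is_extern_*` predicates are invented names that only illustrate the intended control flow, not existing TVM APIs.

```python
# A hypothetical sketch of the three-step ingestion path.  Every helper is
# invented for illustration (stubbed with NotImplementedError); none of them
# is an existing TVM API.
import tvm

def from_graphdef_partial(graphdef) -> tvm.IRModule:
    # Step 1: translate what we can natively (ops, control flow); collapse
    # the unsupported subgraphs into ExternGraphDefFunc entries that the
    # relay portion calls into.
    raise NotImplementedError  # hypothetical

def lower_extern_graphdef_to_hlo(func):
    # Step 2: lower one GraphDef fragment to HLO, keeping its signature.
    raise NotImplementedError  # hypothetical

def hlo_to_relay(func):
    # Step 3: translate one HLO function into relay.
    raise NotImplementedError  # hypothetical

def is_extern_graphdef(func) -> bool:
    raise NotImplementedError  # hypothetical predicate

def is_extern_hlo(func) -> bool:
    raise NotImplementedError  # hypothetical predicate

def ingest_tf(graphdef) -> tvm.IRModule:
    mod = from_graphdef_partial(graphdef)
    # Rewrite each leftover GraphDef fragment in place: first to HLO, then
    # from HLO to relay, so the final module is pure relay + tir.
    for gvar, func in list(mod.functions.items()):
        if is_extern_graphdef(func):
            mod[gvar] = lower_extern_graphdef_to_hlo(func)
    for gvar, func in list(mod.functions.items()):
        if is_extern_hlo(func):
            mod[gvar] = hlo_to_relay(func)
    return mod
```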

Step 1 ensures that we cover the important high-level operators, as well as control-flow constructs, so they can be converted natively without losing information. Steps 2-3 ensure broader coverage through HLO. By combining the two, we get the best of both worlds.

Generalizing a bit: because we can represent different variants of functions under the same `tvm::IRModule`, this kind of multi-stage lowering is relevant not only to TF but also to other frameworks with multi-stage IRs. The conventional view was to tap into only one of these IRs; the new design allows us to tap into multiple of them and gives us more room in the performance/coverage tradeoff.
