Thank you for bringing up this great proposal! As @yangjunpro mentioned, we at Alibaba, along with @lfengad and @dingli, have been moving in this direction for quite some time now. We chose GraphDef->HLO->Relay for the exact same reason: HLO is a stable and concise IR to work on. Glad to see a concrete analysis table of the discrepancies among the op sets of the different IRs!
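For the TF-Graph → Relay leg of such a comparison, a coverage table can be produced fairly mechanically. Below is a minimal sketch, assuming TVM's TensorFlow frontend exposes its internal `_convert_map` conversion table (an implementation detail whose module path may move across TVM versions); `model.pb` is a placeholder path.

```python
import tensorflow as tf
# Internal conversion table of TVM's TF frontend; path may vary by version.
from tvm.relay.frontend.tensorflow import _convert_map

def unconverted_ops(graph_def):
    """Ops appearing in the GraphDef that have no Relay converter."""
    return {node.op for node in graph_def.node} - set(_convert_map)

with tf.io.gfile.GFile("model.pb", "rb") as f:  # placeholder path
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

print("ops without a Relay conversion:", sorted(unconverted_ops(graph_def)))
```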
The work we have done is to use the XLA framework to cluster a subgraph of the TensorFlow graph and hand it to TVM as the codegen, while the rest remains executed by TensorFlow itself. We now have a demo running with ResNet-18 and an MLP (where most of the network can be clustered into the TVM part). We made this choice to fully utilize the extensive op set of the TensorFlow graph, which allows flexible neural network construction, while offloading to TVM the ops it generates better code for, and our demo is showing promising results. (A rough sketch of this setup appears at the end of this post.)

MLIR is definitely on our roadmap as the future IR infrastructure. One reason is to leverage `"a hybrid TF / HLO combo made possible under the unified IR infra"`, as mentioned by @tqchen. Dynamic shape is also a real problem for the HLO IR; we hope the HLO dialect in MLIR helps in that regard, and we have many colleagues working on it as well.

One remaining concern is node coverage. While the Relay op set is growing, we still roughly see:

> Relay < HLO < TF Graph

The unsupported part can be codegened by MLIR as @tqchen discussed (in that case the TF/HLO combo conversion becomes even more important, since the TF graph is not necessarily covered by HLO), or handled the other way around, as we did, by reusing the TensorFlow runtime.

But at this stage, we are more than happy to have an MLIR dialect/frontend for Relay, and we look forward to exchanging thoughts and experience on this.
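For concreteness, here is a minimal sketch of the two halves of the hybrid setup described above, assuming stock TensorFlow 2.x and TVM Python APIs. The bridge that actually hands an XLA cluster to TVM is custom plumbing on our side, not public API; for illustration the extracted subgraph is imported with TVM's stock TensorFlow frontend rather than through HLO (for which TVM has no public importer). `graph_def` and `shape_dict` stand in for the real inputs, and `compile_cluster_with_tvm` is a hypothetical helper.

```python
import tensorflow as tf
import tvm
from tvm import relay

# (1) Let TF/XLA auto-cluster compilable subgraphs; nodes outside a
# cluster keep running on the regular TensorFlow runtime.
tf.config.optimizer.set_jit(True)

# (2) For a subgraph extracted as a GraphDef, use TVM as the codegen.
# (Hypothetical helper; the real cluster -> TVM hand-off is custom.)
def compile_cluster_with_tvm(graph_def, shape_dict, target="llvm"):
    mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)
    with tvm.transform.PassContext(opt_level=3):
        return relay.build(mod, target=target, params=params)
```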