Thanks for the nice RFC.
Happy to see folks other than us also paying attention to the MLIR-as-a-bridge 
design for integrating TVM as a backend for TensorFlow (or maybe more than 
TensorFlow ^-^).

Inside Alibaba, we are also working on related things.

To be more specific, for the static-shape JIT compilation scenario, we heavily 
customize XLA, for example adding aggressive optimizations to tease out 
performance and enhancing its infrastructure so that it can be turned on by 
default for many production workloads. Some of our colleagues have also 
implemented an internal version that integrates TVM as a backend of XLA :).

For dynamic shapes, we do leverage MLIR since, in our opinion, it has native 
support for dynamic shapes (some of our thoughts have been reflected in 
[this](https://drive.google.com/open?id=1ZDzXluB2uVc35r1fBNK5jW6rY8s82pc_) MLIR 
ODM), a better modularized design philosophy, and it enables us to integrate 
the different pieces of our AI compiler in a unified approach. 
We would be more than happy to share some of our ongoing work and to see 
whether there are potential collaborations with you folks.

Thanks

---
[Visit Topic](https://discuss.tvm.ai/t/rfc-mlir-frontend/6473/2) to respond.
