> Due to the recent situation and the current progress, we expect a slight
> delay in the release to June/July -- we expect the unified IR refactor to
> land by then. We will do our best to keep to that timeframe.
Should adding more operator coverage be scheduled, at least for one frontend (ONNX or TensorFlow)?
Thanks a lot for working on this; it's going to be really impactful, especially for supporting NLP models. I have a couple of questions:
1. Can you please explain the shape function in a little more detail? What exactly is its purpose, and will one have to be registered for every op? (See the sketch after these questions for what I have in mind.)
2. Some op
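To make question 1 concrete, here is roughly what I imagine per-op shape-function registration looking like, modeled on how Relay registers other per-op hooks. The names here (`register_shape_func`, the hybrid-script helpers, and the `nn.relu` example) are my assumptions for illustration, not a confirmed API:

```python
# Sketch: a data-independent shape function for an element-wise op.
# Assumed API, modeled on existing TVM/Relay registration conventions.
from tvm.te.hybrid import script
from tvm.relay.op import register_shape_func

@script
def _identity_shape(x_shape):
    # x_shape is a 1-D int64 tensor holding the input's shape;
    # an element-wise op simply copies it through to the output.
    out = output_tensor((x_shape.shape[0],), "int64")
    for i in const_range(x_shape.shape[0]):
        out[i] = x_shape[i]
    return out

def elemwise_shape_func(attrs, inputs, _):
    return [_identity_shape(inputs[0])]

# False = data-independent: the function only needs input shapes,
# not input values, so it can run without touching the tensor data.
register_shape_func("nn.relu", False, elemwise_shape_func)
```

If that reading is right, I'd guess ops whose output shape depends on input *values* (e.g. `arange`) would need data-dependent variants, which is what motivates registering shape functions per op.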
Would it be easy to extend your GEMM schedule into a schedule for BatchMatMul? That would help round out the TensorCore story for matrix multiplication.
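For reference, the compute definition for BatchMatMul only adds a batch axis over plain GEMM, so I'd hope the existing TensorCore tiling could be reused per batch, e.g. by binding the batch axis to `blockIdx.z`. A minimal sketch in TVM's tensor expression language, with fp16 inputs and fp32 accumulation as TensorCore expects; the problem sizes and schedule choice are my assumptions, not your actual schedule:

```python
import tvm
from tvm import te

# Assumed problem size, for illustration only.
batch, M, K, N = 8, 1024, 1024, 1024

A = te.placeholder((batch, M, K), name="A", dtype="float16")
B = te.placeholder((batch, K, N), name="B", dtype="float16")
k = te.reduce_axis((0, K), name="k")

# Batched matmul: identical to GEMM except for the leading batch axis b.
C = te.compute(
    (batch, M, N),
    lambda b, i, j: te.sum(
        A[b, i, k].astype("float32") * B[b, k, j].astype("float32"), axis=k
    ),
    name="C",
)

s = te.create_schedule(C.op)
b, i, j = s[C].op.axis
# Bind the batch axis to blockIdx.z so a 2-D GEMM tiling over i and j
# could apply unchanged within each batch.
s[C].bind(b, te.thread_axis("blockIdx.z"))
```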