> Hey @cbalint13 thanks for asking! Absolutely!
> 

@junrushao1994 

First, thanks a lot for your time !

- I am very happy even just to witness what is going on recently in TVM (at a 
mind-blowing pace).

> > Was Auto Tensorization removed from this list (it was at section [M4b] if I 
> > recall)? What was/is the plan with it?
> 
> The only reason is that I'm trying to organize the roadmap. Auto 
> tensorization is a huge item and we want to have a separate tracking issue 
> for it. As you already see, we have been upstreaming auto 
> tensorization-related PRs, including #9871 #10066. [My 
> branch](https://github.com/junrushao1994/tvm/tree/meta-schedule) also 
> contains auto tensorization-related working examples if you want to try them 
> out now :-)

* I see now, thanks for the clarification. I noticed the recent "blockize - 
tensorize" PR (quite a large piece of work, still diving into it).

> 
> > Also regarding of design plan, will/have something in common with 
> > principles of https://arxiv.org/abs/2101.08458?
> 
> This work is done by my fellow colleagues, and of course we are aware, and we 
> have a lot in common :-) Their codebase is public 
> [here](https://github.com/were/unit). The difference here is that we are now 
> using TensorIR, a more powerful and systematic IR/scheduling system to 
> support tensorization

 * I was familiar with the code-base for [UNIT](https://github.com/were/unit); 
it is good to know that such a feature will make it into the new TIR.
 * I am thinking of a framework (early [public 
sketch](https://github.com/cbalint13/OLIMP)) that emits reusable HDL (Verilog) 
blocks and/or CPU-ISA extensions in many possible forms, sampled within some 
combinatorial search space; auto-tensorization would be a key process for 
evaluation and metrics here.
 * It may end up sampling some very weird-looking hardware (including systolic 
blocks), so the auto-tensorizer might need enhancement at some of the more 
challenging ends (as I already saw when looking at UNIT).
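To make the search-space idea above a bit more concrete, here is a minimal sketch of enumerating and sampling candidate hardware-block configurations combinatorially; all parameter names and axes are purely illustrative assumptions, not taken from OLIMP or UNIT:

```python
# Hypothetical sketch: enumerate candidate hardware-block configurations
# from a combinatorial search space and sample a few for evaluation
# (e.g. by an auto-tensorizer). Axis names below are illustrative only.
import itertools
import random

# Illustrative design-space axes for a generated HDL block.
SEARCH_SPACE = {
    "datatype": ["int8", "int16", "fp16"],
    "lanes": [4, 8, 16],
    "shape": ["dot-product", "systolic-2x2", "systolic-4x4"],
}

def enumerate_space(space):
    """Yield every point in the combinatorial space as a dict."""
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

def sample_candidates(space, k, seed=0):
    """Draw k distinct candidate configurations for evaluation."""
    points = list(enumerate_space(space))
    rng = random.Random(seed)
    return rng.sample(points, k)

# The full space here has 3 * 3 * 3 = 27 points; sample 3 of them.
for candidate in sample_candidates(SEARCH_SPACE, k=3):
    print(candidate)
```

In a real flow each sampled point would be lowered to an HDL block and scored, with auto-tensorization deciding whether generated code can actually target it.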

Can't wait to try it; I will look into the mentioned early WiP branch.

Many thanks again !


