Those are all great questions @liangfu. 

Question 3 is an interesting one w.r.t. what kinds of scheduling primitives
we'll need for sparse operators. One easy workaround is to apply vectorization 
along a dense tensor dimension if there is one. For many practical examples, 
tensors won't be sparse along all dimensions. But the question gets more tricky 
when that assumption does not hold.
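
To make the workaround concrete, here is a minimal sketch (plain NumPy, not TOPI code) of a CSR sparse-dense matmul. The row axis is sparse (a variable number of nonzeros per row), but the columns of the dense operand form a fully dense axis, so the innermost update is a contiguous dense loop that a scheduler could vectorize. The function name and layout here are illustrative, not an existing TOPI operator.

```python
import numpy as np

def csr_spmm(indptr, indices, data, B):
    """Sparse (CSR) x dense matmul sketch.

    indptr, indices, data: standard CSR arrays of the sparse matrix.
    B: dense (K, N) right-hand operand.
    """
    M = len(indptr) - 1
    N = B.shape[1]
    out = np.zeros((M, N), dtype=data.dtype)
    for i in range(M):
        # Sparse axis: trip count indptr[i+1] - indptr[i] varies per row.
        for p in range(indptr[i], indptr[i + 1]):
            # Dense axis: this update runs over all N columns of B,
            # so it is the natural candidate for vectorization.
            out[i, :] += data[p] * B[indices[p], :]
    return out
```

When the tensor is sparse along every dimension, no such dense inner loop exists, which is where the scheduling question gets harder.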

The dynamism of the varying idx/val arrays also opens an interesting
discussion about how AutoTVM will be used on more dynamic operators. Right
now TOPI is built around the assumption that shapes are known statically.
This will be an interesting implementation challenge, which probably
deserves its own RFC. Perhaps @ZihengJiang the `Extension for Scheduling`
section can point to a follow-up RFC?
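
As a rough illustration of one possible direction (purely hypothetical, not an existing AutoTVM or TOPI API): tuning could be done for a small set of static "bucket" sizes, with a runtime dispatch that pads the dynamic nonzero count up to the nearest tuned bucket. The bucket sizes and helper below are assumptions for the sketch.

```python
# Assumed set of statically tuned nnz sizes; a real system would pick
# these from profiling the workload.
BUCKETS = [64, 256, 1024]

def pick_bucket(nnz):
    """Return the smallest tuned bucket that fits the runtime nnz,
    falling back to the largest bucket if none fits."""
    for b in BUCKETS:
        if nnz <= b:
            return b
    return BUCKETS[-1]
```

Each bucket keeps the static-shape assumption intact for the tuner, at the cost of some padding overhead at runtime; whether that trade-off is acceptable is exactly the kind of question a follow-up RFC could settle.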

https://github.com/apache/incubator-tvm/issues/4332#issuecomment-557996282