Thanks @ZihengJiang for bringing up the RFC, and especially for the in-depth 
thinking on adapting the representation from TACO.
I think we should also address a few detailed issues in how we deal with sparse tensors.

1. How shall we implement `SparsePlaceholder` when the `idx` and `val` arrays 
have varying length? (See the first sketch after this list.)
2. As TVM's current `ComputeOp` doesn't support computation over varying-length 
vectors, how do we bring this into `SparseComputeOp`?
3. Have you considered how to perform `Vectorize` and `Tensorize` operations on 
sparse tensors? The lowering steps would be very different from the dense case, 
and we might need to maintain `masks` for `Vectorize`, alongside varying-length 
vectors like `indices` and `values` (see the second sketch after this list).
4. Does this RFC support automatic sparsity regularization, or only inference 
with existing sparse tensors? If we only support inference, how shall we import 
existing sparse tensors from other frameworks to demonstrate the capability?
5. Which layout shall we start with, NHWC or NCHW?
6. I think we should keep the `SparseTensor` operators easy to quantize with 
the existing code base, or at least with only small modifications.
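
For question 1, here is a minimal sketch of how a CSR-style `SparsePlaceholder` could be modeled on top of today's dense placeholders (using the `tvm.te` API): a symbolic `nnz` extent lets `indices` and `values` vary in length at runtime. The `indptr`/`indices`/`values` names are just the usual CSR convention, not a proposed API.

```python
import tvm
from tvm import te

m = te.var("m")      # number of rows, symbolic
nnz = te.var("nnz")  # number of non-zeros, only known at runtime

# CSR storage: `indptr` has the fixed length m + 1, while `indices`
# and `values` share the varying length nnz.
indptr = te.placeholder((m + 1,), dtype="int32", name="indptr")
indices = te.placeholder((nnz,), dtype="int32", name="indices")
values = te.placeholder((nnz,), dtype="float32", name="values")

# Per-row non-zero counts are already expressible, since the extent is m.
row_nnz = te.compute((m,), lambda i: indptr[i + 1] - indptr[i], name="row_nnz")
```

The catch, and exactly question 2, is that a per-row reduction over `values[indptr[i]:indptr[i + 1]]` would need a data-dependent `reduce_axis` extent, which today's `ComputeOp` cannot express.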
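
For question 3, here is a minimal numpy sketch of what mask-based vectorization over variable-length CSR rows could look like: each row is consumed in fixed-width chunks, and a mask disables the lanes that run past the row's end. `VLEN`, `masked_row_spmv`, and the rest are hypothetical names for illustration only.

```python
import numpy as np

VLEN = 4  # hypothetical vector width

def masked_row_spmv(indptr, indices, values, x):
    """y = A @ x for CSR A, consuming each row in VLEN-wide masked chunks."""
    y = np.zeros(len(indptr) - 1, dtype=values.dtype)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        for base in range(start, end, VLEN):
            lane = base + np.arange(VLEN)
            mask = lane < end                        # lanes past the row end are off
            safe = np.minimum(lane, end - 1)         # clamp so the gather stays in bounds
            val = np.where(mask, values[safe], 0.0)  # masked load of the values
            y[row] += np.sum(val * x[indices[safe]])
    return y

# Example: [[1, 0, 2], [0, 0, 3]] @ [1, 1, 1] == [3, 3]
indptr = np.array([0, 2, 3])
indices = np.array([0, 2, 2])
values = np.array([1.0, 2.0, 3.0])
print(masked_row_spmv(indptr, indices, values, np.ones(3)))  # [3. 3.]
```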

Another challenge we have to consider is that it is hard to introduce sparsity 
into depth-wise convolution operators, even though depth-wise convolution is 
very common in modern neural networks. Supporting **sparse** depth-wise 
convolution will be very challenging.
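
To make the depth-wise case concrete, here is a minimal numpy sketch (assuming `(C, H, W)` activations and a `(C, K, K)` weight, stride 1, no padding). Each output channel reduces over a single small `K x K` filter with no cross-channel reduction, so fine-grained zeros save very little work, and zeroing a whole filter removes that channel's output entirely.

```python
import numpy as np

def depthwise_conv2d(inp, weight):
    """inp: (C, H, W); weight: (C, K, K); stride 1, no padding."""
    C, H, W = inp.shape
    _, K, _ = weight.shape
    out = np.zeros((C, H - K + 1, W - K + 1), dtype=inp.dtype)
    for c in range(C):  # each output channel reads exactly one input channel
        for i in range(K):
            for j in range(K):
                out[c] += weight[c, i, j] * inp[c, i:i + H - K + 1, j:j + W - K + 1]
    return out
```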
