A significant driver of progress in deep learning has been advances in
computational resources. Since those resources are often limited, there is a
trend to replace dense computation in DNNs with sparse computation in order to
speed up execution and save memory, which in turn enables larger models.
Examples include [neural network
pruning](https://github.com/he-y/Awesome-Pruning) and the [sparse
transformer](https://openai.com/blog/sparse-transformer/). Some new workloads
such as GNNs also rely on sparse support. It would be great if TVM could
represent sparse computation workloads.
There is already some sparse support in TVM. Overall, it uses dense tensors to
describe sparse CSR/BSR tensors on top of the existing Tensor DSL, as in
https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/nn/sparse.py.
However, this approach has some obvious drawbacks:
- It is quite tedious to describe the sparse computation, since you have to
handle the indexing manually (the sketch below shows the kind of bookkeeping
involved).
- It does not provide proper abstractions for scheduling sparse kernels.
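To make the first drawback concrete, here is roughly what the manual `pos`/`idx`
bookkeeping looks like for a CSR matrix-vector product. This is plain
Python/NumPy purely for illustration (the separate `indptr`/`indices`/`data`
arrays mirror the CSR layout used by the existing topi kernels); it is not the
actual topi code.

```python
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a CSR matrix A; all index bookkeeping is explicit."""
    m = len(indptr) - 1                    # number of rows
    y = np.zeros(m, dtype=data.dtype)
    for i in range(m):
        # indptr[i]:indptr[i+1] delimits the non-zeros of row i
        for p in range(indptr[i], indptr[i + 1]):
            y[i] += data[p] * x[indices[p]]
    return y

# 3x4 example:  [[1, 0, 2, 3],
#                [0, 0, 0, 0],
#                [4, 0, 0, 5]]
indptr  = np.array([0, 3, 3, 5])
indices = np.array([0, 2, 3, 0, 3])
data    = np.array([1., 2., 3., 4., 5.])
x       = np.ones(4)
print(csr_matvec(indptr, indices, data, x))   # [6. 0. 9.]
```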
This RFC would like to discuss how to add native sparse support in TVM.
## Sparse Workloads
Here are some sparse workloads that we would like to keep in mind and take into
consideration during design.
- **Graph Neural Networks**: GNNs are a class of neural networks that operate
directly on graph structure, and they have gained increasing popularity in
various domains, including social networks, knowledge graphs, recommender
systems, and even life science. Graph data are often sparse, so there is an
urgent demand for optimized sparse kernels in GNN workloads, such as sparse
matrix-matrix multiplication (SPMM), sampled dense-dense matrix multiplication
(SDDMM), segment_sum, segment_min, segment_max, segment_mm, etc. (a small NumPy
sketch of SDDMM and segment_sum follows this list).
- **Block Sparse**: Even though sparse operations need less compute and memory
than their dense counterparts, the speed-up observed from using them is often
smaller than expected across hardware platforms. The block sparse
representation (BSR) is more hardware-friendly and easier to optimize. There is
also existing work on inducing block sparsity in RNNs/Transformers by pruning
blocks of weights.
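To pin down the semantics of two of the GNN kernels mentioned above, here is a
small NumPy sketch of SDDMM and segment_sum. It is purely illustrative and says
nothing about how these kernels would be expressed or scheduled in TVM.

```python
import numpy as np

def sddmm(rows, cols, A, B):
    """Sampled dense-dense matmul: compute (A @ B)[r, c] only at the
    sampled positions (rows[k], cols[k]), e.g. the edges of a graph."""
    return np.array([A[r] @ B[:, c] for r, c in zip(rows, cols)])

def segment_sum(data, segment_ids, num_segments):
    """Sum rows of `data` that share the same segment id
    (e.g. aggregate edge messages per destination node)."""
    out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
    np.add.at(out, segment_ids, data)
    return out

# tiny demo
A = np.arange(6.).reshape(2, 3)           # 2 x 3
B = np.arange(12.).reshape(3, 4)          # 3 x 4
print(sddmm([0, 1], [1, 3], A, B))        # two sampled entries of A @ B
print(segment_sum(np.ones((4, 2)), np.array([0, 0, 1, 1]), 2))
```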
From the above workloads, we can summarize some requirements that our sparse
support needs to meet:
- It should be able to represent *common sparse formats*: CSR, RSR, BSR, etc.
- Although most workloads focus on 2D sparse matrices, it would be better if we
can also represent *multi-dimensional tensors*, so that the design fits the
original TVM Tensor abstraction.
After some investigation, we found that the tree hierarchy representation used
by TACO and ExTensor is a good candidate.
## The Tree Hierarchy Representation
The tree hierarchy representation can describe tensors of any order by
constructing formats from a bounded number of primitives, e.g., specifying
whether each dimension is dense or sparse. (TACO also supports other level
types such as *range*, *hash*, etc., but we can expand to those in the future
depending on demand.) With this approach, a CSR matrix can be represented as
`SparseTensor([Dense, Sparse])`, RSR as `SparseTensor([Sparse, Dense])`, and
BSR as `SparseTensor([Dense, Dense, Sparse, Sparse])`.
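As a purely hypothetical sketch of what such format declarations could look
like in Python (the `Dense`/`Sparse`/`SparseTensorFormat` names simply mirror
the notation above and are not an existing TVM API):

```python
from enum import Enum

class AxisKind(Enum):
    Dense = "dense"    # the axis is stored as a full range [0, size)
    Sparse = "sparse"  # the axis is compressed via pos/idx arrays

Dense, Sparse = AxisKind.Dense, AxisKind.Sparse

class SparseTensorFormat:
    """Per-axis storage kinds, listed from the outermost axis inwards."""
    def __init__(self, kinds):
        self.kinds = list(kinds)

csr = SparseTensorFormat([Dense, Sparse])                 # CSR matrix
rsr = SparseTensorFormat([Sparse, Dense])                 # RSR matrix
bsr = SparseTensorFormat([Dense, Dense, Sparse, Sparse])  # BSR, as above
```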
With the tree hierarchy representation, we can see that a general sparse tensor
is actually composed of several dense arrays:
- An array `A_val` stores the non-zero elements of tensor A.
- For every dense axis: an integer `Ai_size` stores the size of tensor A's i-th
dimension.
- For every sparse axis: two index arrays, `Ai_pos` and `Ai_idx`, together form
a segmented vector with one segment per entry in the previous dimension (the
parent node in the tree). The `Ai_idx` array stores all the non-zero indices in
that dimension, while the `Ai_pos` array stores the location in the `Ai_idx`
array where each segment begins (see the sketch below).
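As a rough sketch of how one sparse axis is packed into such a segmented
vector (plain Python, assuming the parent entries are simply the rows of a 2D
tensor):

```python
def pack_sparse_axis(rows):
    """Pack one sparse axis. `rows[p]` lists the (index, value) pairs stored
    under parent entry p. Returns the pos/idx arrays plus the packed values."""
    pos, idx, val = [0], [], []
    for entries in rows:          # one segment per parent entry
        for j, v in entries:
            idx.append(j)         # non-zero coordinate along this axis
            val.append(v)
        pos.append(len(idx))      # where the next segment begins
    return pos, idx, val

# Rows of the example tensor used below: [a,0,b,c], [0,0,0,0], [d,0,0,e]
rows = [[(0, "a"), (2, "b"), (3, "c")], [], [(0, "d"), (3, "e")]]
print(pack_sparse_axis(rows))
# ([0, 3, 3, 5], [0, 2, 3, 0, 3], ['a', 'b', 'c', 'd', 'e'])
```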
### Understanding the Representation with Examples
Here we use a 2D example to show how the same sparse tensor is represented
under different formats:
```
example tensor:
[
a, 0, b, c,
0, 0, 0, 0,
d, 0, 0, e,
]
```
```
Format:
[Dense, Dense]
Storage:
axis 0
A0_size = 3
axis 1
A1_size = 4
values of A
A_val = [a, 0, b, c, 0, 0, 0, 0, d, 0, 0, e]
Access:
produce B {
for (i, 0, m) {
for (j, 0, n) {
B[((i*n) + j)] = A[((i*n) + j)]
}
}
}
```
```
Format:
[Dense, Sparse]
Storage:
axis 0
A0_size = 3
axis 1
A1_pos = [0, 3, 3, 5]
A1_idx = [0, 2, 3, 0, 3]
values of A
A_val = [a, b, c, d, e]
Access:
for (i, 0, A0_size) {
for (j, A1_pos[i], A1_pos[i+1]) {
idx = {i, A1_idx[j]}
val = A_val[j];
}
}
```
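As a quick cross-check of the `[Dense, Sparse]` storage above, the same arrays
can be obtained from scipy with numeric stand-ins 1..5 for a..e (scipy's
`indptr`/`indices`/`data` correspond to `A1_pos`/`A1_idx`/`A_val`):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[1, 0, 2, 3],
              [0, 0, 0, 0],
              [4, 0, 0, 5]])
csr = csr_matrix(A)
print(csr.indptr)   # [0 3 3 5]   -> A1_pos
print(csr.indices)  # [0 2 3 0 3] -> A1_idx
print(csr.data)     # [1 2 3 4 5] -> A_val
```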
```
Format:
[Sparse, Dense]
Storage:
axis 0
A0_pos = [0, 2]
A0_idx = [0, 2]
axis 1
A1_size = 4
values of A
A_val = [a, 0, b, c, d, 0, 0, e]
Access:
for (i, A0_pos[0], A0_pos[1]) {
for (j, 0, A1_size) {