hanhanW wrote:

I'm -1 on using the `tensor.reshape` op. IMO, we should only use
`tensor.expand_shape`/`tensor.collapse_shape`; they compose much better with
existing transformations.
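
For what it's worth, here is a minimal sketch of the difference (the exact
assembly is the form contemporary with this PR and may vary across MLIR
revisions; `%src` is a made-up value). `tensor.expand_shape` encodes the
reassociation groups directly in the op, while `tensor.reshape` only sees an
opaque shape operand, which is why the former is easier for transformations to
reason about:

```mlir
// tensor.reshape: the target shape is an SSA value, so the mapping between
// source and result dims is opaque to transformations.
%shape = arith.constant dense<[16, 4]> : tensor<2xi64>
%r = tensor.reshape %src(%shape)
    : (tensor<64xf32>, tensor<2xi64>) -> tensor<16x4xf32>

// tensor.expand_shape: the reassociation ([[0, 1]], i.e. result dims 0 and 1
// both come from source dim 0) is explicit in the op itself.
%e = tensor.expand_shape %src [[0, 1]] : tensor<64xf32> into tensor<16x4xf32>
```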

Out of curiosity, what use case do you have in mind? Why do we need to lower a
fully dynamic pack op? If this happens at the high-level graph stage, we can
just keep `tensor.pack`, which carries more meaningful information. If it
happens at a low-level stage (e.g., around vectorization), I'd expect the inner
tile sizes to already be resolved to static values. In that case, we can still
use `tensor.expand_shape`: it supports expanding one dynamic extent into a
single dynamic extent plus other static extents (e.g., `? -> ?x4`), as in the
sketch below.
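
A minimal sketch of that `? -> ?x4` case (again using the assembly form
contemporary with this PR; the function name is made up):

```mlir
// Expand a dynamic extent into a dynamic outer extent plus a static inner
// tile of 4, i.e. the `? -> ?x4` case.
func.func @expand_dyn(%src: tensor<?xf32>) -> tensor<?x4xf32> {
  %expanded = tensor.expand_shape %src [[0, 1]]
      : tensor<?xf32> into tensor<?x4xf32>
  return %expanded : tensor<?x4xf32>
}
```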

https://github.com/llvm/llvm-project/pull/76003