In addition to the use cases and experience I've mentioned previously, I want to further highlight that symbolic shape support has become even more important in recent months, driven mainly by the requirements of deploying decoder models (e.g., GPT). Text generation is an inherently dynamic-shape workload in both sequence length and batch size, so padding everything to a fixed maximum is impractical due to its inefficiency, as recent publications have already shown. It is extremely important for TVM to support this case if we want it to remain a state-of-the-art deep learning compiler.
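To make this concrete, here is a minimal sketch (not taken from the RFC text) of how a single decode step with a symbolic sequence length could be written, assuming a Relax TVMScript syntax close to the prototype; the symbolic dimension `n`, the hidden size, and the matmul body are purely illustrative.

```python
# Sketch only: syntax assumed from the Relax TVMScript prototype; `n` and the
# matmul workload are illustrative, not part of the RFC.
import tvm
from tvm.script import relax as R


@tvm.script.ir_module
class DecoderStep:
    @R.function
    def main(
        x: R.Tensor(("n", 4096), "float32"),   # n = current sequence length, kept symbolic
        w: R.Tensor((4096, 4096), "float32"),  # static weight
    ) -> R.Tensor(("n", 4096), "float32"):
        with R.dataflow():
            # The result shape (n, 4096) is tracked symbolically instead of being
            # padded to a fixed maximum length.
            y = R.matmul(x, w)
            R.output(y)
        return y
```

The point is that `n` stays symbolic through compilation, so one compiled function can serve every sequence length and batch size seen during generation, rather than paying for worst-case padding.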