Looks like `SimplifyExpr` doesn't support folding `bias_add` with a constant `bias`, see 
https://github.com/apache/tvm/blob/6942b3660df3551a3a9a86c2faba834d366a2a7e/src/relay/transforms/simplify_expr.cc#L651-L652. So neither case works unless you modify that pass. But I recommend not 
depending on `bias_add`, as explained below.

[quote="aakaverm-quic, post:9, topic:12391"]
As I mentioned in my original question I would need to preserve the conv2d and 
bias_add ops after batchnorm fold. It is more of a pattern matching requirement 
rather than an optimization one.
[/quote]

I highly suggest modifying your pattern to support both `bias_add` and `add`, as done in 
https://github.com/apache/tvm/blob/7fd73b2663ae33d341ab09834f215285eb9bd136/python/tvm/relay/op/contrib/cutlass.py#L45

Frontends are not consistent about which op they generate for bias addition.

And I recommend experimenting with simple test cases like 
https://github.com/apache/tvm/blob/ac6607282e080dc15cce7d9cf565f5d390ba0f16/tests/python/relay/test_pass_fold_constant.py#L305
 rather than starting from the whole ResNet-50.
