@matt-arm the PR you posted solved the issue of tuple constant propagation, but
it does not seem to solve the tuple var node issue. In this particular case, for
example, we will still have a tuple of data as the first argument.
---
I have a candidate fix in this PR:
https://github.com/apache/incubator-tvm/pull/5476
---
Hi, welcome to the forum :) I'm working on fixing this exact issue at the
moment. It comes about because constant tuples are not correctly propagated
into the partitioned regions, so you can't see the data of the tuple, only its
type. I hope to have a fix in review either later today or tomorrow.
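To illustrate what "only its type" means, here is a hedged sketch (the helper is made up, not a TVM API; only the Relay node types are real) of how a codegen could check whether a tuple argument actually carries constant data:

```cpp
// Illustrative helper, not part of TVM (only the Relay node types are real).
#include <tvm/relay/expr.h>

namespace example {

// Returns true when `arg` is a tuple literal whose fields are all constants,
// i.e. the codegen could read the actual values via ConstantNode::data.
// A Var of tuple type fails this check: only checked_type() is available,
// which is exactly the symptom before constant tuples get propagated.
bool TupleCarriesConstantData(const tvm::relay::Expr& arg) {
  using namespace tvm::relay;
  const auto* tuple = arg.as<TupleNode>();
  if (tuple == nullptr) return false;
  for (const Expr& field : tuple->fields) {
    if (!field.as<ConstantNode>()) return false;
  }
  return true;
}

}  // namespace example
```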
---
I will have a try. Thanks very much.
---
@matt-arm: you may find this interesting.
---
At the moment we suggest that your codegen flatten the tuple here:
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L141
In addition, when processing concatenate nodes, your codegen can retrieve the
tuple information by
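As a rough illustration (the class name and structure below are made up, loosely modeled on an `ExprVisitor`-based codegen rather than the actual DNNL one), flattening the tuple argument of a concatenate call could look like this:

```cpp
// Rough sketch only: class name and structure are illustrative, not the
// actual DNNL codegen. It shows how the tuple argument of a concatenate
// call can be flattened so each tensor is handled as an ordinary input.
#include <tvm/relay/attrs/transform.h>
#include <tvm/relay/expr.h>
#include <tvm/relay/expr_functor.h>
#include <tvm/relay/op.h>

namespace example {

class FlatteningVisitor : public tvm::relay::ExprVisitor {
 public:
  void VisitExpr_(const tvm::relay::CallNode* call) final {
    using namespace tvm::relay;
    const auto* op = call->op.as<OpNode>();
    if (op != nullptr && op->name == "concatenate") {
      // concatenate takes a single tuple argument; visit every field so the
      // codegen emits one input per tensor instead of one opaque tuple.
      if (const auto* tuple = call->args[0].as<TupleNode>()) {
        for (const Expr& field : tuple->fields) {
          VisitExpr(field);
        }
      }
      // The concat axis is still available from the call attributes.
      const auto* attrs = call->attrs.as<ConcatenateAttrs>();
      (void)attrs;  // e.g. attrs->axis
      return;
    }
    ExprVisitor::VisitExpr_(call);
  }
};

}  // namespace example
```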
---
Now I want to use BYOC to run the SSD-ResNet34 model, and I have met some problems.
Regarding the "concatenate" operator, when it is partitioned into a subgraph, the partitioned graph is:
def @ssdnn_0(%ssdnn_0_i0: (Tensor[(64, 4, 5776), float32], Tensor[(64, 4, 2166), float32], Tensor[(64, 4, 600), float32], Tensor[(64,