Ah yes, in general I've noticed that some schedules do not really follow the specification of the operation; I've run into this in the past. I would personally open an issue.
---
I opened an issue here: https://github.com/apache/tvm/issues/8412
I'll try to fix it if I have time this week.
---
Do you have a script to recreate it?
---
IIRC Dropout isn't really supported; when a model is run, it is removed here:
https://github.com/apache/tvm/blob/main/src/relay/transforms/simplify_inference.cc#L201.
You could add support for dropout; there is a random number generator somewhere in TVM.
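As a minimal sketch of what that removal looks like (assuming a TVM build where `relay.transform.SimplifyInference` behaves as linked above), you can watch `nn.dropout` disappear from a module:

```python
import tvm
from tvm import relay

# A tiny function containing nn.dropout
x = relay.var("x", shape=(1, 16), dtype="float32")
y = relay.nn.dropout(x, rate=0.5)
mod = tvm.IRModule.from_expr(relay.Function([x], y))

# SimplifyInference rewrites nn.dropout into the identity at inference time
mod = relay.transform.InferType()(mod)
mod = relay.transform.SimplifyInference()(mod)
print(mod)  # the dropout call is gone; the function just returns x
```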
---
[quote="Lyken17, post:1, topic:11305"]
`gradient`
[/quote]
Ah yes, I don't believe `transform.gradient` will call the simplify-inference pass I pointed to, but my point is that dropout has no implementation, so if you have a model with dropout you will get an error like the one you see.
Therefore
---
Yep, that's right.
I think this
http://tvm.apache.org/docs/dev/how_to/relay_add_op.html#hooking-up-compute-and-strategy-with-relay
will be helpful, specifically step 5.
Right now you don't have a compute implementation, so you need to create one.
Unfortunately I don't know a good tutorial to do
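For reference, the registration pattern from step 5 of that guide looks roughly like the sketch below. This is a hedged sketch, not the tutorial verbatim: `my_op` is a hypothetical operator assumed to already have a type relation registered (per the earlier steps of the guide), and identity is used as a stand-in compute.

```python
from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import wrap_topi_schedule
from tvm.target import override_native_generic_func

@override_native_generic_func("my_op_strategy")
def my_op_strategy(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()
    strategy.add_implementation(
        # compute: builds the tensor-expression body of the op
        lambda attrs, inputs, out_type: [topi.identity(inputs[0])],
        # schedule: how that compute gets lowered for the target
        wrap_topi_schedule(topi.generic.schedule_injective),
        name="my_op.generic",
    )
    return strategy

_op.register_strategy("my_op", my_op_strategy)
```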
---
If you share the Python file you ran, I can try to recreate it.
---
Yes that is correct. Though I believe someone was planning to work on this one
in the next week.
---
If people are suffering from this problem, please send a script/code to
recreate it. It should be easy to solve.
---
A strategy is a combination of compute + schedule which can be used to
implement an operation efficiently. For example, for conv2d we register both a
regular conv2d compute + schedule and a winograd conv2d compute + schedule as
appropriate.
When autotuning, the list of strategies for an oper
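Extending the earlier sketch, a single strategy object can carry several implementations of one operator; the names here are hypothetical placeholders (both computes are just identity), with `plevel` marking the implementation preferred when it applies:

```python
from tvm import topi
from tvm.relay.op import op as _op
from tvm.relay.op.strategy.generic import wrap_topi_schedule

def conv2d_like_strategy(attrs, inputs, out_type, target):
    strategy = _op.OpStrategy()
    # "regular" implementation: always valid
    strategy.add_implementation(
        lambda attrs, inputs, out_type: [topi.identity(inputs[0])],
        wrap_topi_schedule(topi.generic.schedule_injective),
        name="example.direct",
    )
    # alternative (think winograd): higher plevel = preferred when applicable
    strategy.add_implementation(
        lambda attrs, inputs, out_type: [topi.identity(inputs[0])],
        wrap_topi_schedule(topi.generic.schedule_injective),
        name="example.winograd_like",
        plevel=15,
    )
    return strategy
```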
---
For `arr[i][j] = 1`, just use `relay.ones` instead of `relay.zeros`.
For `arr[i][j] += 1`, use `scatter_add`, whose semantics are:
```
output[indices[i][j]][j] += updates[i][j] if axis = 0,
output[i][indices[i][j]] += updates[i][j] if axis = 1,
```
If scatter_add is insufficient, then you will need to u
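Putting those pieces together, here is a minimal sketch of `arr[i][j] += 1` at selected positions (shapes, dtypes, and index values are illustrative):

```python
import numpy as np
import tvm
from tvm import relay

# Accumulate +1 into a (3, 3) array at rows chosen by `indices`, along axis 0
data = relay.var("data", shape=(3, 3), dtype="float32")
indices = relay.var("indices", shape=(2, 3), dtype="int64")
updates = relay.ones((2, 3), dtype="float32")  # the "+1" updates
out = relay.scatter_add(data, indices, updates, axis=0)

mod = tvm.IRModule.from_expr(relay.Function([data, indices], out))
result = relay.create_executor("graph", mod=mod).evaluate()(
    np.zeros((3, 3), dtype="float32"),
    np.array([[0, 1, 2], [2, 1, 0]], dtype="int64"),
)
print(result)  # each (indices[i][j], j) position has been incremented by 1
```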
---
If you give me the script you are running and the model, I can help out.
This is likely due to reshape not supporting dynamic shapes, but there are probably several workarounds.
---
Hmm, so there are different levels of representation for the operations in the model. Relay is the graph-level representation; it is like your standard dataflow graph, e.g. in TensorFlow or ONNX.
These eventually get lowered down to TIR, which looks like actual code with for loops.
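To make that concrete, a minimal sketch using the TE/TIR APIs shows the loop-level code that lowering produces:

```python
import tvm
from tvm import te

# Graph-level ops eventually become explicit loop nests at the TIR level
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

s = te.create_schedule(B.op)
# Prints TIR: a for-loop over i computing B[i] = A[i] + 1
print(tvm.lower(s, [A, B], simple_mode=True))
```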
---
I was not able to reproduce the error. I am using torch `1.9.0.post2`. Do you still have an issue?
---
Oh, you were using Windows this whole time???
Yeah, I think it might be a little tricky; Windows has the least amount of testing and use, I believe.
---
I would suggest looking into converting the model --> onnx --> relay if
possible. The onnx frontend is much more mature.
---
Dropout is usually a training-time-only operation; during inference all nodes are kept.
As TVM does not support training at the moment, the way it works is that dropout is basically removed from any graph.
---
IIRC `0` and `-1` are special values whose behavior is similar to onnx:
https://github.com/onnx/onnx/blob/master/docs/Operators.md#Reshape
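For instance, a small sketch of that behavior in Relay: `0` copies the corresponding input dimension and `-1` infers the remaining extent:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(2, 3, 4), dtype="float32")
# 0 keeps the input's first dimension (2); -1 infers the rest (3 * 4 = 12)
y = relay.reshape(x, newshape=(0, -1))

mod = tvm.IRModule.from_expr(relay.Function([x], y))
print(relay.transform.InferType()(mod))  # output type: Tensor[(2, 12), float32]
```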
---
I'll take a closer look this week. As for speed, which type of device are you running this on? Not all targets have good support for fp16; most notably, x86 CPUs do not.
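If you want to experiment with fp16 anyway, one path (a hedged sketch; assumes a TVM build that ships the mixed-precision pass) is to convert the Relay module before compiling:

```python
import tvm
from tvm import relay

# A tiny fp32 module to convert (stand-in for a real model)
x = relay.var("x", shape=(1, 8), dtype="float32")
w = relay.var("w", shape=(4, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))

mod = relay.transform.InferType()(mod)
# Rewrites eligible ops to float16, inserting casts where needed
mod = relay.transform.ToMixedPrecision("float16")(mod)
print(mod)
```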
---
https://github.com/AndrewZhaoLuo/TVM-Sandbox/blob/main/relay/graph_debugger_example.py
Here is an example of using the graph debugger. My apologies, as it isn't very complete, and you'll have to manually associate the tensor dumps.
If you use graph debugger there is also an interesting function
`g
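As a minimal sketch of getting the debugger running (names here are illustrative: it assumes `lib` came from `relay.build`, `dev` is your TVM device, and your model has an input called "data"):

```python
from tvm.contrib.debugger import debug_executor

# Create a debug runtime; per-node outputs and timings land under dump_root
m = debug_executor.create(
    lib.get_graph_json(), lib.get_lib(), dev, dump_root="/tmp/tvmdbg"
)
m.set_input("data", input_array)  # input name/array are placeholders
m.run()  # prints a per-op time profile and dumps each node's output tensor
```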
---
I believe you will need to do this if some inputs in your ONNX model don't have fully defined shapes. E.g., the batch dimension might not be defined, so in your ONNX model the shape will be something like ['?', 3, 224, 224]. In this case, if you have a fixed shape it probably is helpful; otherwise yo
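For reference, a small sketch of pinning those shapes at import time (the input name "input" and the file path are assumptions; use your model's actual input names):

```python
import onnx
from tvm import relay

onnx_model = onnx.load("model.onnx")  # path is illustrative
# Pin the dynamic batch dimension to a fixed value when importing
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict)
```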