[quote="ANSHUMAN.TRIPATHY, post:10, topic:6213"]
I have a proposal for two APIs, as below:
1. NDArray().Detach() --> once output is obtained from the TVM runtime, keep it for
read-only use.
2. NDArray().Attach() --> when the user wants to feed the same NDArray back into
the runtime.
Please share your thoughts.
cc @yzhliu @haichen @jroesch @ajtulloch @liangfu @thierry
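The proposed Detach()/Attach() semantics could be sketched with a plain Python class (the read-only flag and error behavior here are my assumptions for illustration, not the actual TVM NDArray API):

```
class NDArray:
    """Toy model of the proposed Detach()/Attach() semantics."""

    def __init__(self, data):
        self.data = list(data)
        self._writable = True  # while attached, data may be written back

    def detach(self):
        # After fetching output from the runtime, keep it read-only.
        self._writable = False

    def attach(self):
        # Allow the same NDArray to be fed back into the runtime.
        self._writable = True

    def write(self, index, value):
        if not self._writable:
            raise RuntimeError("NDArray is detached (read-only)")
        self.data[index] = value


arr = NDArray([1, 2, 3])
arr.detach()
try:
    arr.write(0, 9)       # rejected while detached
except RuntimeError:
    pass
arr.attach()
arr.write(0, 9)           # allowed again after re-attaching
```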
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-improve-pull-requests-with-respect-to-bug-fixes/6529/2) to respond.
Merged #5460 into master.
https://github.com/apache/incubator-tvm/pull/5460#event-3283017557
To use the tvm.lower Python API, you need to pass the schedule and the input/output tensors:
```
print(tvm.lower(s, [data, valid_count, out], name="test_nms"))
```
---
[Visit Topic](https://discuss.tvm.ai/t/vta-a-workaround-for-deploying-faster-r-cnn-on-target-ext-dev-vta-and-arm-cpu/6516/5) to respond.
**Motivation**
We would like to move towards a world where release cycles, and what each
release is intended for, become more predictable. As part of this, releases
need regression fixes. However, if the community is making releases, th
How is the “lower schedule” printed out?
---
[Visit Topic](https://discuss.tvm.ai/t/vta-a-workaround-for-deploying-faster-r-cnn-on-target-ext-dev-vta-and-arm-cpu/6516/4) to respond.
At present, there are a few problems with quantization. The next step is to
modify the graph pack function to transform most convolutions into the NCHW1n16c
layout to gain acceleration. I need to add some op names to complete the AST
traversal in the graph pack function. If there is a mistake, please correct me.
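The kind of change described, extending the set of operator names a packing pass recognizes while traversing the graph, might look roughly like this (a toy sketch; the names and structure are assumptions, not the actual VTA graph pack code):

```
# Toy AST: each node is (op_name, [children]).
PACKABLE_OPS = {"nn.conv2d", "nn.relu", "add"}  # extend this set with new op names

def collect_packable(node, found):
    # Recursively walk the toy graph and record recognized operators.
    op_name, children = node
    if op_name in PACKABLE_OPS:
        found.append(op_name)
    for child in children:
        collect_packable(child, found)
    return found

graph = ("nn.conv2d", [("add", [("nn.relu", [])])])
print(collect_packable(graph, []))  # ['nn.conv2d', 'add', 'nn.relu']
```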
Thanks for starting the topic. I think one thing we do need to do is reuse the
existing CPU AutoTVM templates and possibly tune for wasm.
The lack of dlopen in wasm is not going to go away for a while due to its
special programming model. We recently have some rough ideas to get around it
and w
The pointer should be 64 bits on my virtual machine. After correcting it, I
have deployed Faster R-CNN on VTA. With this problem solved, Faster R-CNN
can be supported by VTA.
```
// note: the template arguments were stripped by the forum renderer;
// uint64_t/float are assumptions consistent with "the pointer should be 64 bits"
auto data_ptr_tmp = static_cast<uint64_t *>(input->data);
auto data_ptr = reinterpret_cast<float *>(*data_ptr_tmp);
au
```
Sorry. Actually, I may have been mistaken about the use of isView after a
closer look at the Java implementation and the test code here:
https://github.com/apache/incubator-tvm/blob/1dcf8a16ee3a93dff5ffc1ad1a66892eda03ef13/jvm/core/src/test/java/org/apache/tvm/contrib/GraphRuntimeTest.java
Others may confirm whether this is correct, but I believe the Java version
implements this by tracking whether the managed NDArray owns the native
handle, and using that to decide whether it can release it.
https://github.com/apache/incubator-tvm/blob/1dcf8a16ee3a93dff5ffc1ad1a66892eda0
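That ownership-tracking idea can be sketched in Python (hypothetical names; the JVM binding does this in Java, presumably around the native TVMArrayFree call):

```
class NativeHandleWrapper:
    """Toy model of handle ownership: views share the handle but never free it."""

    def __init__(self, handle, is_owner):
        self.handle = handle
        self.is_owner = is_owner

    def view(self):
        # A view shares the native handle without taking ownership.
        return NativeHandleWrapper(self.handle, is_owner=False)

    def release(self, free_fn):
        # Only the owner actually frees the native memory; repeat calls are no-ops.
        if self.is_owner and self.handle is not None:
            free_fn(self.handle)
        self.handle = None


freed = []
owner = NativeHandleWrapper(handle=0x1234, is_owner=True)
v = owner.view()
v.release(freed.append)      # no-op: a view never frees
owner.release(freed.append)  # the owner frees the handle exactly once
```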
@tqchen: I seek your expert advice on the point below. Please help!
This is regarding the NDArray design.
In this feature I am trying to bridge between the TVM.NET runtime (managed
memory) and the TVM C runtime (unmanaged memory).
Below is a snapshot of the NDArray class design I currently have.
(image: NDArray class design snapshot; attachment truncated)