Yeah, I forgot to mention [this](https://docs.tvm.ai/vta/install.html#vta-simulator-installation); glad that you figured it out.
---
Right now relay aot/interpreter/vm all use reference counting to collect memory. However, it is possible to create cyclic data structures in relay, as demonstrated below:
```
data Loop = Ref (Optional Loop)
x : Loop = Ref None
match x with
| r -> r := Some x
end
```
In short, we can have a data type holding a mutable reference and then write the value back into its own cell, producing a cycle that reference counting alone will never reclaim.
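To make that concrete, here is a minimal sketch in plain Python rather than relay (the `Ref` class is a made-up stand-in for relay's reference type; CPython is used only because it pairs reference counting with a separate backup cycle collector):

```python
import gc

class Ref:
    """Hypothetical stand-in for relay's mutable reference cell."""
    def __init__(self, value=None):
        self.value = value

x = Ref(None)   # x : Loop = Ref None
x.value = x     # r := Some x  -- the cell now points back at itself

del x           # the self-reference keeps the refcount at one, so pure
                # reference counting would never free this object

# CPython only reclaims it through its separate cycle collector:
print(gc.collect())   # prints the number of unreachable objects it found
```

Any runtime that is purely reference-counted would leak every value built this way.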
---
Thanks for bringing up this topic. We might need to consider the scenarios in which we want to use relay, and design under those constraints. For now the main use case is inference (maybe training in the future). I guess most of the objects allocated during inference will be `TensorObject`, plus a few `Da
---
Personally, I am not in favor of introducing GC into the system. Reference counting has been fine and is great for handling external resources (GPU memory, etc.). For exactly the same reason, languages like java/scala have had a pretty bad time working with GPU memory (memory not being de-allocated precisely when it is no longer referenced).
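As a rough illustration of that determinism argument (a sketch, not TVM code; `GPUBuffer` is made up), reference counting releases an external resource the moment the last reference disappears, while a tracing GC only gets to it at some later, unpredictable collection:

```python
class GPUBuffer:
    """Hypothetical wrapper around a device allocation, not a real TVM API."""
    def __init__(self, nbytes):
        self.nbytes = nbytes
        print(f"allocate {nbytes} bytes of device memory")

    def __del__(self):
        # Under reference counting this runs the instant the last reference
        # goes away; under a tracing GC it runs whenever the collector
        # eventually decides to, so scarce device memory can pile up.
        print(f"free {self.nbytes} bytes of device memory")

buf = GPUBuffer(1 << 20)
buf = None   # refcount drops to zero, so the buffer is freed right here
print("the buffer was already released before this line ran")
```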