It seems we agreed that weak references are better than GC, so I am closing this RFC thread for now. Thanks to everyone who participated in the discussion.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/342
Closed #3423.
I talked with Zach DeVito from the PyTorch team for a while about refcounting; there are quite a few benefits to using reference counting. We should probably just use weak refs, plus the solutions from Counting Immutable Beans (a recent paper by my MSR collaborator, where they do much better than GC languages).
@MarisaKirisame, https://github.com/dmlc/tvm/pull/3448 was the leak @ajtulloch
was referring to.
@ajtulloch indeed. However, it will always be possible for people to carelessly create a strong reference cycle and be unable to collect it.
cc @hlu1 re: the leak from e.g. recursive functions taking a reference to themselves in the environment.
FWIW, can we solve these by adding the concept of weak references to the node system? It seems like in these cases closures could use weak references to other closures, and have the higher
Personally, I am not in favor of introducing GC to the system. Reference counting was fine and is great for handling external resources (GPU memory, etc.). For exactly the same reason, languages like Java/Scala had a pretty bad time working with GPU memory (memory not being de-allocated precisely
Thanks for bringing up this topic. We might need to consider the scenarios in which we want to use Relay and design under those constraints. For now the main use case is inference (maybe training in the future). I guess most of the objects allocated during inference will be `TensorObject`, plus a few `Da
Right now the Relay AOT compiler, interpreter, and VM all use reference counting to collect memory. However, it is possible to create a cyclic data structure in Relay, as demonstrated below:
data Loop = Ref (Optional Loop)
x : Loop = Ref None
match x with
| Ref r -> r := Some x
end
In short, we can have a data type holding