[TVM Discuss] [Development] Getting started with the VTA Chisel backend

2019-06-23 Thread Luis Vega via TVM Discuss


Yeah, I forgot to mention 
[this](https://docs.tvm.ai/vta/install.html#vta-simulator-installation) — glad 
that you figured it out.





---
[Visit 
Topic](https://discuss.tvm.ai/t/getting-started-with-the-vta-chisel-backend/2987/11)
 to respond.

You are receiving this because you enabled mailing list mode.


[dmlc/tvm] [Relay][RFC] Garbage Collection (#3423)

2019-06-23 Thread 雾雨魔理沙
Right now the Relay AOT compiler, interpreter, and VM all use reference counting 
to collect memory. However, it is possible to create a cyclic data structure in 
Relay, as demonstrated below:

data Loop = Ref (Optional Loop)
x : Loop = Ref None
match x with
|  r -> r := x
end

In short, we can have a data type holding a mutable (nullable) reference, 
initialize it to null, then point it at itself.

This example is contrived, but the same situation is entirely possible in real, 
meaningful Relay code: imagine a doubly linked list, or two closures referencing 
each other. These all form cyclic data structures that will never be collected 
by reference counting.
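The leak described above can be sketched in a few lines of Python (the `Obj`, `incref`, and `decref` names are hypothetical, standing in for whatever the Relay runtimes use internally): the self-reference keeps the count above zero even after the last external binding is gone.

```python
# Minimal sketch of why plain reference counting leaks cycles.
class Obj:
    def __init__(self):
        self.refcount = 0
        self.field = None  # mutable reference, like the Ref in the Relay example

def incref(o):
    o.refcount += 1

def decref(o):
    o.refcount -= 1
    if o.refcount == 0 and o.field is not None:
        decref(o.field)  # freeing o would release what it points to

x = Obj()
incref(x)        # the external binding `x : Loop`
x.field = x      # r := x — the object now points at itself
incref(x)        # the internal reference also counts
decref(x)        # the external binding goes out of scope
assert x.refcount == 1   # still 1: the self-reference keeps it alive forever
```

The count can never reach zero from outside, so the object is never freed — exactly the failure mode a tracing collector would catch.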

There are three problems we should discuss:

0: what algorithm should we use? mark-and-sweep, generational, etc.?

1: should we still use reference counting?

2: how can we implement the runtime only once, shared by AOT/interpreter/VM, 
instead of each of them implementing it separately?

maybe we can look at https://github.com/hsutter/gcpp?

@jroesch @tqchen @icemelon9 @wweic @junrushao1994 @nhynes any suggestion?

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/3423

Re: [dmlc/tvm] [Relay][RFC] Garbage Collection (#3423)

2019-06-23 Thread Wei Chen
Thanks for bringing up this topic. We might need to consider the scenarios in 
which we want to use Relay and design under those constraints. For now the main 
use case is inference (maybe training in the future). I guess most of the 
objects allocated during inference will be `TensorObject`, plus a few 
`DatatypeObject`; we can run benchmarks on the major models to confirm. 
`TensorObject` shouldn't cause cyclic references, and if it makes up the 
majority of the objects, we can handle it with the existing reference counting. 
For the remaining `DatatypeObject`, a mark-and-sweep GC can be run 
infrequently. But **benchmark first**. :-)
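The hybrid scheme above — refcounting for tensors, an occasional mark-and-sweep pass over datatype objects only — could be sketched as follows. This is an illustrative sketch, not TVM code; `DatatypeObject`, `mark_sweep`, and the flat `heap` list are all stand-ins.

```python
# Hypothetical sketch: mark-and-sweep restricted to datatype objects,
# run infrequently on top of ordinary reference counting.
class DatatypeObject:
    def __init__(self, *fields):
        self.fields = list(fields)

def mark_sweep(roots, heap):
    # Mark phase: everything reachable from the roots.
    marked = set()
    stack = list(roots)
    while stack:
        o = stack.pop()
        if id(o) in marked:
            continue
        marked.add(id(o))
        stack.extend(f for f in o.fields if isinstance(f, DatatypeObject))
    # Sweep phase: keep only marked objects; the rest are garbage.
    return [o for o in heap if id(o) in marked]

# A cycle unreachable from any root is collected even though its
# reference counts would never drop to zero.
a, b = DatatypeObject(), DatatypeObject()
a.fields.append(b)
b.fields.append(a)
root = DatatypeObject(DatatypeObject())
heap = [root, root.fields[0], a, b]
live = mark_sweep([root], heap)
assert a not in live and b not in live and root in live
```

Because the pass only walks `DatatypeObject` graphs, its cost scales with the (presumably small) number of datatype allocations rather than the tensor-dominated heap — which is why benchmarking the object mix first matters.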

Agreed on sharing the GC across the runtimes. For AOT, should we learn from how 
Rust gets rid of GC as a further optimization?

https://github.com/dmlc/tvm/issues/3423#issuecomment-504807018

Re: [dmlc/tvm] [Relay][RFC] Garbage Collection (#3423)

2019-06-23 Thread Tianqi Chen
Personally, I am not in favor of introducing GC to the system. Reference 
counting is fine and is great for handling external resources (GPU memory 
etc.). For exactly the same reason, languages like Java/Scala have had a pretty 
bad time working with GPU memory (memory is not de-allocated precisely at the 
point where the data structure goes out of scope).

Ref counting has its limitations, but we can be mindful to avoid cycles, and it 
works great for most of the cases we are looking at, without introducing the 
additional problems of GC.
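The determinism argument can be illustrated with CPython, which itself frees refcounted objects the instant the last reference disappears (`GPUBuffer` and the `freed` log are hypothetical stand-ins for a device allocation and its release hook):

```python
# Sketch of deterministic release under reference counting: the resource is
# freed exactly when the last reference goes away, not at some later GC pause.
freed = []

class GPUBuffer:
    def __init__(self, name):
        self.name = name
    def __del__(self):
        freed.append(self.name)  # stands in for the device-memory free call

buf = GPUBuffer("activations")
assert freed == []               # still referenced, still allocated
del buf                          # refcount hits zero: released immediately
assert freed == ["activations"]
```

Under a tracing GC, the second assertion could fail: the buffer would linger until the next collection cycle, which is the precise problem with scarce GPU memory.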

https://github.com/dmlc/tvm/issues/3423#issuecomment-504839026