I explored @icemelon9's register VM design in my branch:
https://github.com/wweic/tvm/commits/relay-rts. I would like to share some data
points.
We need to add special registers in the VM for function arguments and the return
value. A return register is necessary because when the callee function returns, its
frame is popped, so the result has to live somewhere the caller can still read.
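To make the point about the return register concrete, here is a minimal sketch in plain Python (names such as `Frame`, `invoke`, and `num_regs` are illustrative assumptions, not the actual TVM VM code):

```python
# Minimal register-VM sketch: argument registers live in the callee's frame,
# but the return value goes into a dedicated register that outlives the frame.

class Frame:
    def __init__(self, num_regs, args):
        self.regs = [None] * num_regs
        # Assumed convention: the first registers hold the call arguments.
        for i, arg in enumerate(args):
            self.regs[i] = arg

class VM:
    def __init__(self):
        self.frames = []
        self.return_register = None   # outlives any single frame

    def invoke(self, func, args):
        self.frames.append(Frame(func.num_regs, args))
        self.run(func)                # a Ret instruction writes return_register
        self.frames.pop()             # the callee's registers disappear here
        return self.return_register   # ...but its result is still visible

    def run(self, func):
        # Placeholder for the instruction dispatch loop.
        pass
```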
## Summary
@tqchen @icemelon9 @jroesch @zhiics @yongwww We discussed this in person and reached the
following consensus:
1. Remove the `Phi` instruction. Instead, extend `If` to write its result to a new
register.
2. Reuse the existing value stack as the register file, and keep an anchor in the
function frame to mark where the frame's registers begin (a rough sketch follows below).
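A rough Python sketch of the two consensus points (the instruction fields and the `RegisterFile` helper are assumptions for illustration, not the actual encoding):

```python
from dataclasses import dataclass

# 1. `If` carries a destination register, so no `Phi` is needed to merge
#    the results of the two branches.
@dataclass
class If:
    cond: int          # register holding the condition
    true_offset: int   # jump target of the true branch
    false_offset: int  # jump target of the false branch
    dst: int           # register that receives the branch result

# 2. One value stack doubles as the register file; each frame keeps an
#    anchor (base offset) into it, and register indices are frame-relative.
class RegisterFile:
    def __init__(self):
        self.stack = []

    def push_frame(self, num_regs):
        anchor = len(self.stack)
        self.stack.extend([None] * num_regs)
        return anchor

    def read(self, anchor, reg):
        return self.stack[anchor + reg]

    def write(self, anchor, reg, value):
        self.stack[anchor + reg] = value
```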
+1
https://github.com/dmlc/tvm/issues/2973#issuecomment-480426790
`RefRead` and `RefWrite` are swapped in the spec.
For `@foo(%a, %b, BarAttrs={%c: None, %d: None})`, does it mean Relay supports
records? Or does the Relay core only recognize the record-like syntax and maintain the
record internally, so that source code cannot access the record?
@MarisaKirisame Thanks for the clarification. I am just curious about the implementation
details; I don't have a use case right now.
@yangjunpro +1 on your work. As Relay is also planning to support dynamic
shapes (https://github.com/dmlc/tvm/issues/3042), we might not need to handle
step 5 directly (Relay does the JIT/bucketing under the hood). We are also
wondering whether it is reasonable to do the opposite, where the main runtime is t
I'm good with the RFC overall. I slightly prefer `!`, same as @MarisaKirisame.
https://github.com/dmlc/tvm/issues/3016#issuecomment-490325592
+1
https://github.com/dmlc/tvm/issues/3346#issuecomment-501077110
Thanks for bringing up this topic. We might need to consider the scenarios in which we
want to use Relay and design under those constraints. For now the main use
case is inference (maybe training in the future). I guess most of the objects
allocated during inference will be `TensorObject`, plus a few `Da
I also prefer floordiv, given its use in MLIR.
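For context, the two division flavors only disagree on negative operands; floor division rounds toward negative infinity while truncating division rounds toward zero:

```python
print(-7 // 2)      # -4: floor division, rounds toward negative infinity
print(int(-7 / 2))  # -3: truncating division, rounds toward zero (as in C)
```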
https://github.com/dmlc/tvm/issues/3478#issuecomment-509013801
Could you also comment on how to encode heterogeneous execution information?
https://github.com/dmlc/tvm/issues/3594#issuecomment-515497828
I think it's a good idea to add a pass before AoT/VM compilation, so we can
apply pattern-matching compilation optimization techniques only once; AoT/VM
will then just be a code generation pass. `is_XX`, `unpack_XX` sounds reasonable.
These two functions will likely use pattern matching internally,
@MarisaKirisame Yes, this is what the decompilation pass will do. But
eventually the interpreter/AoT/VM has to deal with the `is_*`/`unpack_*` function
calls. There are two ways to handle them. One way is to change the Relay core to
handle `is_*` and `unpack_*` specially. Another way is to dynamically generate
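To illustrate what the interpreter/AoT/VM would see after such a pass, here is a rough sketch in plain Python; `is_cons`/`unpack_cons` are hypothetical specializations of the `is_*`/`unpack_*` helpers mentioned above:

```python
# Hypothetical helpers for a Cons/Nil list encoded as nested pairs.
def is_cons(l):
    return l is not None      # constructor tag check

def unpack_cons(l):
    return l[0], l[1]         # field extraction

# What `match l { Cons(hd, tl) => hd + sum(tl); Nil => 0 }` could lower to:
def sum_list(l):
    if is_cons(l):
        hd, tl = unpack_cons(l)
        return hd + sum_list(tl)
    return 0

print(sum_list((1, (2, (3, None)))))  # 6
```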
We might also need to support `Any` https://github.com/dmlc/tvm/issues/3042.
https://github.com/dmlc/tvm/issues/3016#issuecomment-532348178
+1
https://github.com/dmlc/tvm/issues/4162#issuecomment-544313737
# Heterogeneous execution in Relay VM
## Goal
The Relay graph runtime supports executing different parts of a graph on different
devices, namely heterogeneous execution. We'd like to port this feature to the Relay
VM.
## Non-goals
There is a limitation in the device annotation pass: it assumes all th
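One way to picture the information the VM would need (a sketch under assumed names such as `AllocTensor.device_index` and `DeviceCopy`; not the actual bytecode design): each allocation records which device owns the buffer, and an explicit copy is inserted where a value crosses a device boundary.

```python
from dataclasses import dataclass

@dataclass
class AllocTensor:
    shape: tuple
    device_index: int    # which device owns this buffer

@dataclass
class DeviceCopy:
    src_register: int
    src_device: int
    dst_device: int

# e.g. an op placed on device 1 (GPU) feeding an op pinned to device 0 (CPU):
program = [
    AllocTensor(shape=(1, 64, 56, 56), device_index=1),
    DeviceCopy(src_register=0, src_device=1, dst_device=0),
    AllocTensor(shape=(1, 64, 56, 56), device_index=0),
]
```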
@jroesch thanks. I have put references to the PR in the RFC.
https://github.com/dmlc/tvm/issues/4178#issuecomment-545771181
Hey, there is a similar RFC on this topic
(https://github.com/apache/incubator-tvm/issues/4449).
cc @gussmith23 @zhiics @jroesch @MarisaKirisame @icemelon9 @slyubomirsky
If we can get `Any` (https://github.com/dmlc/tvm/issues/3042) merged, I think we
can support TensorArray as follows:
type dynamic_tensor =
    Tensor0 of TensorType(shape=())
  | Tensor1 of TensorType(shape=(Any))
  | Tensor2 of TensorType(shape=(Any, Any))
  | Tensor3 of TensorType(shape=(Any, Any, Any))
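For illustration, here is the same idea in plain Python rather than Relay: rank-tagged wrappers let tensors of different ranks live in one list, which is exactly what a TensorArray needs.

```python
import numpy as np

# Python analogue of the dynamic_tensor ADT: the wrapper plays the role of
# the TensorN constructors, so one list can hold tensors of any rank.
class DynamicTensor:
    def __init__(self, data):
        self.rank = data.ndim   # stands in for the Tensor0/Tensor1/... tag
        self.data = data

tensor_array = [
    DynamicTensor(np.zeros(())),      # Tensor0
    DynamicTensor(np.zeros((4,))),    # Tensor1
    DynamicTensor(np.zeros((2, 3))),  # Tensor2
]
```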
@ydy `Any` is not complete yet. Right now we are able to represent models with
dynamic shapes in Relay. We still need to finish the codegen and runtime changes
in order to execute such models.
https://discuss.tvm.ai/t/how-to-support-tf-tensorarray/1983/7