Re: [dmlc/tvm] [RFC] Dynamic Shape Support - Graph Dispatching (#4118)

2019-10-23 Thread Jon Soifer
Thanks a lot for working on this; it is going to be really impactful, especially toward supporting NLP models. I have a couple of questions: 1. Can you please explain the shape function in a little more detail? What exactly is its purpose? Will it have to be registered for every op? 2. Some op

[dmlc/tvm] [RFC][AutoTVM] Selective Tuning (#4188)

2019-10-23 Thread Cody Hao Yu
Overview - When a user wants to use AutoTVM to tune a model, she often lets AutoTVM tune every task extracted from the model sequentially. Assuming each task requires 1 hour or so, tuning a model with 10 to 100+ tasks requires days. This RFC proposes a lightweight solution to reduce tuni
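For context, the baseline this RFC wants to speed up is the usual loop that extracts every task from a model and tunes each one sequentially. A rough sketch of that flow is below; it follows the AutoTVM tutorials of this period, so treat the exact signatures as assumptions rather than a definitive recipe.

```python
# Sketch of the baseline AutoTVM flow (assumes a TVM build from around v0.6;
# exact API signatures may differ).
from tvm import autotvm, relay

# Tiny example network; in practice mod/params come from a frontend import.
data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
net = relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=16)
mod = relay.Module.from_expr(relay.Function([data, weight], net))
target = "llvm"

# Extract one tuning task per tunable op instance in the model.
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params={}, ops=(relay.op.nn.conv2d,))

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(), runner=autotvm.LocalRunner(number=10))

# Today every extracted task is tuned one after another, which is what makes
# end-to-end tuning take days for models with many tasks.
for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(n_trial=1000, measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("tune.log")])
```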

Re: [dmlc/tvm] [RFC] Dynamic Shape Support - Graph Dispatching (#4118)

2019-10-23 Thread Tianqi Chen
Thanks for the proposal. One high-level comment: ideally we want to keep the module API minimal and move transformation-like operations to the transform namespace :)
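As a loose illustration of that convention (the specific pass below is just an example, not part of this RFC), transformations are applied through the `relay.transform` namespace rather than added as methods on the module:

```python
from tvm import relay

x = relay.var("x", shape=(1, 8))
mod = relay.Module.from_expr(relay.Function([x], relay.nn.relu(x)))

# Passes live in relay.transform and map a module to a new module,
# which keeps the Module API itself small.
mod = relay.transform.InferType()(mod)
print(mod)
```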

Re: [dmlc/tvm] [RFC] Unifying Object Protocol in the Stack (#4116)

2019-10-23 Thread Tianqi Chen
# Node System Refactor Proposal This proposal is part of the unified object protocol RFC. The node refers to the base object structure for constructing AST/IR nodes as well as utilities manipulating them in the TVM stack. Historically, the node folder contains an implementation of a smart point
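For readers less familiar with the node system, here is a small Python-side illustration (assuming a TVM build from around this time) of how Relay AST nodes are constructed and then inspected through the node/reflection machinery:

```python
from tvm import relay

# Relay expressions are node objects; their fields are exposed through the
# node reflection machinery, so generic utilities can traverse them uniformly.
x = relay.var("x", shape=(1, 16))
call = relay.nn.relu(x)

print(type(call).__name__)       # Call
print(call.op.name)              # nn.relu
print(call.args[0].name_hint)    # x
```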

Re: [dmlc/tvm] [RFC] Dynamic Shape Support - Graph Dispatching (#4118)

2019-10-23 Thread Haichen Shen
@soiferj 1. The shape function is used to compute the output shape(s) of an op at runtime, when they cannot be determined at compilation time. And yes, for now, we have to register the shape function for all ops to support dynamic shape. 2. We could do this. But we need to change the attribute of `full`
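To make that concrete, here is a rough sketch of what a shape function and its registration can look like. It follows the hybrid-script style used for Relay op registration, but the exact helper names and signatures should be treated as assumptions, not the final API of this RFC.

```python
# Sketch only: a shape function for an elementwise op, written in hybrid
# script. output_tensor/const_range/int64 are hybrid-script intrinsics.
from tvm.hybrid import script
from tvm.relay.op import register_shape_func

@script
def _elemwise_shape_func(data_shape):
    # The output shape of an elementwise op equals its input shape.
    out = output_tensor((data_shape.shape[0],), "int64")
    for i in const_range(data_shape.shape[0]):
        out[i] = int64(data_shape[i])
    return out

def elemwise_shape_func(attrs, inputs, _):
    return [_elemwise_shape_func(inputs[0])]

# False: this shape function depends only on input shapes, not input data
# (ops like `full`, whose output shape comes from a value, are data dependent).
register_shape_func("nn.relu", False, elemwise_shape_func)
```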

Re: [dmlc/tvm] [RFC][VM] Heterogeneous execution in Relay VM (#4178)

2019-10-23 Thread Jared Roesch
I think if we look at my recent PR, we probably need to track the device context when we allocate storage. The storage's context will prevent merging different pieces of storage.
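To illustrate the point (this is a hypothetical sketch, not the Relay VM's actual data structures), tagging each piece of storage with its device context is what makes cross-device merging easy to rule out:

```python
from dataclasses import dataclass

# Hypothetical sketch, not the Relay VM implementation: storage carries its
# device context, and the allocator refuses to merge buffers whose contexts differ.
@dataclass(frozen=True)
class Storage:
    size: int
    device_type: str  # e.g. "cpu" or "gpu"
    device_id: int

def can_merge(a: Storage, b: Storage) -> bool:
    return (a.device_type, a.device_id) == (b.device_type, b.device_id)

print(can_merge(Storage(1024, "cpu", 0), Storage(1024, "gpu", 0)))  # False
```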

Re: [dmlc/tvm] [RFC] Dynamic Shape Support - Graph Dispatching (#4118)

2019-10-23 Thread Yao Wang
@soiferj For the ```full``` op, we can change the input shape argument to be a relay.Expr. We use hybrid script to register shape functions, since most of them are not easy to write as tensor expressions. We only add CPU versions of the shape functions and rely on heterogeneous execution for GPU.

Re: [dmlc/tvm] [RFC] Dynamic Shape Support - Graph Dispatching (#4118)

2019-10-23 Thread Yao Wang
@tqchen Sure. The dispatch function doesn't need to be coupled with relay::Module.

Re: [dmlc/tvm] [RFC][VM] Heterogeneous execution in Relay VM (#4178)

2019-10-23 Thread Wei Chen
@jroesch thanks. I have put references to the PR in the RFC.