Thanks a lot for working on this; it is going to be really impactful,
especially toward supporting NLP models. I have a couple of questions:
1. Can you please explain the shape function in a little more detail? What
exactly is its purpose? Will it have to be registered for every op?
2. Some ops, such as `full`, take their output shape as an attribute rather
than as an input. How will those be handled?
Overview
-
When a user wants to use AutoTVM to tune a model, she often lets AutoTVM tune
every task extracted from the model sequentially. Assuming each task requires
an hour or so, tuning a model with 10 to 100+ tasks takes days. This RFC
proposes a lightweight solution to reduce tuning time.
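For context, this is the baseline flow the RFC aims to shorten. Below is a
minimal sketch, assuming the standard AutoTVM tuning API; `mod` and `params`
would come from a Relay frontend, and the op list and trial count are
illustrative:

```python
from tvm import autotvm, relay

# Extract one tuning task per tunable workload in the model.
tasks = autotvm.task.extract_from_program(
    mod["main"], target="llvm", params=params,
    ops=(relay.op.get("nn.conv2d"),))

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10))

# Baseline: tune every task sequentially. At roughly an hour per task,
# 10 to 100+ tasks adds up to days of tuning time.
for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(n_trial=1000,
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("tuning.log")])
```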
Thanks for the proposal. One high-level comment: ideally we want to keep the
module API minimal, and move transformation-like operations to the transform
namespace :)
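A small illustration of that style, assuming the existing pass infrastructure
(the passes chosen here are just examples): transformations are free-standing
passes under the transform namespace and are applied to a module, rather than
being methods on the module itself.

```python
from tvm import relay

# Build a trivial module to run passes on.
x = relay.var("x", shape=(1, 10))
mod = relay.Module.from_expr(relay.nn.relu(x))

# Transformation-like operations live in relay.transform and take the
# module as input, keeping the module API itself minimal.
mod = relay.transform.FoldConstant()(mod)
mod = relay.transform.FuseOps(fuse_opt_level=2)(mod)
```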
# Node System Refactor Proposal
This proposal is part of the unified object protocol RFC. The node refers to
the base object structure for constructing AST/IR nodes as well as utilities
manipulating them in the TVM stack.
Historically, the node folder contains an implementation of a smart pointer
that manages node objects.
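To make the role of the node system concrete, here is a small illustration,
assuming the existing Python bindings (the specific ops and fields are just
examples): every AST/IR object is a C++ node exposed to Python through this
system, and its registered fields are accessible via reflection.

```python
from tvm import relay

# AST/IR objects are C++ nodes managed by the node system's smart
# pointer; the Python objects below are thin wrappers around them.
x = relay.var("x", shape=(1, 10))
f = relay.Function([x], relay.nn.relu(x))

# Fields registered on a node are visible through reflection, which is
# what generic utilities (printing, serialization, hashing) build on.
print(x.name_hint)   # field of VarNode
print(f.params)      # field of FunctionNode
```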
@soiferj
1. A shape function is used to compute the output shape(s) of an op at
runtime, when they cannot be determined at compilation time. And yes, for now,
we have to register a shape function for every op to support dynamic shape (a
sketch follows this list).
2. We could do this, but we need to change the shape attribute of `full`; see
the follow-up comment below.
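As a concrete illustration of point 1, here is a minimal sketch of registering
a shape function, assuming the hybrid-script registration API from the
dynamic-shape work (`my_op` and the helper name are hypothetical):

```python
from tvm.hybrid import script
from tvm.relay.op import register_shape_func

# For data-independent ops, the shape function receives the *shapes* of
# the inputs as 1-D int64 tensors and returns the output shape(s).
@script
def _same_shape_func(data_shape):
    ndim = data_shape.shape[0]
    out = output_tensor((ndim,), "int64")
    for i in const_range(ndim):
        out[i] = data_shape[i]
    return out

def my_op_shape_func(attrs, inputs, out_ndims):
    return [_same_shape_func(inputs[0])]

# False => the output shape depends only on input shapes, not values.
register_shape_func("my_op", False, my_op_shape_func)
```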
I think if we look at my recent PR, we probably need to track the device
context when we allocate storage. A storage token's context would then prevent
the planner from merging pieces of storage that live on different devices.
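To illustrate the idea (this is a hypothetical sketch, not TVM's actual memory
planner code): if every storage token records the context it was allocated on,
the planner has a natural guard against merging buffers across devices.

```python
from dataclasses import dataclass

@dataclass
class StorageToken:
    # Hypothetical stand-ins: TVM would use a TVMContext rather than a
    # plain string, and sizes come from the storage planning pass.
    size: int
    ctx: str  # e.g. "cpu" or "gpu0"

def can_merge(a: StorageToken, b: StorageToken) -> bool:
    # Storage on different devices must never be merged.
    return a.ctx == b.ctx
```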
@soiferj For the `full` op, we can change the input shape argument to be a
relay.Expr. We use hybrid script to register shape functions, since most of
them are not easy to write as tensor expressions. We only add CPU versions of
the shape functions, and rely on heterogeneous execution for GPU.
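For the data-dependent case, a hedged sketch of what the hybrid-script shape
function for `full` could look like once the shape is a relay.Expr input
(details such as the exact input ordering are assumptions):

```python
from tvm.hybrid import script
from tvm.relay.op import register_shape_func

# `full` produces a tensor whose shape is the *value* of its shape
# input, so the shape function must read that value at runtime.
@script
def _full_shape_func(shape):
    ndim = shape.shape[0]
    out = output_tensor((ndim,), "int64")
    for i in const_range(ndim):
        out[i] = int64(shape[i])
    return out

def full_shape_func(attrs, inputs, out_ndims):
    # Assuming inputs = [fill_value, shape] once shape becomes an input.
    return [_full_shape_func(inputs[1])]

# True => data dependent: the shape function needs input values.
register_shape_func("full", True, full_shape_func)
```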
@tqchen Sure. The dispatch function doesn't need to be coupled with
relay::Module.
@jroesch Thanks. I have added references to the PR in the RFC.