Closed #2659.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/2659#event-2215789315
Yes, I agree that we should just keep the explicit casting.
--
https://github.com/dmlc/tvm/issues/2659#issuecomment-474697807
# Register VM Design
The current Relay VM RFC (#2810) proposes a stack-based VM design that uses push
and pop to maintain a unified stack. Though the design itself is simple to
implement, it is cumbersome for tasks such as dataflow analysis and enforces a
certain execution order.
I propose an alternative VM design that uses registers instead of a stack. We
can discuss and compare which one works better.
cc @jroesch @tqchen @wweic @zhiics @MarisaKirisame @junrushao1994 @abergeron
@ajtulloch @antinucleon @Ravenwater
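The contrast between the two designs can be sketched with a toy interpreter (hypothetical opcodes, not the actual Relay VM ISA): the stack machine's implicit operand stack fixes the execution order, while the register machine names every source and destination, so dataflow is explicit and instructions are easier to analyze and reorder.

```python
# Hypothetical sketch: the expression "c = a + b" in both bytecode styles.

# Stack-based: operands flow through an implicit stack.
stack_program = [
    ("push", "a"),
    ("push", "b"),
    ("add",),          # pops two values, pushes the sum
    ("store", "c"),
]

# Register-based: each instruction names its sources and destination.
register_program = [
    ("add", "r2", "r0", "r1"),  # r2 = r0 + r1
]

def run_stack(prog, env):
    stack = []
    for ins in prog:
        op = ins[0]
        if op == "push":
            stack.append(env[ins[1]])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "store":
            env[ins[1]] = stack.pop()
    return env

def run_register(prog, regs):
    for op, dst, src1, src2 in prog:
        if op == "add":
            regs[dst] = regs[src1] + regs[src2]
    return regs
```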
--
+1
--
https://github.com/dmlc/tvm/issues/2973#issuecomment-480992101
+1
--
https://github.com/dmlc/tvm/issues/2994#issuecomment-481432475
I propose two changes to instructions.
- `AllocTensor` uses a shape stored in a register instead of hardcoding the
shape in the instruction. This can help support dynamic shapes in the VM in
the future. We can store constant shapes in the constant list and use
`LoadConst` to load them.
- Change `Phi` to
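The first change can be illustrated with a toy interpreter (the instruction names mirror the proposal, but the encoding and interpreter are hypothetical): the shape travels through the constant pool and a register rather than being baked into `AllocTensor`.

```python
import numpy as np

# Hypothetical sketch, not the real Relay VM: AllocTensor reads its shape
# from a register, so the shape may be produced at runtime; static shapes
# live in the constant pool and are brought in with LoadConst.
constants = [np.array([2, 3], dtype="int64")]  # constant pool

program = [
    ("LoadConst", 0, "r0"),       # r0 = constants[0] (the shape)
    ("AllocTensor", "r0", "r1"),  # r1 = new tensor with the shape in r0
]

def run(program, constants):
    regs = {}
    for ins in program:
        if ins[0] == "LoadConst":
            _, idx, dst = ins
            regs[dst] = constants[idx]
        elif ins[0] == "AllocTensor":
            _, shape_reg, dst = ins
            regs[dst] = np.empty(tuple(regs[shape_reg]), dtype="float32")
    return regs
```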
This PR adds @antinucleon to the reviewer list of TVM. He has contributed to
the TVM and Relay runtime.
- [Commits](https://github.com/dmlc/tvm/commits?author=antinucleon)
- [Code
Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Aantinucleon)
- [Community Engagement](https
# TVM Monthly - May 2019
https://discuss.tvm.ai/t/tvm-monthly-may-2019/2793
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-498106945
As we now have a pass manager in Relay optimization and mostly use a module as
the optimization unit, I suggest we change the Relay frontend converters to
return a module instead of a function. The change also makes it easier to add
the prelude and global vars in the frontend converters in the future. Frontend converter
I'm working on a PR that separates the VM Object from the Interpreter value
(#3391). After the PR, the VM will directly return an Object to Python instead
of converting it to an Interpreter value. Currently I haven't dealt with
`ClosureObject` since it won't appear in the return value.
--
Merged #3412 into master.
--
https://github.com/dmlc/tvm/pull/3412#event-2431035620
One question I have is whether we should put the `length` of *all fields* at
the beginning, or just put the `length` of *one array field* `val*` immediately
before that field. It's possible that we may have an instruction with more than
one array field in the future. However, the current d
@zhiics I mean, if there are two fields with variable length, though not very
likely, how do you plan to support that? Also, do we expect every field to have
the same data type? Another point Marisa is making is that we don't need to
store a `length` for every instruction, since many have a fixed length.
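One of the layouts discussed, the `length` placed immediately before the single variable-length field, can be sketched as follows (an illustrative format, not the real TVM serialization):

```python
import struct

# Illustrative sketch: encode an instruction whose last field is a
# variable-length array of int64 values. The array is prefixed with its
# own length, placed immediately before the field.
def encode(opcode, dst, fields):
    buf = struct.pack("<qq", opcode, dst)         # fixed-length fields
    buf += struct.pack("<q", len(fields))         # length of the array field
    buf += struct.pack(f"<{len(fields)}q", *fields)
    return buf

def decode(buf):
    opcode, dst, n = struct.unpack_from("<qqq", buf, 0)
    fields = struct.unpack_from(f"<{n}q", buf, 24)
    return opcode, dst, list(fields)
```

Supporting a second variable-length field would mean either repeating this length-prefix scheme per field, or hoisting all lengths to the front as the alternative layout proposes.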
--
This PR adds @slyubomirsky to the reviewer list of tvm. He has been
contributing to many core features in Relay.
- [Commits](https://github.com/dmlc/tvm/commits?author=slyubomirsky)
- [Code
Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Aslyubomirsky)
- [Community Engage
This PR adds @MarisaKirisame to the reviewer list of TVM. He has been
contributing to Relay IR, passes, and ADT.
- [Commits](https://github.com/dmlc/tvm/commits?author=MarisaKirisame)
- [Code
Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3AMarisaKirisame)
- [Community En
# TVM Monthly - July 2019
https://discuss.tvm.ai/t/tvm-monthly-july-2019
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-520475436
This PR adds Luis Vega (@vegaluisjose) to the reviewer list of TVM. He has been
contributing to VTA and TSIM.
- [Commits](https://github.com/dmlc/tvm/commits?author=vegaluisjose)
- [Code
Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Avegaluisjose)
- [Community Engagemen
Merged #3997 into master.
--
https://github.com/dmlc/tvm/pull/3997#event-2660143912
Merged #4043 into master.
--
https://github.com/dmlc/tvm/pull/4043#event-2678553862
# TVM Monthly - September 2019
https://discuss.tvm.ai/t/tvm-monthly-september-2019
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-538074210
I have one question about `_type_child_slots`. If the child class is a base
class for others and also defines `_type_child_slots`, will you check whether
it overflows its parent's `_type_child_slots`?
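The overflow scenario in the question can be made concrete with a toy registry (illustrative code, not the actual TVM type system): a parent reserves a fixed range of type indices for subclasses, and a child that itself reserves slots can exhaust that range.

```python
# Toy sketch of slot-based type-index allocation: each class reserves
# `child_slots` indices for its subclasses. A child that does not fit in
# its parent's remaining range overflows and registration fails.
class TypeTable:
    def __init__(self):
        self.next_global = 0
        self.info = {}  # name -> {"end": range end, "next_child": cursor}

    def register(self, name, parent=None, child_slots=0):
        need = 1 + child_slots  # the class's own index plus reserved slots
        if parent is None:
            begin = self.next_global
            self.next_global += need
        else:
            p = self.info[parent]
            begin = p["next_child"]
            if begin + need > p["end"]:
                raise RuntimeError(
                    f"{name} overflows the child slots of {parent}")
            p["next_child"] = begin + need
        self.info[name] = {"end": begin + need, "next_child": begin + 1}
        return begin  # the assigned type index
```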
--
+1
--
https://github.com/dmlc/tvm/issues/4162#issuecomment-544583390
@soiferj
1. A shape function is used to compute the output shape(s) of an op at runtime
when they cannot be determined at compile time. And yes, for now, we have to
register a shape function for all ops to support dynamic shapes.
2. We could do this. But we need to change the attribute of `full`
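The idea of a shape function can be sketched for an `arange`-like op, whose output shape depends on input *values* and so is only known at runtime (illustrative helper names, not the real TVM shape-function API):

```python
import numpy as np

# Sketch: the op itself, and a companion shape function that computes the
# output shape at runtime from the actual input values.
def arange_op(start, stop, step):
    return np.arange(start, stop, step)

def arange_shape_func(start, stop, step):
    # Number of elements is ceil((stop - start) / step), clamped at zero.
    return (max(0, int(np.ceil((stop - start) / step))),)
```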
@mbarrett97 I wonder why not just use transfer learning in AutoTVM. With
transfer learning, AutoTVM will skip the tasks that have been tried before. See
the example at
https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html#begin-tuning
--
This PR welcomes @weberlo as a new reviewer of TVM. He has contributed to uTVM
and Relay ADT.
- [Commits](https://github.com/dmlc/tvm/commits?author=weberlo)
-
[Reviews](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Aweberlo)
- [Community forum engagement](https://discuss.tvm.a
+1
--
https://github.com/apache/incubator-tvm/issues/4443#issuecomment-559610219
Yes, I agree we should have a general `AssertExpr` instead of
`AssertLowerBound`, and put the asserted condition in the expr.
--
https://github.com/apache/incubator-tvm/issues/44
Merged #4636 into master.
--
https://github.com/apache/incubator-tvm/pull/4636#event-2925568847
Please join us to welcome Marisa Kirisame (@MarisaKirisame) as a committer of
TVM. Marisa has a strong background and knowledge in programming languages and
compilers. He has contributed to many aspects of Relay and TVM, including the
high-level Relay IR, the partial evaluator, the higher-order/first-order AD
pass, AD
Merged #4645 into master.
--
https://github.com/apache/incubator-tvm/pull/4645#event-2929677304
Not for release purposes. This is only for convenience, to differentiate TVM
with and without op strategy.
cc @tqchen @comaniac
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/4941
-- Commit Summary --
* bump up dev version
@comaniac Thanks for the reminder. Updated now.
--
https://github.com/apache/incubator-tvm/pull/4941#issuecomment-591171149
Merged #5064 into master.
--
https://github.com/apache/incubator-tvm/pull/5064#event-3128926179
+1
--
https://github.com/apache/incubator-tvm/issues/5102#issuecomment-601398752
Merged #5790 into master.
--
https://github.com/apache/incubator-tvm/pull/5790#event-3439973095
Merged #5817 into master.
--
https://github.com/apache/incubator-tvm/pull/5817#event-3445991608
Please join us to welcome @mbrookhart as a new reviewer of the TVM community.
He has been actively contributing to non-recursive graph visitor, Relay pattern
language and matcher, and ONNX frontend conversions.
- [Commits](https://github.com/apache/incubator-tvm/commits?author=mbrookhart)
- [Cod
Merged #6290 into master.
--
https://github.com/apache/incubator-tvm/pull/6290#event-3663875063
+1
The TVM community is getting good traction. Super excited to be part of the
community.
--
https://github.com/apache/incubator-tvm/issues/6299#issuecomment-676830190
+1 (binding)
--
https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679411683
Ack
On Thu, Aug 27, 2020 at 4:38 PM Thierry Moreau wrote:
> Ack
>
> > On Aug 27, 2020, at 4:36 PM, YiZhi Liu wrote:
> > Ack
> > On Thu, Aug 27, 2020 at 4:09 PM Tianqi Chen wrote:
> >> Ack:)
> >> TQ
> >> On Thu, Aug 27, 2020 at 4:05 PM Byung-G
+1
--
https://github.com/apache/tvm/issues/7991#issuecomment-833926694
The Halide doc provides the definitions of "intrinsic" and "extern" functions:
https://halide-lang.org/docs/struct_halide_1_1_internal_1_1_call.html#a45d847325694df85e74150f770c1e393
"Pure" just means that the function is side-effect-free.
---
[Visit
Topic](https://discuss.tvm.ai/t/should-we-u
Yes, that sounds good to me.
---
[Visit Topic](https://discuss.tvm.ai/t/performing-relay-passes-non-recursively/5696/21) to respond.
Currently the Relay `gather_nd` op uses the [mxnet
semantics](https://mxnet.apache.org/api/python/docs/api/ndarray/op/index.html#mxnet.ndarray.op.gather_nd),
in which each column in `indices` indicates the indices into `data`. However,
TensorFlow's
[`gather_nd`](https://www.tensorflow.org/api_docs/python
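The difference between the two semantics can be shown with plain numpy indexing (this mimics both ops rather than calling either framework):

```python
import numpy as np

data = np.array([[1, 2],
                 [3, 4]])
indices = np.array([[0, 0],
                    [1, 0]])

# MXNet-style (current Relay semantics): each *column* of `indices` is one
# coordinate; indices[0] holds the axis-0 coordinates, indices[1] the
# axis-1 coordinates.
mx_result = data[indices[0], indices[1]]        # data[0,1], data[0,0] -> [2, 1]

# TensorFlow-style: each *row* (the last axis) of `indices` is one
# coordinate tuple.
tf_result = data[indices[:, 0], indices[:, 1]]  # data[0,0], data[1,0] -> [1, 3]
```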
I stand with Tianqi on the `target_host` attribute, as it encapsulates the
information required to compile for a device and can simplify the
transformation passes in the TVM stack. I have a few questions about the new
target specification.
1. How will the generic function and dispatching work wi
Keys are an important field in the target that makes other modules work. Since
a target can be created from JSON, I'm worried that if people forget to add
certain keys to the target, it might cause some undesired behavior.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-tvm-target-specification/6
Agree with @junrushao1994. I think we should use fp32 as the default instead of
fp64, as it's more common in deep learning.
---
[Visit Topic](https://discuss.tvm.ai/t/discuss-the-meaning-of-float-in-relay/6949/5) to respond.
I'm also in favor of the A1 approach. I have one more question about dynamic
ops. Currently Relay allows using a symbolic var to represent a dimension. In
the world of A1, if an attribute contains a symbolic var, such as the new shape
in `reshape`, are we treating the op as a dynamic op or a static op?
I agree with @tqchen. Probably we should wait and see what the Ansor log looks
like and include it in the design. We could have @merrymercy comment on this.
At a high level, I suggest we have five fields: **target**, workload, config,
results, version. The only change is taking the target out o
Probably we can also canonicalize the target (e.g., as a protobuf buffer)
instead of a string. We can refer to the target format in
https://discuss.tvm.ai/t/rfc-tvm-target-specification/6844. @tqchen
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-canonicalizing-autotvm-log-format/7038/9) to respond.
You can follow the example in VTA to override the implementation for a
specific target.
https://github.com/apache/incubator-tvm/blob/master/vta/python/vta/top/op.py#L63
---
[Visit Topic](https://discuss.tvm.ai/t/relay-op-strategy/5247/21) to respond.
Yes, @tqchen and I will post an RFC soon for the binary distribution.
---
[Visit Topic](https://discuss.tvm.apache.org/t/make-binary-distributation/7867/4) to respond.
So far we have only released source code in past TVM releases, and we will
continue to do so.
As we continue to develop TVM, we also see demand for convenient binary
packages, such as wheels or Docker binaries. One important factor for such
binary packages is the potential links with third part
The wheels are built on a newer version of CentOS. The pip wheel for CPU is
manylinux2010-compatible, and the wheels for CUDA are manylinux2014-compatible.
Releasing wheels with different CUDA versions is to accommodate different
development and deployment environments.
---
[Visit
Topic](https://dis
Sure, I think you can add this to the `SimplifyExpr` pass. I thought about this
before, but the reshape op might be inlined into the fused op.
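The simplification itself is straightforward to demonstrate (a numpy toy, not the actual `SimplifyExpr` pass):

```python
import numpy as np

# Two back-to-back reshapes are equivalent to a single reshape to the
# final shape, so the intermediate reshape can be dropped when nothing
# else consumes its result.
x = np.arange(12)
via_two = x.reshape(3, 4).reshape(2, 6)  # reshape(reshape(x, s1), s2)
direct = x.reshape(2, 6)                 # reshape(x, s2)
assert (via_two == direct).all()
```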
---
[Visit Topic](https://discuss.tvm.apache.org/t/remove-extra-reshape/8212/2) to
respond.
You are receiving this because you enable
I have one question about `use_topi_schedule`. I assume that after we set it to
False, it will always use the Ansor scheduler to schedule the ops. Will there
be a case where we want to have a mix of TOPI schedules and Ansor schedules?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/rfc-a-gen
I propose renaming `TVMContext` to `TVMDevice`. Currently, `TVMContext` is used
to represent the device that a model is executed on. Two main reasons for this
change:
1. The name `TVMContext` doesn't intuitively reflect its semantics of a device.
2. Mainstream frameworks including
[
1. TVM uses
[RPCSession](https://tvm.apache.org/docs/api/python/rpc.html#tvm.rpc.RPCSession.context)
to create a remote context/device. We can also change this API to `device`.
2. In fact, I also plan to create an RFC for dlpack for the context name change.
---
[Visit
Topic](https://discuss
I don't know much about µTVM. I need to take a look before commenting on this.
I think we can make the change in TVM first and change it again after the
change is pushed to DLPack. Because there will be more dependencies in DLPack,
it will probably take a longer time to change it there.
You can also check out
I think we can keep `TVMDevice` in the C++ backend but use `tvm.device` in the
Python frontend. Keeping the `TVM` prefix can reduce confusion when integrating
TVM into other frameworks.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/rfc-rename-tvmcontext-to-tvmdevice/9090/1
Since everyone agrees on A0, I'll go ahead and prepare the PR next week.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-tvmcontext-to-tvmdevice/9090/27) to respond.
As per the discussion, I will prepare the PR to change the API to `numpy` and
add a deprecation warning message to `asnumpy`.
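The deprecation pattern described can be sketched like this (an illustrative class, not the real `tvm.nd.NDArray`): `numpy()` becomes the supported API, while `asnumpy()` keeps working but warns.

```python
import warnings
import numpy as np

# Hypothetical array wrapper showing the rename-with-deprecation pattern.
class NDArrayLike:
    def __init__(self, data):
        self._data = np.asarray(data)

    def numpy(self):
        # The new, supported API.
        return self._data.copy()

    def asnumpy(self):
        # The old API keeps working but emits a DeprecationWarning.
        warnings.warn(
            "asnumpy() is deprecated; use numpy() instead",
            DeprecationWarning,
        )
        return self.numpy()
```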
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-api-change-asnumpy-numpy/9846/9) to respond.