Should we open an RFC to discuss how to port autotvm and topi to c++?
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/2685#issuecomment-478954829
# Auto-Scheduler
TVM decouples kernel implementation into compute and schedule. The compute part
is a friendly DSL that can describe algorithms intuitively. However, the
schedule part still requires strong expert knowledge and time-consuming tuning
to provide decent performance. The tuning proce
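The compute/schedule split described above can be illustrated with a toy sketch in plain Python (this is not the TVM DSL; all names here are mine): the compute fixes *what* is calculated, while a schedule only changes *how* the loops run, never the result.

```python
# Toy illustration of TVM's compute/schedule decoupling.
# "Compute" defines the math; a "schedule" is just a loop strategy.

def compute(a, b):
    # compute definition: elementwise sum, expressed as index -> value
    return lambda i: a[i] + b[i]

def schedule_naive(f, n):
    # schedule 1: one flat loop
    out = [0] * n
    for i in range(n):
        out[i] = f(i)
    return out

def schedule_tiled(f, n, tile=4):
    # schedule 2: tiled loops (the kind of choice tuning explores)
    out = [0] * n
    for io in range(0, n, tile):
        for ii in range(io, min(io + tile, n)):
            out[ii] = f(ii)
    return out

a = list(range(8))
b = list(range(8))
f = compute(a, b)
# different schedules, identical results
assert schedule_naive(f, 8) == schedule_tiled(f, 8)
```

Tuning, in this picture, is searching over schedule parameters (like `tile`) while the compute stays fixed.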
@jroesch Currently, it is about 500 loc per backend. I am working on
improvements so it may increase.
@yzhliu
* simple reduction: reduction ops that do not have a reuse opportunity (e.g.
softmax, argmin)
* complex reduction: reduction ops that have a reuse opportunity (e.g. matmul,
conv2d)
* direct
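The three-way classification above could be sketched as a simple lookup. The category names come from this thread; the op-to-class table and the `pick_meta_template` helper are hypothetical, shown only to make the dispatch idea concrete:

```python
# Hypothetical sketch: map ops to the compute classes described above,
# then choose a meta-template per class rather than per op.
COMPUTE_CLASS = {
    "softmax": "simple reduction",   # reduction without reuse opportunity
    "argmin":  "simple reduction",
    "matmul":  "complex reduction",  # reduction with input reuse (tiling pays off)
    "conv2d":  "complex reduction",
    "relu":    "direct",             # elementwise; typically inlined
    "add":     "direct",
}

def pick_meta_template(op_name):
    # unknown ops fall back to "direct" in this toy version
    return COMPUTE_CLASS.get(op_name, "direct")

assert pick_meta_template("matmul") == "complex reduction"
```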
# TVM Monthly - March 2019
https://discuss.tvm.ai/t/tvm-monthly-march-2019/2083
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-479389976
@jroesch There is no easy description for a backend. Currently these
meta-templates are mainly based on summarizing the existing hand-written
schedule code in TOPI, so adding a new backend is still hard. What can be
reused is the classification of compute types.
@kevinthesun There is only one template f
+1
--
https://github.com/dmlc/tvm/issues/2973#issuecomment-481145860
@eqy "injective" ops are considered "direct compute". Typically they will be inlined.
Serializable Template + Serializable Config seems to be a good direction to go.
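One minimal way to realize "serializable template + serializable config" is to keep a config as a flat dict of knob values, so it round-trips through JSON into a tuning log. The field names below are illustrative, not autotvm's actual log format:

```python
import json

# Sketch: a tuning config as plain data, serialized into one log line.
config = {"tile_x": 8, "tile_y": 16, "unroll": 4, "vectorize": True}
line = json.dumps({"template": "matmul_v1", "config": config})

# reading the log back recovers the exact config
restored = json.loads(line)
assert restored["config"] == config  # lossless round trip
```

The point is that nothing in the config refers to live Python objects, so logs stay portable across processes and tool versions.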
--
https://github.
Merged #3214 into master.
--
https://github.com/dmlc/tvm/pull/3214#event-2355284663
This PR adds Josh Pollock (@joshpoll) to the reviewer list of TVM. He has
contributed to the Relay text format and parser.
- [Commits](https://github.com/dmlc/tvm/commits?author=joshpoll)
- [Code
Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Ajoshpoll)
- [Forum Engage
@Lyken17 https://github.com/dmlc/tvm/issues/1585
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-519520425
This PR adds Balint Cristian (@cbalint13) to the reviewer list of TVM. He has
contributed to the Winograd schedules, autotvm, Relay, and the ONNX frontend.
- [Commits](https://github.com/dmlc/tvm/commits?author=cbalint13)
- [Code
Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by
Merged #3935 into master.
--
https://github.com/dmlc/tvm/pull/3935#event-2627226807
Hi @yangjunpro @hello-hzb ,
This project has been suspended for several months, and I won't continue my
work on the original branch.
However, the push for an auto-scheduler is still interesting to a lot of
people, so I might work on the auto-scheduler again with some Berkeley
students. We'd like to try different
Closed #4105.
--
https://github.com/apache/incubator-tvm/issues/4105#event-2825758382
Merged #4719 into master.
--
https://github.com/apache/incubator-tvm/pull/4719#event-2954452292
+1
--
https://github.com/apache/incubator-tvm/issues/5947#issuecomment-651300467
+1 I support this
--
https://github.com/apache/tvm/issues/15521#issuecomment-1689294442
+1
--
https://github.com/apache/tvm/issues/16368#issuecomment-1882046532
@yzhliu You are right. At that time, we thought `AlterOpLayout` did not have a
dependency problem and could be done in a single forward pass, so we tried to
do a lot of things in one pass, including operator substitution, layout
inference, and layout-transformation insertion. I agree t
# Motivation
The current autotvm requires pre-defined schedule templates. This makes autotvm
only semi-automated: the search is automated, but the search space has to be
manually defined by developers using the schedule templates. This approach has
several drawbacks:
1. The templates are har
Thanks for the discussion. Here are my thoughts.
### API Usage
The API for tuning a whole neural network will be the same as in autotvm
(extract tasks and tune all of them).
The API for writing templates is still under development, but it will be
similar to autotvm's.
### Performance in absolu
## Difference between the logs for Ansor and AutoTVM
There are two major differences between Ansor's log and autotvm's log:
1. The workload for Ansor is a subgraph defined by multiple `tvm.compute`,
while the workload for autotvm is a single operator.
To index log quickly, Ansor stores a hash
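The hashing idea mentioned above can be sketched as follows: canonicalize the workload description, then hash it, so log records are indexed by a short key instead of by comparing full subgraph definitions. The key format here is illustrative, not Ansor's actual scheme:

```python
import hashlib
import json

# Sketch: derive a short, stable lookup key from a workload description.
def workload_key(workload):
    # sort_keys gives a canonical string, so equal workloads hash equally
    canonical = json.dumps(workload, sort_keys=True)
    return hashlib.md5(canonical.encode()).hexdigest()

wk = {"op": "conv2d", "shape": [1, 64, 56, 56], "dtype": "float32"}
key = workload_key(wk)

# the key is independent of dict insertion order
assert key == workload_key(dict(reversed(list(wk.items()))))
```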
How is the compilation speed compared to the original TE?
In Ansor/Autotvm, we have to compile a lot of schedules for feature extraction,
so the speed of schedule transformation matters.
Do you have any benchmark results? Intuitively, I think the original TE will be
faster because it can do a
@jcf94 @junrushao1994 Sorry, neither of you understood my question correctly.
I mean that the original TE is a declarative language, so it knows all
transformations before it starts to generate the low-level AST, while the new
schedule primitives are applied imperatively. In the original TE, we can shar
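The declarative/imperative distinction raised above can be sketched like this: a declarative schedule only *records* transformation steps, so the full list is known (and can be analyzed, cached, or shared) before any lowering happens. This is a toy sketch, not TVM's actual schedule API:

```python
# Toy declarative schedule: primitives record steps instead of mutating IR.
class DeclarativeSchedule:
    def __init__(self):
        self.steps = []

    def split(self, axis, factor):
        # just record the intent; nothing is transformed yet
        self.steps.append(("split", axis, factor))
        return self

    def lower(self):
        # all steps are visible up front, enabling whole-schedule analysis
        return list(self.steps)

s = DeclarativeSchedule().split("i", 8).split("j", 4)
assert s.lower() == [("split", "i", 8), ("split", "j", 4)]
```

An imperative design would instead rewrite the AST inside each `split` call, so later passes could not see the remaining transformations in advance.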
## Motivation
Currently, TVM lacks an up-to-date and reproducible benchmark. The only
benchmark is hosted at
[tvm/apps/benchmark](https://github.com/apache/incubator-tvm/tree/main/apps/benchmark).
However, this benchmark is too old and has several flaws.
1. The results were obtained 2 years ag