Re: [dmlc/tvm] [RFC] Relay C++ Frontend (#2685)

2019-04-02 Thread Lianmin Zheng
Should we open an RFC to discuss how to port autotvm and topi to c++? -- You are receiving this because you are subscribed to this thread. Reply to this email directly or view it on GitHub: https://github.com/dmlc/tvm/issues/2685#issuecomment-478954829

[dmlc/tvm] [RFC][AUTOTVM] Auto-Scheduler from Compute Declaration (#2954)

2019-04-02 Thread Lianmin Zheng
# Auto-Scheduler TVM decouples kernel implementation into compute and schedule. The compute part is a friendly DSL that can describe algorithms intuitively. However, the schedule part still requires strong expert knowledge and time-consuming tuning to provide decent performance. The tuning proce
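The compute/schedule decoupling described above can be illustrated with a small plain-Python sketch (hypothetical code, not TVM's actual `te` API): the compute declares *what* to calculate, while the schedule decides *how* the loops are organized, and different schedules produce the same result.

```python
# Hypothetical sketch of TVM's compute/schedule split (not TVM's real API).
# The compute declaration says WHAT to calculate; a schedule says HOW the
# loops run. Two schedules of the same compute must agree on the result.

def vector_add_compute(a, b):
    """Compute declaration: C[i] = A[i] + B[i]."""
    return lambda i: a[i] + b[i]

def default_schedule(compute, n):
    """Naive schedule: a single flat loop over i."""
    return [compute(i) for i in range(n)]

def tiled_schedule(compute, n, tile=4):
    """Tiled schedule: the same compute, with the loop split into tiles."""
    out = []
    for start in range(0, n, tile):
        for i in range(start, min(start + tile, n)):
            out.append(compute(i))
    return out

a = list(range(8))
b = [10 * x for x in a]
c = vector_add_compute(a, b)
# Same compute, different loop structure, identical result.
assert default_schedule(c, 8) == tiled_schedule(c, 8)
```

An auto-scheduler searches over choices like the `tile` factor above (and far richer loop transformations) automatically, which is exactly the part that otherwise requires expert knowledge.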

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Lianmin Zheng
@jroesch Currently, it is about 500 loc per backend. I am working on improvements, so it may increase. @yzhliu * simple reduction: reduction ops that do not have a reuse opportunity (e.g. softmax, argmin) * complex reduction: reduction ops that have a reuse opportunity (e.g. matmul, conv2d) * direct
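The simple/complex reduction distinction can be made concrete by counting how many times each input element is read (an illustrative sketch, not code from the RFC): in a matmul, every element of A is reused across all output columns, while in a softmax-style row sum each input element is read exactly once.

```python
# Illustrative sketch: "reuse opportunity" means an input element is read
# many times during the reduction, so caching/tiling it pays off.
from collections import Counter

def matmul_reads(n):
    """Count reads of each A element in C[i,j] = sum_k A[i,k] * B[k,j]."""
    reads = Counter()
    for i in range(n):
        for j in range(n):
            for k in range(n):
                reads[("A", i, k)] += 1
    return reads

def row_sum_reads(n):
    """Count reads of each X element in s[i] = sum_j X[i,j] (softmax denominator)."""
    reads = Counter()
    for i in range(n):
        for j in range(n):
            reads[("X", i, j)] += 1
    return reads

n = 4
assert max(matmul_reads(n).values()) == n       # complex reduction: each A[i,k] reused n times
assert max(row_sum_reads(n).values()) == 1      # simple reduction: each X[i,j] read once
```

This is why matmul and conv2d benefit from schedules with explicit cache stages, while softmax and argmin do not.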

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-04-03 Thread Lianmin Zheng
# TVM Monthly - March 2019 https://discuss.tvm.ai/t/tvm-monthly-march-2019/2083 (https://github.com/dmlc/tvm/issues/2623#issuecomment-479389976)

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-04 Thread Lianmin Zheng
@jroesch There is no easy description for a backend. Currently, these meta-templates are mainly based on a summary of the existing human schedule code in TOPI, so adding a new backend is still hard. What can be reused is the classification of compute types. @kevinthesun There is only one template f

Re: [dmlc/tvm] [VOTE] Apache Transition Plan (#2973)

2019-04-09 Thread Lianmin Zheng
+1 (https://github.com/dmlc/tvm/issues/2973#issuecomment-481145860)

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-09 Thread Lianmin Zheng
@eqy "injective" is considered "direct compute". Typically they will be inlined. Serializable Template + Serializable Config seems to be a good direction to go.

Re: [dmlc/tvm] [Community] @antinucleon -> Reviewer (#3214)

2019-05-21 Thread Lianmin Zheng
Merged #3214 into master. (https://github.com/dmlc/tvm/pull/3214#event-2355284663)

[dmlc/tvm] [Community] @joshpoll -> Reviewer (#3412)

2019-06-21 Thread Lianmin Zheng
This PR adds Josh Pollock (@joshpoll) to the reviewer list of TVM. He has contributed to the Relay text format and parser. - [Commits](https://github.com/dmlc/tvm/commits?author=joshpoll) - [Code Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by%3Ajoshpoll) - [Forum Engage

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-08-08 Thread Lianmin Zheng
@Lyken17 https://github.com/dmlc/tvm/issues/1585 (https://github.com/dmlc/tvm/issues/2623#issuecomment-519520425)

[dmlc/tvm] [Community] Add reviewer Balint Cristian (#3935)

2019-09-11 Thread Lianmin Zheng
This PR adds Balint Cristian (@cbalint13) to the reviewer list of TVM. He has contributed to the Winograd schedule, AutoTVM, Relay, and the ONNX frontend. - [Commits](https://github.com/dmlc/tvm/commits?author=cbalint13) - [Code Review](https://github.com/dmlc/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by

Re: [dmlc/tvm] [Community] Add reviewer Balint Cristian (#3935)

2019-09-11 Thread Lianmin Zheng
Merged #3935 into master. (https://github.com/dmlc/tvm/pull/3935#event-2627226807)

Re: [apache/incubator-tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-11-13 Thread Lianmin Zheng
Hi @yangjunpro @hello-hzb, this project has been suspended for several months, and I won't continue my work on the original branch. However, the push for an auto-scheduler is still interesting to a lot of people, so I might work on an auto-scheduler again with some Berkeley students. We'd like to try different

Re: [apache/incubator-tvm] [RFC] Auto TensorCore CodeGen (#4105)

2019-11-23 Thread Lianmin Zheng
Closed #4105. (https://github.com/apache/incubator-tvm/issues/4105#event-2825758382)

Re: [apache/incubator-tvm] [COMMUNITY] @FrozenGene -> committer (#4719)

2020-01-16 Thread Lianmin Zheng
Merged #4719 into master. (https://github.com/apache/incubator-tvm/pull/4719#event-2954452292)

Re: [apache/incubator-tvm] [VOTE] Release Apache TVM (incubating) v0.6.1.rc1 (#5947)

2020-06-29 Thread Lianmin Zheng
+1 (https://github.com/apache/incubator-tvm/issues/5947#issuecomment-651300467)

Re: [apache/tvm] [VOTE] Clarify Community Strategy Decision Process (Issue #15521)

2023-08-22 Thread Lianmin Zheng
+1 I support this (https://github.com/apache/tvm/issues/15521#issuecomment-1689294442)

Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-08 Thread Lianmin Zheng
+1 (https://github.com/apache/tvm/issues/16368#issuecomment-1882046532)

[TVM Discuss] [Development] [DISCUSS] and Documentation of AlterLayout Pass

2019-03-20 Thread Lianmin Zheng via TVM Discuss
@yzhliu You are right. At that time, we thought `AlterOpLayout` did not have a dependency problem and could be done in a single forward pass, so we tried to do a lot of things in a single pass, including operator substitution, layout inference, and layout-transformation insertion. I agree t

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-17 Thread Lianmin Zheng via TVM Discuss
# Motivation The current autotvm requires pre-defined schedule templates. This makes autotvm only semi-automated: the search is automated, but the search space has to be manually defined by developers using the schedule templates. This approach has several drawbacks: 1. The templates are har

[TVM Discuss] [Development/RFC] [RFC] Ansor: An Auto-scheduler for TVM (AutoTVM v2.0)

2020-06-18 Thread Lianmin Zheng via TVM Discuss
Thanks for the discussion. Here are my thoughts. ### API Usage The API for tuning a whole neural network will be the same as autotvm (extract tasks and tune all of them). The API for writing templates is still under development, but it will be similar to autotvm. ### Performance in absolu

[TVM Discuss] [RFC] Canonicalizing AutoTVM Log Format

2020-06-25 Thread Lianmin Zheng via TVM Discuss
## Difference between the logs for Ansor and AutoTVM There are two major differences between Ansor's log and AutoTVM's log: 1. The workload for Ansor is a subgraph defined by multiple `tvm.compute` calls, while the workload for AutoTVM is a single operator. To index logs quickly, Ansor stores a hash
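The subgraph-hashing idea can be sketched in a few lines (an illustrative example, not Ansor's actual log format or hashing scheme): serialize the subgraph description deterministically, then use a short stable hash as the log-index key.

```python
# Illustrative sketch (not Ansor's real log format): a multi-op subgraph
# workload is keyed by a stable hash of its serialized description, which
# makes log lookup fast regardless of how large the subgraph is.
import hashlib
import json

def workload_key(subgraph_desc):
    """Serialize the subgraph deterministically and hash it for indexing."""
    serialized = json.dumps(subgraph_desc, sort_keys=True)
    return hashlib.sha256(serialized.encode()).hexdigest()[:16]

subgraph = {
    "ops": [
        {"op": "conv2d", "shape": [1, 64, 56, 56]},
        {"op": "relu",   "shape": [1, 64, 56, 56]},
    ]
}
key = workload_key(subgraph)
assert key == workload_key(subgraph)  # deterministic across calls
assert len(key) == 16
```

By contrast, a single-operator AutoTVM workload can be keyed directly by the operator name and its argument shapes, with no hashing needed.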

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-21 Thread Lianmin Zheng via Apache TVM Discuss
How does the compilation speed compare to the original TE? In Ansor/AutoTVM, we have to compile a lot of schedules for feature extraction, so the speed of schedule transformation matters. Do you have any benchmark results? Intuitively, I think the original TE will be faster because it can do a

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-22 Thread Lianmin Zheng via Apache TVM Discuss
@jcf94 @junrushao1994 Sorry, neither of you understood my question correctly. I mean that the original TE is a declarative language, so it knows all transformations before it starts to generate the low-level AST. But the new schedule primitives are applied imperatively. In the original TE, we can shar
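The declarative-vs-imperative distinction being drawn here can be caricatured in a few lines (a purely hypothetical sketch, matching neither TE's nor TensorIR's real implementation): a declarative schedule only records primitives and materializes the loop nest once at the end, while an imperative schedule rebuilds the AST after every primitive.

```python
# Hypothetical sketch of why declarative scheduling can be cheaper: the
# declarative style batches all primitives into one lowering pass, while the
# imperative style pays one AST rebuild per primitive.

class DeclarativeSchedule:
    def __init__(self):
        self.primitives = []
        self.lower_calls = 0

    def apply(self, primitive):
        self.primitives.append(primitive)  # just record; no AST work yet

    def lower(self):
        self.lower_calls += 1              # one pass handles all primitives
        return list(self.primitives)

class ImperativeSchedule:
    def __init__(self):
        self.primitives = []
        self.lower_calls = 0

    def apply(self, primitive):
        self.primitives.append(primitive)
        self.lower_calls += 1              # rebuild the AST immediately

    def lower(self):
        return list(self.primitives)

decl, imp = DeclarativeSchedule(), ImperativeSchedule()
for p in ["split", "reorder", "vectorize"]:
    decl.apply(p)
    imp.apply(p)
decl.lower()
assert decl.lower_calls == 1  # declarative: one lowering for all primitives
assert imp.lower_calls == 3   # imperative: one rebuild per primitive
```

For an auto-scheduler that generates thousands of candidate schedules for feature extraction, this per-primitive cost difference is exactly why the question about transformation speed matters.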

[Apache TVM Discuss] [Development/RFC] [RFC] Building a new reproducible benchmark for TVM

2020-11-20 Thread Lianmin Zheng via Apache TVM Discuss
## Motivation Currently, TVM lacks an up-to-date and reproducible benchmark. The only benchmark is hosted at [tvm/apps/benchmark](https://github.com/apache/incubator-tvm/tree/main/apps/benchmark). However, this benchmark is too old and has several flaws. 1. The results were obtained 2 years ag