Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Thierry Moreau
Thanks @merrymercy, this is really awesome work. I second Jared's comment on the work involved in adding a backend. I'd be happy to chat some more about how one would add automated compilation for different hardware accelerators, including VTA.

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Lianmin Zheng
@jroesch Currently, it is about 500 LOC per backend. I am working on improvements, so it may increase. @yzhliu
* simple reduction: reduction ops that have no reuse opportunity (e.g. softmax, argmin)
* complex reduction: reduction ops that have a reuse opportunity (e.g. matmul, conv2d)
* direct …
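For readers who want to see the distinction in code, here is a minimal sketch of the two reduction categories as TVM compute declarations. It uses the modern tvm.te API (module paths differed in 2019-era TVM), and the shapes and names are illustrative, not taken from the thread.

```python
import tvm
from tvm import te

n, m, k = 128, 128, 128

# Complex reduction: matmul reuses A[i, kk] across every j and
# B[kk, j] across every i, so tiling/caching that reuse is worth
# searching over in a schedule.
A = te.placeholder((n, k), name="A")
B = te.placeholder((k, m), name="B")
kk = te.reduce_axis((0, k), name="kk")
C = te.compute((n, m), lambda i, j: te.sum(A[i, kk] * B[kk, j], axis=kk), name="C")

# Simple reduction: each element of X feeds exactly one output, so
# there is no data reuse to exploit beyond organizing the reduction
# itself (e.g. the max step inside softmax, or argmin).
X = te.placeholder((n, m), name="X")
j = te.reduce_axis((0, m), name="j")
row_max = te.compute((n,), lambda i: te.max(X[i, j], axis=j), name="row_max")
```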

Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-04-03 Thread Lianmin Zheng
# TVM Monthly - March 2019 https://discuss.tvm.ai/t/tvm-monthly-march-2019/2083

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Jared Roesch
@merrymercy I'm less interested in LOC and more in how much conceptual burden there is. My question is really about what key pieces make up a backend description. I looked over the code, but I was at SysML and have two deadlines this week, so I haven't had a chance to really dig into it. Look f…

[TVM Discuss] [Development] Google's latest work: MLIR Primer

2019-04-03 Thread Ehsan M Kermani via TVM Discuss
Open sourced: https://github.com/tensorflow/mlir

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Yao Wang
@merrymercy The auto-scheduler will create another search space consisting of schedule templates. For a given set of hardware parameters, it will try various schedule templates and, for each template, do some auto-tuning on a real device. This means that for each minor device type, we need to do all these steps …
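As a rough illustration of the two-level search described above, here is a hypothetical Python sketch: an outer loop over generated schedule templates and an inner AutoTVM-style tuning loop over each template's knobs, measured on the target device. The helper names (generate_templates, candidate_knobs, instantiate) are made up for illustration and are not real TVM APIs.

```python
# Hypothetical sketch of the two-level search flow; the helpers
# generate_templates / candidate_knobs / instantiate are illustrative
# stand-ins, not real TVM APIs.
def auto_schedule(compute_decl, hw_params, measure_on_device):
    """Pick the best schedule by searching templates, then knobs."""
    best_cost = float("inf")
    best_schedule = None
    # Outer search space: schedule templates generated from the
    # compute declaration and the hardware parameters.
    for template in generate_templates(compute_decl, hw_params):
        # Inner search space: AutoTVM-style tuning of this template's
        # knobs (tile sizes, unroll factors, ...), measured on device.
        for knobs in template.candidate_knobs():
            schedule = template.instantiate(knobs)
            cost = measure_on_device(schedule)
            if cost < best_cost:
                best_cost, best_schedule = cost, schedule
    return best_schedule
```

The sketch makes the cost concern concrete: every minor device variant re-runs both loops, so on-device measurement time multiplies with the number of targets.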

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Yizhi Liu
@merrymercy Do you think this analysis design can be easily extended to work on the TVM Tensor AST (HalideIR) instead of ScheduleStage? Not urgent, but I think eventually we will make schedule primitives work on HalideIR, so that we can unify the underlying data structures of schedule and …

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Tianqi Chen
Good discussions. I think in general we can move to summarize the common patterns and make things work for specific hardware backends. As for the point brought up by @yzhliu (unifying schedule with passes), eventually ScheduleStage itself (or another IR structure) can be viewed as a dialect of the IR, and we …