Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Thierry Moreau
Thanks @merrymercy, this is really awesome work. I second Jared's comment on 
the work involved in adding a backend. I'd be happy to chat some more about how 
one would add automated compilation to different hardware accelerators, 
including VTA.


Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Lianmin Zheng
@jroesch Currently, it is about 500 lines of code (LOC) per backend. I am 
working on improvements, so it may increase.

@yzhliu 
* simple reduction: reduction ops with no reuse opportunity (e.g. softmax, 
argmin)
* complex reduction: reduction ops with a reuse opportunity (e.g. matmul, 
conv2d)
* direct compute: broadcast, elemwise, and stencil computation (e.g. relu, add)
* location-tunable compute: the same ops as above. The difference is that 
`direct compute` computes at root, while `location-tunable compute` can compute 
at other nodes to increase locality (see the sketch below).
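
To make the four categories concrete, here is a minimal sketch of each one as a 
compute declaration. It uses today's `tvm.te` API rather than the code from 
this RFC, and all shapes and names (`A`, `B`, `n`) are purely illustrative:

```python
import tvm
from tvm import te

n = 128
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")

# direct / location-tunable compute: elementwise, no reduction (e.g. relu)
relu = te.compute((n, n),
                  lambda i, j: te.max(A[i, j], tvm.tir.const(0, A.dtype)),
                  name="relu")

# simple reduction: each input element is read once, so no reuse (a row sum)
k = te.reduce_axis((0, n), name="k")
row_sum = te.compute((n,), lambda i: te.sum(A[i, k], axis=k), name="row_sum")

# complex reduction: every element of A and B is reused n times (matmul)
kk = te.reduce_axis((0, n), name="kk")
matmul = te.compute((n, n),
                    lambda i, j: te.sum(A[i, kk] * B[kk, j], axis=kk),
                    name="matmul")
```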

@tmoreau89 This is doable. The problem with accelerators is that if we want the 
auto-scheduler to take in a hardware-independent compute description, then we 
need a special pack pass to transform the layout.
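
As a rough illustration (this is not the actual pack pass), such a layout 
transformation can itself be written as a compute stage in `tvm.te`: repacking 
a hardware-independent NCHW tensor into a blocked NCHW{n}c layout of the kind 
an accelerator like VTA expects. The blocking factors `BN` and `BC` below are 
hypothetical:

```python
import tvm
from tvm import te

N, C, H, W = 4, 64, 56, 56
BN, BC = 1, 16  # hypothetical blocking factors of the tensor intrinsic

# hardware-independent declaration
data = te.placeholder((N, C, H, W), name="data")

# packed view: (N//BN, C//BC, H, W, BN, BC), so the innermost dims
# match the accelerator's tensor intrinsic shape
packed = te.compute(
    (N // BN, C // BC, H, W, BN, BC),
    lambda no, co, h, w, ni, ci: data[no * BN + ni, co * BC + ci, h, w],
    name="data_packed",
)
```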




Re: [dmlc/tvm] [DEV] TVM v0.6 Roadmap (#2623)

2019-04-03 Thread Lianmin Zheng
# TVM Monthly - March 2019
https://discuss.tvm.ai/t/tvm-monthly-march-2019/2083


Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Jared Roesch
@merrymercy I'm less interested in LOC and more in how much conceptual burden 
there is. My real question is: what are the key pieces that make up a backend 
description?

I skimmed the code, but I was at SysML and have two deadlines this week, so I 
haven't had a chance to really dig into it. I look forward to landing this 
stuff.

One idea I've been thinking about is a combined TVM + Relay language where we 
can auto-extract chunks that can be lowered to the compute language, 
auto-schedule them, and then auto-tune for end-to-end performance.


[TVM Discuss] [Development] Google's latest work: MLIR Primer

2019-04-03 Thread Ehsan M Kermani via TVM Discuss

Open sourced: https://github.com/tensorflow/mlir

Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Yao Wang
@merrymercy The auto-scheduler will create another search space consisting of 
schedule templates. For a given set of hardware parameters, it will try various 
schedule templates and, for each template, do some auto-tuning on a real 
device. This means that for each minor device type we need to repeat all of 
these steps. Do I understand that correctly?
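
If that reading is right, the flow is a two-level search, roughly like the 
self-contained toy below. Every name here is hypothetical (none of this is the 
RFC's API); it only illustrates why both levels have to be repeated for each 
device type:

```python
import random

def measure_on_device(template_id, knobs):
    # Stand-in for a real on-device measurement; returns a fake cost.
    return random.random() * knobs["tile"] / (template_id + 1)

def auto_schedule(num_templates=4, trials_per_template=8):
    best_cost, best_cfg = float("inf"), None
    for template_id in range(num_templates):   # outer space: schedule templates
        for _ in range(trials_per_template):   # inner space: this template's knobs
            knobs = {"tile": random.choice([4, 8, 16, 32])}
            cost = measure_on_device(template_id, knobs)
            if cost < best_cost:
                best_cost, best_cfg = cost, (template_id, knobs)
    return best_cost, best_cfg

# Rerun for every minor device type you target:
print(auto_schedule())
```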


Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Yizhi Liu
@merrymercy Do you think this analysis design can easily be extended to work on 
the TVM Tensor AST (HalideIR) instead of ScheduleStage? It's not urgent, but I 
think we will eventually make schedule primitives work on HalideIR, so that we 
can unify the underlying data structure of schedules and other passes.


Re: [dmlc/tvm] [RFC][AUTOTVM] Auto-Schedule from Compute Declaration (#2954)

2019-04-03 Thread Tianqi Chen
Good discussion. I think in general we can move to summarize the common 
patterns and make things work for specific hardware backends. As for the point 
brought up by @yzhliu (unifying schedule with passes), eventually ScheduleStage 
itself (or another IR structure) can be viewed as a dialect of the IR, and we 
can do so after we push for such unification.
