Re: [dmlc/tvm] [RFC] Frontend layout transformation (#2519)

2019-04-17 Thread Zhao Wu
@tqchen I plan to support the TFLite NHWC data layout after my quantization work is upstreamed. However, NCHW has its advantages, as described. We could have two options:
- Keep NCHW for TFLite and add one parameter named `layout` to `from_tflite`. `layout` could be `NCHW` or `NHWC`. The default value…
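A minimal sketch of how the proposed option could look from the user side; the `layout` keyword is the parameter under discussion here, not an existing argument of `from_tflite`, and the model path and input metadata are placeholders:

```
import tflite.Model
from tvm import relay

# Load a TFLite flatbuffer (path and input metadata are placeholders).
buf = open("mobilenet.tflite", "rb").read()
model = tflite.Model.Model.GetRootAsModel(buf, 0)

mod, params = relay.frontend.from_tflite(
    model,
    shape_dict={"input": (1, 224, 224, 3)},  # TFLite models are natively NHWC
    dtype_dict={"input": "float32"},
    # layout="NCHW",  # hypothetical parameter proposed in this thread
)
```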

Re: [dmlc/tvm] [RFC] Frontend layout transformation (#2519)

2019-04-17 Thread Siva
I vote for using the TFLite original layout as it is. The internal conversion logic makes it complex when adding new features.

[TVM Discuss] [Development] Disable assert in runtime

2019-04-17 Thread Harouwu via TVM Discuss
This would be a really useful feature! Looking forward to it!

[dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Tianqi Chen
Recently, a few problems have arisen with respect to the fusor and tuples. The main problem is the incompatibility of the calling convention when dealing with tuples.
- At the lowest level, the primitive function is not aware of the tuple, and we simply flatten everything, including inputs and output…
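A small sketch of the flattening described in the first point, written against the current `te` API (shapes and ops are illustrative): a computation that logically yields a pair is built as one primitive function whose calling convention is just the flattened argument list, one output buffer per tuple field.

```
import numpy as np
import tvm
from tvm import te

# One "function" producing two results.
x = te.placeholder((64,), "float32", name="x")
y0 = te.compute((64,), lambda i: x[i] + 1.0, name="y0")
y1 = te.compute((64,), lambda i: x[i] * 2.0, name="y1")

s = te.create_schedule([y0.op, y1.op])
# The lowered primitive function is tuple-unaware: its signature is
# just the flattened list (x, y0, y1).
f = tvm.build(s, [x, y0, y1], target="llvm")

dev = tvm.cpu()
a = tvm.nd.array(np.ones(64, "float32"), dev)
b0 = tvm.nd.empty((64,), "float32", dev)
b1 = tvm.nd.empty((64,), "float32", dev)
f(a, b0, b1)  # tuple fields passed as separate output arguments
```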

Re: [dmlc/tvm] [Relay][RFC] FuseOps and Tuple (#2931)

2019-04-17 Thread Tianqi Chen
https://github.com/dmlc/tvm/issues/3039

Re: [dmlc/tvm] [Relay][RFC] FuseOps and Tuple (#2931)

2019-04-17 Thread Tianqi Chen
Closed #2931.

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread 雾雨魔理沙
@tqchen where is %2?

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Zhi
@junrushao1994

> @tqchen where is %2?

There might be some code omitted, but the idea is to show the problem when dealing with duplicate values in return tuples.

> why is the example bad for codegen?

The output tensor is scheduled twice in compute_engine here: https://github.com/dmlc/tvm/blob/552d4a…
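For reference, a minimal repro sketch of the duplicate-value case in the current Relay API (the function body is made up; only the (%0, %0) shape of the return tuple matters):

```
import tvm
from tvm import relay

x = relay.var("x", shape=(64, 64), dtype="float32")
y = relay.add(x, relay.const(1.0))
# The return tuple repeats one value — the (%0, %0) case.
f = relay.Function([x], relay.Tuple([y, y]))

mod = tvm.IRModule.from_expr(f)
mod = relay.transform.InferType()(mod)
mod = relay.transform.FuseOps(fuse_opt_level=2)(mod)
print(mod)  # the fused function returns the same tensor in both fields
```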

Re: [dmlc/tvm] [RFC][Relay] Text Format Part 2 (#3016)

2019-04-17 Thread Josh Pollock
@tqchen could you provide examples of the `attrs=ValueFormat` suggestion?

[dmlc/tvm] [RFC][Relay] Dynamic Dimensions (#3042)

2019-04-17 Thread Jared Roesch
# Supporting Dynamic Dimensions

I recently opened an RFC proposing a new dynamic runtime (see #2810). A missing piece of the puzzle for supporting fully dynamic models is typing and code generation for tensors with statically unknown shapes. There are three critical steps to supporting dynamic…
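As a sketch of what typing a statically unknown dimension looks like, using the `Any` dimension this line of work later brought to Relay (the function itself is illustrative):

```
from tvm import relay

# A tensor whose first dimension is statically unknown: (?, 64).
x = relay.var("x", shape=(relay.Any(), 64), dtype="float32")
f = relay.Function([x], relay.nn.relu(x))
print(f)  # the unknown dimension prints as ? in the type
```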

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread masahi
We can update TOPI to prevent scheduling the same op twice. It is partially done in #1556.
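One possible shape of such a TOPI-side guard, as a hedged sketch rather than the actual #1556 change: deduplicate the output tensors before building the schedule, so a tuple like (%0, %0) schedules each op once.

```
from tvm import te

def schedule_injective(outs):
    """Sketch of a duplicate-safe injective schedule."""
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
    # Dedup first: the output tuple may repeat the same tensor, (%0, %0).
    seen, unique = set(), []
    for t in outs:
        if t.op not in seen:
            seen.add(t.op)
            unique.append(t)
    s = te.create_schedule([t.op for t in unique])
    for t in unique:
        s[t].fuse(*s[t].op.axis)  # each op is scheduled exactly once
    return s
```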

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Zhi
@masahi Can we prevent passing duplicated tensors instead? It looks like we would otherwise need to change all schedules for all targets in TOPI, right?

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread masahi
@zhiics I don't know of a solution to your problem that doesn't break other parts (like breaking the calling convention, as you and @vinx13 said). I'll try fixing the fuser so that a tuple like (%0, %0) won't happen.

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread masahi
It seems #2412 is also related to this issue. @srkreddy1238 [mentioned](https://github.com/dmlc/tvm/pull/2412#issuecomment-453912379) that enabling fusion caused an LLVM error. When fusion support for tuples was added in #2187, I handled the cases where:
- the tuple is the root of its group
- the tuple…

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread masahi
@zhiics can you replace [this block](https://github.com/dmlc/tvm/blob/master/src/relay/pass/fuse_ops.cc#L822-L848) with this:
```
Expr VisitExpr_(const TupleNode* tuple) {
  auto* ret_group = gmap_.at(tuple)->FindRoot();
  if (ret_group == gmap_.at(tuple)) {
    return ExprMutator::VisitExpr_(tuple);
  }
  // …
```

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Tianqi Chen
One possible way is to make the fusor smarter: forbid fusing a node into a TupleNode if that tuple is the last node in its group, but allow fusing the TupleNode into subsequent injective ops. Then, in the second iteration, the other node can fuse into the tuple node, because the group no longer has the tuple…
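A toy model of that two-pass behaviour, with everything invented for illustration (this is not the actual fusor): merging singleton groups over a linear chain is refused while the consumer group still ends in a tuple, and the second iteration picks up the merge the first one had to skip.

```
def fuse(order, kinds, passes=2):
    """order: topologically sorted node names; kinds: name -> 'injective' or 'tuple'."""
    groups = [[n] for n in order]  # start from singleton groups
    for _ in range(passes):
        i = 0
        while i + 1 < len(groups):
            producer, consumer = groups[i], groups[i + 1]
            # Forbid fusing into a group whose last node is still a tuple;
            # the tuple itself may fuse forward into an injective consumer.
            if kinds[consumer[-1]] != "tuple":
                groups[i:i + 2] = [producer + consumer]
            else:
                i += 1
    return groups

# A -> tuple -> B: pass 1 fuses the tuple into B; pass 2 fuses A in.
print(fuse(["A", "tup", "B"], {"A": "injective", "tup": "tuple", "B": "injective"}))
# [['A', 'tup', 'B']]
```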

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread masahi
We should be aware that if we disable tuple fusion when the tuple is the return value, we might lose some efficiency gain. So this function:
```
fn (%x: Tensor[(64, 64), float32]) -> (Tensor[(32, 64), float32], Tensor[(32, 64), float32]) {
  %2 = fn (%p0: Tensor[(64, 64), float32], __dict__=meta[Str…
```

Re: [dmlc/tvm] [RFC][DISCUSS] Tuple-related Fusion (#3039)

2019-04-17 Thread Zhi
BTW, I am not certain that stopping fusion of the return tuple will fully solve the problem, because it looks to me like we will still have two identical tensors in the tuple, right? Am I missing something?