@tqchen I plan to support the TFLite NHWC data layout after my quantization part
is upstreamed. However, NCHW has its advantages as described. We could have two
options:
- Keep NCHW for TFLite and add one parameter named `layout` to `from_tflite`.
`layout` could be `NCHW` or `NHWC` (see the sketch after this exchange for what
that could look like). The default value
I vote for using TFLite's original layout as it is. The internal conversion
logic makes it complex to add new features.
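For reference, a minimal sketch of what the proposed `layout` argument to `from_tflite` could look like. The parameter is only a suggestion in this thread, so its name, its placement, and the shape/dtype values below are assumptions, not the current API:
```
import tflite
import tvm
from tvm import relay

# Load a TFLite model (path is illustrative).
with open("model.tflite", "rb") as f:
    tflite_model = tflite.Model.GetRootAsModel(f.read(), 0)

# Hypothetical `layout` keyword: "NCHW" would keep today's behaviour,
# "NHWC" would keep TFLite's native layout without inserting transposes.
mod, params = relay.frontend.from_tflite(
    tflite_model,
    shape_dict={"input": (1, 224, 224, 3)},
    dtype_dict={"input": "float32"},
    layout="NCHW",  # assumed parameter, not part of the existing signature
)
```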
This would be a really useful feature! Looking forward to it!
Recently, a few problems have arisen with respect to the fusor and tuples. The
main problem is the incompatibility of the calling convention when dealing with
tuples.
- At the lowest level, the primitive function is not aware of the tuple, and we
simply flatten everything, including inputs and outputs (see the sketch below).
https://github.com/dmlc/tvm/issues/3039
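To make the flattening concrete, here is a minimal sketch (assuming the `te` API; the operator and shapes are made up): a primitive function with one input and two outputs is built into a PackedFunc whose arguments are the flat list of all input and output tensors, with no tuple left at this level.
```
import tvm
from tvm import te

x = te.placeholder((32, 64), name="x")
y0 = te.compute(x.shape, lambda i, j: x[i, j] + 1.0, name="y0")
y1 = te.compute(x.shape, lambda i, j: x[i, j] * 2.0, name="y1")

s = te.create_schedule([y0.op, y1.op])
# The lowered function has the flat signature f(x, y0, y1);
# the tuple structure of the outputs is gone at this level.
f = tvm.build(s, [x, y0, y1], target="llvm")
```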
Closed #2931.
@tqchen where is %2?
@junrushao1994
> @tqchen where is %2?
There might be some code omitted, but the point is to show the problem when
dealing with duplicate values in return tuples.
> why is the example bad for codegen
The output tensor is scheduled twice in compute_engine here:
https://github.com/dmlc/tvm/blob/552d4a
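A minimal Python sketch of the kind of function being discussed (the operator and shapes are assumptions, not the exact example from the thread): both tuple fields are the same tensor, so after fusion the primitive function returns the same value twice, which is what leads to the output being scheduled twice.
```
import tvm
from tvm import relay

x = relay.var("x", shape=(64, 64), dtype="float32")
y = relay.nn.relu(x)
# The same tensor appears twice in the return tuple.
f = relay.Function([x], relay.Tuple([y, y]))

mod = tvm.IRModule.from_expr(f)
mod = relay.transform.InferType()(mod)
mod = relay.transform.FuseOps(fuse_opt_level=2)(mod)
print(mod)  # inspect the fused function; both tuple fields refer to the same value
```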
@tqchen could you provide examples of the `attrs=ValueFormat` suggestion?
# Supporting Dynamic Dimensions
I recently opened an RFC proposing a new dynamic runtime (see #2810).
A missing piece of the puzzle for supporting fully dynamic models is typing and
code generation for tensors with statically unknown shapes.
There are three critical steps to supporting dynami
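As a rough illustration of the typing piece, here is a minimal sketch assuming the `Any` placeholder dimension that Relay later grew for this purpose (the exact API proposed in the RFC may differ):
```
import tvm
from tvm import relay

# The first dimension is statically unknown; only the second is fixed.
x = relay.var("x", shape=(relay.Any(), 64), dtype="float32")
y = relay.nn.relu(x)
f = relay.Function([x], y)

mod = relay.transform.InferType()(tvm.IRModule.from_expr(f))
print(mod)  # the inferred result type is Tensor[(?, 64), float32]
```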
We can update TOPI to prevent scheduling the same op twice. It is partially
done in #1556
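A minimal sketch of the guard being suggested (an assumed pattern, not the actual change in #1556): track ops that have already been scheduled so that a tensor reachable through several outputs of a fused function is only scheduled once.
```
from tvm import te

def traverse(s, op, scheduled_ops, schedule_one):
    """Schedule `op` and its compute inputs, skipping ops seen before."""
    if op in scheduled_ops:
        return  # already scheduled via another output of the fused function
    scheduled_ops.add(op)
    schedule_one(s, op)  # apply the per-op scheduling logic
    for tensor in op.input_tensors:
        if isinstance(tensor.op, te.ComputeOp):
            traverse(s, tensor.op, scheduled_ops, schedule_one)
```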
@masahi Can we prevent passing duplicated tensors instead? It looks like we
would otherwise need to change all schedules for all targets in topi, right?
@zhiics I don't see a solution to your problem without breaking other parts
(like breaking the calling convention, as you and @vinx13 said). I'll try fixing
the fuser so that a tuple like (%0, %0) won't happen.
It seems #2412 is also related to this issue. @srkreddy1238
[mentioned](https://github.com/dmlc/tvm/pull/2412#issuecomment-453912379) that
enabling fusion caused an LLVM error.
When fusion support for tuples was added in #2187, I handled the case where
* the tuple is the root of its group
* the tuple
@zhiics can you replace [this
block](https://github.com/dmlc/tvm/blob/master/src/relay/pass/fuse_ops.cc#L822-L848)
with this:
```
Expr VisitExpr_(const TupleNode* tuple) {
  auto* ret_group = gmap_.at(tuple)->FindRoot();
  if (ret_group == gmap_.at(tuple)) {
    return ExprMutator::VisitExpr_(tuple);
  }
  // ...
}
```
One possible way is to make the fusor smarter: forbid fusing a node into a
TupleNode if the tuple is the last node in its group, but allow the TupleNode to
fuse into subsequent injective ops. Then, in the second iteration, the other
node can fuse into the tuple node because the group no longer has the tuple.
We should be aware that if we disable tuple fusion when the tuple is the return
value, we might lose some of the efficiency gain.
So this function:
```
fn (%x: Tensor[(64, 64), float32]) -> (Tensor[(32, 64), float32], Tensor[(32, 64), float32]) {
  %2 = fn (%p0: Tensor[(64, 64), float32], __dict__=meta[Str
```
BTW, I am not certain that not fusing the return tuple will fully solve the
problem, because it looks to me like we will still have two identical tensors in
the tuple, right? Am I missing something?