Thank you for your interest.
A1: Current op fusing is based on `stage`, but the critical point is fusing the injective computation. We can also inline injective computation with `traverse_inline`, so there is no doubt that FuseOps still works. As for the philosophy, I think there are only a few changes
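To illustrate the inlining path mentioned above, here is a minimal sketch (toy element-wise ops, not taken from the RFC) of how injective stages are typically inlined with TOPI's `traverse_inline` helper:

```python
import tvm
from tvm import te, topi
# in older releases this helper lives in tvm.topi.util rather than tvm.topi.utils
from tvm.topi.utils import traverse_inline

A = te.placeholder((64, 64), name="A")
B = topi.nn.relu(A)   # injective op
C = topi.exp(B)       # another injective op, used as the schedule output
s = te.create_schedule(C.op)

def _callback(op):
    # a real schedule would handle its non-injective "anchor" ops here;
    # injective producers reachable from C.op are inlined by traverse_inline
    pass

traverse_inline(s, C.op, _callback)
print(tvm.lower(s, [A, C], simple_mode=True))
```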
Well-received with thanks!
---
Hi,
Even though I don't think I understood everything, I like the idea of solving some of the limitations of `te.compute`. Since `te.compute` is a central part of the TVM stack, changing it requires a lot of work and understanding, so thank you all for continuing this development.
Q1:
@heliqi @lsy643 I encountered a similar problem; did you solve it?
The error:
```shell
strided_slice(%307, meta[relay.Constant][0], meta[relay.Constant][1], meta[relay.Constant][2], begin=[0, 3, 0, 0], end=[1, 0, 1, 512], strides=[1, 1, 1, 1])
an internal invariant was violated while t
```
[quote="ds1231h, post:3, topic:7872"]
However, will this increase the coupling between the schedule and the lower
pass, which may lead to an increase in the complexity of the lower pass?
[/quote]
Thanks for your reply! @ds1231h
At the moment, we first transform TIR with blocks into TIR without blocks
Thanks for your reply! @jcf94
A1. We've tried tensorizing intrinsics using this new IR, and we are working on the TensorCore demo. Our design is really close to the original tensorize programming logic; it only differs in the declaration of the description & implementation of the HW intrinsic (we can use Hy
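For context on the pattern being compared against, here is a minimal sketch (a toy vector-add intrinsic with a made-up extern symbol, not the TensorCore demo) of the classic tensorize programming logic: a compute description plus an implementation callback, bound together with `te.decl_tensor_intrin`:

```python
import tvm
from tvm import te

def intrin_vadd(n=16):
    # description: what the hardware intrinsic computes
    a = te.placeholder((n,), name="a")
    b = te.placeholder((n,), name="b")
    c = te.compute((n,), lambda i: a[i] + b[i], name="c")

    # implementation: the TIR emitted in place of the scheduled loop
    def intrin_func(ins, outs):
        ib = tvm.tir.ir_builder.create()
        aa, bb = ins
        cc = outs[0]
        # "vadd_update" is a placeholder extern symbol standing in for a
        # real hardware intrinsic
        ib.emit(tvm.tir.call_extern("int32", "vadd_update",
                                    cc.access_ptr("w"),
                                    aa.access_ptr("r"),
                                    bb.access_ptr("r"),
                                    n))
        return ib.get()

    Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="Ab", offset_factor=1)
    Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="Bb", offset_factor=1)
    Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="Cb", offset_factor=1)
    return te.decl_tensor_intrin(c.op, intrin_func, binds={a: Ab, b: Bb, c: Cb})
```

A schedule would then split the target loop to length `n` and call `s[C].tensorize(inner_axis, intrin_vadd(n))` on it.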
Great work! I believe this will make "scheduling" more flexible and intuitive!
However, will this increase the coupling between the schedule and the lower
pass, which may lead to an increase in the complexity of the lower pass?
By the way, I'm also looking forward to learning how to auto-schedule
Great work!
Have you tried tensorizing intrinsics (e.g., a TensorCore schedule) using this new IR? I remember that supporting tensorize was also one of your initial motivations.
---
@tqchen recommended that we first format the entire code base using these settings and then try to land the CI parts; I'm going to open a second PR with the fully formatted repo.
---
@junrushao1994 @comaniac @areusch I just added the scripts and cleaned some things up; take another pass if you can.
---
Yes, @tqchen and I will post an RFC soon for the binary distribution.
---
@areusch @tqchen @comaniac I can roll back the formatting; the first 3 or 4 commits were focused on formatting, then I went through the process to see if it would actually work.
---
I believe @haichen and @tqchen are actively working on this.
---
cc @anijain2305, @FrozenGene, @ramana-arm
---
It is a bit hard to review 1000 files... maybe just take a look at the `pyproject.toml` file and assume the other parts are correct?
---
## Motivation
In recent RFCs we successfully boosted convolution performance on native
Armv8-A architectures. When using Armv8.2-A and above ISAs, developers are
provided with a richer set of instructions, among which the dot-product
instruction `udot` (or `sdot`) can be particularly useful
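To make the arithmetic concrete, here is a small sketch (plain NumPy with arbitrary example values, not code from this RFC) of what a single `sdot` lane computes: a 4-way int8 dot product accumulated into an int32 lane, matching the int8 data/weights and int32 accumulators used in quantized convolution:

```python
import numpy as np

a = np.array([1, -2, 3, 4], dtype=np.int8)   # 4 int8 values from the data
b = np.array([5, 6, -7, 8], dtype=np.int8)   # 4 int8 values from the kernel
acc = np.int32(100)                          # existing int32 accumulator lane

# sdot (per lane): acc += a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]
acc = acc + np.sum(a.astype(np.int32) * b.astype(np.int32), dtype=np.int32)
print(acc)  # 100 + (5 - 12 - 21 + 32) = 104
```

Performing these four multiply-accumulates per lane in a single instruction is what makes `udot`/`sdot` attractive for int8 convolution inner loops.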
## Background and Motivation
TVM is an end-to-end deep learning compiler with two levels of IR and optimization. TVM translates popular DL frameworks into Relay and optimizes the computation graph, after which it lowers each graph node into a Tensor Expression (TE) and performs another round of function-level optimization.
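As a rough sketch of that two-level flow (the tiny dense+relu graph, shapes, and `llvm` target below are illustrative assumptions, not taken from the RFC):

```python
import tvm
from tvm import relay

# graph level: a small Relay function (dense followed by relu)
x = relay.var("x", shape=(1, 16), dtype="float32")
w = relay.var("w", shape=(8, 16), dtype="float32")
y = relay.nn.relu(relay.nn.dense(x, w))
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

# relay.build runs the graph-level optimizations, then lowers each operator
# through TE to low-level TIR and codegen (requires a TVM build with LLVM)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")
```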
There is an effort underway by community members to do a binary release of TVM with linked dependencies under the name `tlcpack`. The goal of these packages is to include TVM linked with components that do not have open-source-friendly licenses. My understanding is that its official release
Is there any plan to use CI to build the TVM binary and package it for PyPI? This could help distribute TVM to end users easily. If it is not on the roadmap, would you welcome contributions on this front?
---