OK, as long as your implementation is not too complicated, we can go that way.
> @Rasterer Any update about this?
I'm afraid I cannot commit a PR due to an open source policy change at my
company. Code contributions are welcome if you are interested in this.
In addition to my original proposal, I think padding performance can be
improved further by changing the fusion pattern.
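To make the fusion point concrete, here is a minimal Relay sketch; the shapes and the pad/conv pairing are my own illustration, not the proposal from the issue. An explicit nn.pad that does not fuse into the following conv2d materializes an extra intermediate tensor, whereas padding carried by the conv2d attributes does not:

```python
from tvm import relay

# Assumed NCHW shapes, chosen only for illustration.
x = relay.var("x", shape=(1, 16, 32, 32), dtype="float32")
w = relay.var("w", shape=(32, 16, 3, 3), dtype="float32")

# Variant A: explicit nn.pad followed by a conv2d with padding=0.
# Unless the fusion pattern lets the pad fuse into the conv, this pays
# for an extra intermediate buffer.
padded = relay.nn.pad(x, pad_width=((0, 0), (0, 0), (1, 1), (1, 1)))
y_a = relay.nn.conv2d(padded, w, kernel_size=(3, 3), padding=(0, 0))

# Variant B: the same computation with the padding folded into conv2d
# itself, so no separate pad op is materialized.
y_b = relay.nn.conv2d(x, w, kernel_size=(3, 3), padding=(1, 1))

print(relay.Function([x, w], y_a))
print(relay.Function([x, w], y_b))
```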
Looks good to me, thank you
On Thu, Dec 5, 2019, 2:45 PM YiZhi Liu wrote:
> Hi Henry,
>
> Thanks for the reminder. Here's the vote result [1] I sent to
> general@incubator; let me know if anything could be further improved
> or revised.
>
> [1]
> https://lists.apache.org/thread.html/1e188b626e74838cd14d3570d3d81d13a3a696896ef39b9a31a37978%40%3Cgeneral.incubator.apache.org%3E
Hi Henry,
Thanks for the reminder. Here's the vote result [1] I sent to
general@incubator; let me know if anything could be further improved
or revised.
[1]
https://lists.apache.org/thread.html/1e188b626e74838cd14d3570d3d81d13a3a696896ef39b9a31a37978%40%3Cgeneral.incubator.apache.org%3E
Hi YiZhi,
Could you please close the VOTE thread on the general@ list by sending a
[RESULT] thread summarizing the tally of the release vote?
Thanks!
- Henry
On Thu, Dec 5, 2019 at 8:29 AM YiZhi Liu wrote:
> Hi all,
>
> The Apache TVM (incubating) community is happy to announce Apache TVM
> (incubating) version 0.6.0!
Hi all,
The Apache TVM (incubating) community is happy to announce Apache TVM
(incubating) version 0.6.0!
Apache TVM (incubating) is an open deep learning compiler stack for
CPUs, GPUs, and specialized accelerators. It aims to close the gap
between the productivity-focused deep learning frameworks, and the
performance- or efficiency-oriented hardware backends.
@Rasterer Any update about this?
@masahi This should be the correct approach. How about we implement the new
functions in the unified manner but use them only for 3D, and keep the
existing 2D ones untouched? Then we could migrate the 2D ops to the unified
implementation later, once we have enough experience and data.
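To make the shape of that migration concrete, a hypothetical sketch follows; the names are illustrative, not the actual TVM/topi functions. A rank-generic helper backs the new 3D op from the start, while the existing 2D path stays on its own code and can be switched over later without touching its callers:

```python
def conv_out_shape_nd(data_shape, kernel_shape, strides, padding):
    """Rank-generic output-shape computation usable by any N-D convolution."""
    return [
        (d + p_lo + p_hi - k) // s + 1
        for d, k, s, (p_lo, p_hi) in zip(data_shape, kernel_shape, strides, padding)
    ]

def conv3d_out_shape(data_shape, kernel_shape, strides, padding):
    """New 3D op: routed through the unified helper from day one."""
    return conv_out_shape_nd(data_shape, kernel_shape, strides, padding)

def conv2d_out_shape(data_shape, kernel_shape, strides, padding):
    """Existing 2D op: kept on its own code path for now, untouched."""
    return [
        (d + p_lo + p_hi - k) // s + 1
        for d, k, s, (p_lo, p_hi) in zip(data_shape, kernel_shape, strides, padding)
    ]

# A 3x3x3 kernel with padding 1 and stride 1 keeps the spatial shape,
# and the untouched 2D path agrees on the analogous 2D case.
print(conv3d_out_shape([8, 32, 32], [3, 3, 3], [1, 1, 1], [(1, 1)] * 3))  # [8, 32, 32]
print(conv2d_out_shape([32, 32], [3, 3], [1, 1], [(1, 1)] * 2))           # [32, 32]
```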
Aah, the split of IR.
It seems like both sides need their own analysis, and thus have to reinvent the
wheel twice.
Looking forward to the merge of IR.
@optima2005 Yes, I prefer a unified implementation. But that can potentially
affect existing users who use only the 2D ops. Most people don't care about
3D, so if we generalize some ops for 3D and in the process introduce a perf
regression or other bugs for existing 2D users, they will get upset.