# Motivation
Cloud devices are more powerful than edge devices, providing higher
computation capability for deep learning workloads. For example, for the VTA
core, cloud devices give us the resources to support larger GEMM cores
(e.g., 32\*32 or even 64\*64) and larger device buffers.
Thanks for the great meetup earlier today everyone. Video is up here:
https://youtu.be/mW7dk-rXuy8
---
[Visit
Topic](https://discuss.tvm.ai/t/utvm-embedded-focus-online-meetup/6908/12) to
respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click here].
Thanks for the discussion. Here are my thoughts.
### API Usage
The API for tuning a whole neural network will be the same as in AutoTVM
(extract tasks and tune all of them).
The API for writing templates is still under development. But it will be
similar to autotvm.
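For readers new to this flow, here is a toy Python sketch of the extract-tasks-then-tune pattern the post describes. All names here (`extract_tasks`, `tune`, `config_space`, `cost`) are made up for illustration only, not the real AutoTVM API; they just mirror its shape: pull one tuning task per tunable op out of a network, then search each task's config space independently and keep the best configuration.

```python
# Toy illustration of the extract-tasks-then-tune workflow.
# NOTE: these are hypothetical names, not the real TVM/AutoTVM API.

def extract_tasks(network):
    # AutoTVM walks the graph and yields one task per tunable op;
    # here we simply filter the op list.
    return [op for op in network if op["tunable"]]

def tune(task, n_trial):
    # Stand-in for a tuner: try up to n_trial candidate configs and
    # return the one with the lowest (simulated) cost.
    candidates = task["config_space"][:n_trial]
    return min(candidates, key=lambda cfg: cfg["cost"])

network = [
    {"name": "conv2d", "tunable": True,
     "config_space": [{"tile": 4, "cost": 2.0}, {"tile": 8, "cost": 1.5}]},
    {"name": "softmax", "tunable": False, "config_space": []},
]

best = {task["name"]: tune(task, n_trial=8) for task in extract_tasks(network)}
print(best)  # {'conv2d': {'tile': 8, 'cost': 1.5}}
```

Because each task is tuned independently, the per-task results can be logged and reused across runs, which is what makes the whole-network API a simple loop over tasks.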
### Performance in absolu
Thanks @merrymercy
The point of bringing up MKLDNN is that for the dense op these libraries have
a bag of tricks that might be difficult to achieve in TVM. @haichen has done
nice work on TVM+MKLDNN for BERT, which has become the standard way we support
BERT on cloud CPUs. It would be nice to
For #6 (export stats), I think you're absolutely right. I think there can be
other interesting on-device stats (e.g., IRQs triggered, number of function
executions, etc.). This is also the last one on the roadmap since it's a bit
less planned relative to the others.
On #2, I think some part should run
I support fully deprecating template-based AutoTVM. Technically, template-based
AutoTVM's search space is a subset of Ansor's. We may temporarily keep both
AutoTVM and Ansor for one release, but in the long run I can't see any reason
we should keep AutoTVM.
I agree. As long as we can demonstrate that Ansor's customized rules fully
cover the current AutoTVM templates in terms of both semantics and performance,
we can deprecate AutoTVM. While we work toward this goal, there will
definitely be a period during which we keep both solutions.
@merrymercy et al.,
First, Ansor is wonderful work, congrats to all!
* Permit me to bring attention to: https://arxiv.org/pdf/2002.02145.pdf
They had a public repo (removed a few days ago); I am still wondering whether
polyhedral priors could bring any benefit to Ansor (e.g. help to
reduce/opt
Good point. We (AWS) have a plan toward this direction this summer.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/18)
to respond.
I noticed that while most attribute nodes inherit from `Attrs`, some don't and
are defined only on the C++ side (thus being mapped to `Object`). In
particular, they don't have the `keys` function.
Defining them with a short docstring like the others is easy, but would that
be an acceptable patch?
Best regards,
Thomas
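To make the distinction concrete, here is a toy Python sketch (not the real TVM object system; both classes and the field names are invented for illustration) of why a typed wrapper matters: a generic `Object`-style fallback hides its field names, while an `Attrs`-style wrapper can expose them via `keys()`.

```python
# Toy sketch of the introspection gap the post describes.
# NOTE: hypothetical classes, not TVM's actual Object/Attrs hierarchy.

class Object:
    """Generic fallback wrapper: fields exist but are not introspectable."""
    def __init__(self, **fields):
        self._fields = fields

class Attrs(Object):
    """Attrs-style wrapper: adds the keys() introspection the post mentions."""
    def keys(self):
        return list(self._fields)

# A node mapped only to Object gives callers no way to list its fields...
opaque = Object(axis=1, epsilon=1e-5)
# ...while the same data behind an Attrs subclass is introspectable.
typed = Attrs(axis=1, epsilon=1e-5)
print(typed.keys())  # ['axis', 'epsilon']
```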
I agree that it would be good to add those attrs on the Python side so that
they map to Attrs.
---
[Visit Topic](https://discuss.tvm.ai/t/attrs-not-inheriting-from-attrs/7029/2)
to respond.
Formal RFC is here: https://github.com/apache/incubator-tvm/issues/5840
PRs are here:
https://github.com/apache/incubator-tvm-vta/pull/9
https://github.com/apache/incubator-tvm/pull/5842
@elnaz92 You may check out the code and try it first.
Thanks for the PRs, this is a very welcome contribution! Expect some initial
comments/reviews tomorrow.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-vta-support-for-cloud-devices-opencl-compatible/6676/30)
to respond.