I believe using this needs CMake 3.12 or later because of the use of
FindPython3 in your cmake modules. This would require an update to the
install-from-source documentation, which currently implies that CMake 3.5 or
later is enough for building tvm.
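For reference, a minimal sketch of the CMake side of this (the project name and the exact components are illustrative here, not taken from TVM's actual CMakeLists.txt):

```cmake
# The FindPython3 module only ships with CMake 3.12 and later, so a
# CMakeLists.txt that calls find_package(Python3) needs at least:
cmake_minimum_required(VERSION 3.12)
project(example_project)

# This call fails on CMake < 3.12 because the module does not exist there.
find_package(Python3 COMPONENTS Interpreter Development)
```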
---
[Visit Topic](https://discuss.tvm.ai/t/add-the-do
Thanks, that sounds like it should be relatively straightforward to integrate.
Ramana
---
[Visit
Topic](https://discuss.tvm.ai/t/per-axis-quantization-support-for-tflite/6726/4)
to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [c
Hello there,
Welcome to the community! AFAIK, there is nothing in place for signed int8
symmetric quantization support in the tflite frontend yet, even in master;
however, I believe the underlying code generation framework can support it
with the QNN dialect of Relay, based on this
https://di
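To make the scheme concrete, here is a minimal pure-Python sketch of what signed int8 symmetric quantization means (symmetric: the zero point is fixed at 0). The function names are made up for illustration; this is not the TVM/QNN API.

```python
def quantize_symmetric_int8(values, scale):
    """Map real values to signed int8 with a single scale and zero_point == 0:
    q = clamp(round(x / scale), -127, 127)."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize_int8(q_values, scale):
    """Recover approximate real values: x ~= q * scale."""
    return [q * scale for q in q_values]

# Scale chosen so the largest magnitude lands near the int8 limit.
vals = [0.0, 0.5, -1.0, 1.27]
q = quantize_symmetric_int8(vals, scale=0.01)
print(q)  # [0, 50, -100, 127]
print(dequantize_int8(q, scale=0.01))
```

Per-axis quantization (as discussed in the topic above) is the same idea with one scale per output channel instead of a single scale per tensor.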
Any more opinions?
Ramana
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-improve-pull-requests-with-respect-to-bug-fixes/6529/4)
to respond.
clang-tidy certainly looks interesting, as something deeper than clang-format,
and it is likely to help us with other aspects that we may be missing.
However, I'm probably a bit old-school and would be a bit more careful about
clang-tidy -fix ... :)
That might be the next step.
Maybe take the next steps?
1. Do a flag-day clang-format rewrite and take the one-time cost of every
in-flight patch having a merge conflict?
2. Once we are clang-format clean, have CI run clang-format and fail
instantly if there is any change in the source base compared to the pull
request.
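For step 2, a sketch of what such a CI gate could look like, assuming clang-format 10 or later (for `--dry-run`/`-Werror`) and that the sources live under `src/` and `include/` (the paths are illustrative):

```shell
#!/bin/sh
# Hypothetical CI gate: fail the build if clang-format would change anything.
set -e

# Option A (clang-format 10+): report violations without rewriting files.
find src include \( -name '*.cc' -o -name '*.h' \) -print0 \
  | xargs -0 clang-format --dry-run -Werror

# Option B: reformat in place, then fail on any resulting diff.
find src include \( -name '*.cc' -o -name '*.h' \) -print0 \
  | xargs -0 clang-format -i
git diff --exit-code
```

Either option makes the check binary: the tree is clang-format clean or the build fails, with no per-reviewer style debates.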
**Motivation**
We would like to move towards more predictable release cycles, with clearer
expectations of how each release will be used. As part of this, releases need
regression fixes. However, if the community is making releases, th
To move this forward, I spent some time over the past few days getting both
TF1.15 and TF2.x tested with our CI, and ran into a few issues.
See
https://github.com/apache/incubator-tvm/pull/5392
https://github.com/apache/incubator-tvm/pull/5391
regards
Ramana
---
@jknight - In case it wasn't obvious, I do support the initiative.
Yes, the scheme you have outlined seems to work reasonably well for
disseminating information about new features.
When there are interactive discussions in that fashion and design changes are
made due to the discussion
My motivation was indeed for peer collaboration or interactive peer
conversations and an additional use of existing tools in the toolbox.
regards
Ramana
---
[Visit Topic](https://discuss.tvm.ai/t/tvm-online-meetups/6382/4) to respond.
I think this is a good initiative. However, it is quite expensive in terms of
logistics and organization.
Additionally, it's probably time to think about using the Slack channels more,
and ensuring that conversations on Slack move to the Discuss forums or the
PRs once the interactive conversation ends.
I wasn't proposing that as the solution; it is one of the options. I'm merely
stating that this is still a problem that will hit others, most notably anyone
using the C backend.
Ramana
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/61)
to respond.
So, the problem hasn't been fixed: there is a "solution" that depends on the
presence of an llvm target.
Ramana
---
[Visit
Topic](https://discuss.tvm.ai/t/discuss-module-based-model-runtime-interface/5025/59)
to respond.
This won't work by default for the C backend, where we don't necessarily rely
on the presence of llvm. Or are we saying that there needs to be an llvm
solution just so this constant data object can always be produced? Either way,
we do need a general solution.
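One llvm-free approach that would work for the C backend is to serialize the constant data into a plain C source file that any C compiler can build. This is only a sketch of that idea, not TVM's actual mechanism; the names are made up:

```python
def bytes_to_c_array(name, data):
    """Render raw bytes as a C unsigned-char array definition, so constant
    data can be compiled by any C toolchain instead of emitted via llvm."""
    body = ", ".join(str(b) for b in data)
    return (
        f"const unsigned char {name}[{len(data)}] = {{{body}}};\n"
        f"const unsigned long {name}_len = {len(data)};\n"
    )

# Hypothetical usage: dump serialized params next to the generated C code.
print(bytes_to_c_array("example_params", b"\x01\x02\xff"))
```

The generated file is then just another translation unit in the C backend's output, with no dependency on an llvm target being present.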
Ramana
---
I would start by incorporating these points into the "Development Process"
section of the TVM documentation. I will put up a pull request, since no one
has commented on this in about two months.
Ramana
---
[Visit
Topic](https://discuss.tvm.ai/t/development-process-and-backporting-patches-to-rele
Hi Alexander,
Thanks for your response. Ah, I just saw the support for REDUCE_MAX. Let me
investigate again why this is failing for us with an unsupported-operator
error.
Sorry, no, our models aren't open sourced. Would you know of any tools, like
creduce, that could create smaller models to be used as test cases?
We've been trying to run some internal pre-quantized models with the tflite
frontend and ran into the following missing operators there. We'd like to add
support for these, and to see whether others in the community are interested
in this activity, to prevent any duplication of effort.
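For coordinating work like this, a trivial sketch of the kind of gap report involved: given the operators a model uses and the set the frontend supports, list what is missing. The operator lists here are illustrative, not the frontend's actual coverage:

```python
def missing_operators(model_ops, supported_ops):
    """Return, sorted, the model's operators the frontend cannot convert."""
    return sorted(set(model_ops) - set(supported_ops))

# Hypothetical example: two ops supported, two missing.
model_ops = ["CONV_2D", "REDUCE_MAX", "PADV2", "QUANTIZE"]
supported = ["CONV_2D", "REDUCE_MAX"]
print(missing_operators(model_ops, supported))  # ['PADV2', 'QUANTIZE']
```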