Yes, please let me know where I can help.
---
[Visit Topic](https://discuss.tvm.ai/t/who-is-interested-in-running-mlperf-on-vta/3145/3) to respond.
@tqchen @FrozenGene @ZihengJiang @zhiics @wweic @eqy
Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/pull/3457#issuecomment-506844165
The purpose of this PR is to dive deep into the design of the quantized ops. To
start the discussion, I have implemented the quantize and dequantize ops, which
are straightforward to implement. There is one more such
[PR](https://github.com/dmlc/tvm/issues/2351), but there the conversation has
meandered towar
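For readers following along, the affine scheme these ops implement can be sketched in plain Python. This is a sketch of the standard scale/zero-point formulation for a uint8 range, not the PR's actual Relay code; the function names and defaults here are illustrative only.

```python
# Illustrative affine quantization, assuming a uint8 output range [0, 255].

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """q = clip(round(x / scale) + zero_point, qmin, qmax)."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """x ~= scale * (q - zero_point): recovers an approximate float."""
    return scale * (q - zero_point)
```

With scale 1/128 and zero point 128, a float in [-1, 1] maps onto [0, 255], and dequantizing recovers the value up to rounding error.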
I am OK with the change.
- Let us use an enum, as it will make things clearer (we could accept a str in
the Python API).
- Let us make sure the code simplifies in the auto-broadcast mode.
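The enum-plus-string suggestion can be sketched as follows. The names here (`BroadcastMode`, `normalize_mode`) are hypothetical, purely to illustrate the pattern: an enum keeps the attribute explicit on the C++/attrs side, while the Python API stays convenient by also accepting a plain string.

```python
from enum import IntEnum

class BroadcastMode(IntEnum):
    # Hypothetical members, for illustration only.
    EXPLICIT = 0
    AUTO = 1

def normalize_mode(mode):
    """Accept a BroadcastMode member, its integer value, or its name."""
    if isinstance(mode, str):
        return BroadcastMode[mode.upper()]
    return BroadcastMode(mode)
```

A caller can then pass `"auto"` from Python while the rest of the codebase works only with the enum.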
Re: migration
We need to think about the implications for the simplifiers. Given that
auto_broadcast binds at a very late stage,
We've been trying to run some internal pre-quantized models with the TFLite
frontend and ran into the following missing operators. We'd like to add support
for these, and to see if there are others in the community who are interested
in this activity, to prevent any duplication of effort.
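Frontends of this kind typically dispatch each TFLite operator name through a conversion table, so "missing operators" are exactly the names absent from that table. A minimal standalone sketch of the pattern, with hypothetical names rather than the actual tvm.relay.frontend.tflite code:

```python
class OperatorConverter:
    """Sketch of a frontend-style op dispatch table (illustrative only)."""

    def __init__(self):
        # Supported ops are the keys of this map; importing a model that
        # uses any other op name raises NotImplementedError.
        self.convert_map = {
            "DEQUANTIZE": self.convert_dequantize,
        }

    def convert_op(self, op_name, inputs):
        if op_name not in self.convert_map:
            raise NotImplementedError(f"operator {op_name} is not supported yet")
        return self.convert_map[op_name](inputs)

    def convert_dequantize(self, inputs):
        # Placeholder: a real converter would emit the target IR here.
        return ("dequantize", inputs)
```

Adding support for a missing operator then amounts to writing one converter function and registering it in the map.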
As the VTA infrastructure is taking shape, we need to slot into the industry
ecosystem and be able to demonstrate viability. MLPerf appears to be a
reasonable path to take. I would like to start collaborating with folks who
are working on VTA implementations to create an MLPerf capability f