Sorry for the lack of updates; I've been super oversubscribed lately (in the
process of finishing my thesis), plus lots of stuff at OctoML. @mbrookhart is a
hero and has a production-quality version he has been working on in C++. I
think he is getting really close to shipping a first version of the
@tqchen Here is a usage scenario that we are thinking about for Relay to ONNX
serialization. Let us say that there is a HW chip vendor whose model compiler
toolchain already supports ONNX as an input format. Since ONNX is quite limited
in its scope, there is only a small set of models that c
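
To make the scenario above a bit more concrete, here is a rough sketch (mine, not from the RFC) of the kind of ONNX graph such an exporter would hand to a vendor toolchain, built directly with the `onnx` helper API. The op, tensor names, and shapes are placeholders only.

```python
# Rough sketch: a tiny hand-built ONNX graph of the kind a Relay->ONNX exporter
# would emit for a vendor toolchain to consume. Names and shapes are
# placeholders, not part of the RFC.
import onnx
from onnx import helper, TensorProto

inp = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 8])
out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 8])
relu = helper.make_node("Relu", inputs=["x"], outputs=["y"])

graph = helper.make_graph([relu], "relay_export_sketch", [inp], [out])
model = helper.make_model(graph, producer_name="relay-to-onnx-sketch")
onnx.checker.check_model(model)  # validate before handing off to the toolchain
onnx.save(model, "relay_export_sketch.onnx")
```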
I tried to debug a little more and found out these:
* When the TVM function gets the random_uniform variable in the graph, it will
not process the next elements in the graph, so it produces output with a
different shape.
* Also, I have checked the params return value, which has around 70% of the
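
For context, a minimal repro along these lines (my own sketch, not code from the post; the TF graph, node names, and shapes are illustrative) might look like:

```python
# Hypothetical repro sketch for the random_uniform behaviour described above.
# Assumes the TensorFlow frontend (relay.frontend.from_tensorflow).
import tensorflow as tf
from tvm import relay

graph = tf.Graph()
with graph.as_default():
    rand = tf.random.uniform(shape=(1, 4), name="rand")
    tf.identity(rand + 1.0, name="out")  # an op *after* random_uniform

# Depending on frontend support, this may raise an unsupported-op error or
# return a module whose output shape differs from the TF graph, as reported.
mod, params = relay.frontend.from_tensorflow(graph.as_graph_def(), outputs=["out"])
print(mod["main"])
```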
OK, I think the goal of lowering to ONNX to target related runtimes makes
sense. That does mean we should treat ONNX more as a target than as a
serialization format (where the name "export" makes more sense).