Ok. Thanks for clarification. I will update the PR.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/26) to respond.
I don't think we need to do that. Just like the case of SourceModule, they are
not registered anywhere.
As the code base refactors further, we could introduce it to the target build
when it is clear that the case of ONNX requires the IRModule to contain Relay
functions instead of TIR functions.
@tqchen, just to be on the same page, could you please confirm the below?
We do NOT need to register ONNXModule as "target.build.onnx". If registered this
way, it would get invoked from here when we specify the target as "onnx":
https://github.com/apache/incubator-tvm/blob/2cd987d92724be0f859bfb624ce797f9c7016
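For context, such a registration would look roughly like the sketch below. This is only an illustration of the `target.build.onnx` hook being asked about, using TVM's generic global-function registration; the thread concludes that no such registration is actually needed, and the function body here is just a placeholder.

```python
import tvm


# Hypothetical sketch: hooking an ONNX build function into the target machinery.
# The discussion above concludes this registration is NOT needed (just like
# CSourceModule, which is not registered anywhere).
@tvm._ffi.register_func("target.build.onnx")
def _build_onnx(mod, target):
    # `mod` would be an IRModule; for ONNX it would have to contain Relay
    # functions rather than lowered TIR PrimFuncs. A real implementation would
    # emit an ONNX graph here and wrap it in a runtime.Module.
    raise NotImplementedError("Relay -> ONNX conversion would go here")
```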
We don't have to strictly go through the TIR part, as a target only means
IRModule -> runtime::Module. It is totally fine for a target to take in an
IRModule that contains Relay functions. I agree that it would be useful to have
an ONNXModule as a runtime module.
---
@smallcoscat, Thanks. I also followed this tutorial and was able to create an
ONNX codegen for an external runtime. Relevant code is in this PR:
https://github.com/maheshambule/tvm/pull/9
However, as suggested by @tqchen, when I tried to implement ONNX as a target
(and not as an external codegen), I am facing issues.
Hi @tqchen and @maheshambule,
Refer to the TVM tutorial [Bring Your Own Codegen To
TVM](https://docs.tvm.ai/dev/relay_bring_your_own_codegen.html), which details
how to create a self-defined C source module codegen.
However, since ONNX is not a C source module, we should define an ONNX module
node…
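A rough sketch of the external-codegen route the tutorial describes, registered from Python rather than C++ as in the tutorial's `relay.ext.ccompiler` example. The name "onnx" and the body are placeholders, and the exact signature depends on the TVM version:

```python
import tvm


# Hypothetical Python-side BYOC registration, mirroring the tutorial's C++
# TVM_REGISTER_GLOBAL("relay.ext.ccompiler"). "onnx" is the compiler name the
# partitioned subgraphs would be annotated with.
@tvm._ffi.register_func("relay.ext.onnx")
def _onnx_external_codegen(func):
    # `func` is a partitioned Relay function annotated for the "onnx" compiler.
    # A real implementation would serialize it to an ONNX graph and return a
    # runtime.Module that an ONNX external runtime can load.
    raise NotImplementedError("serialize to ONNX and wrap in a runtime.Module")
```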
@tqchen, I tried to add ONNX as a target, but since the target codegen receives
a lowered IRModule with PrimFunc nodes, I am not able to convert those to ONNX.
However, in the case of external codegen, where lowering is deferred to the
external codegens, I receive an IRModule without PrimFunc nodes and I am able
to convert it.
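To make the distinction concrete, a small (purely illustrative) helper can report what kind of functions an IRModule holds at a given point in the pipeline; the helper name is not part of TVM:

```python
import tvm
from tvm import relay, tir


def describe_functions(mod: tvm.IRModule):
    """Illustrative helper: report what kind of functions an IRModule holds."""
    for gvar, func in mod.functions.items():
        if isinstance(func, relay.Function):
            kind = "Relay function (still convertible to ONNX)"
        elif isinstance(func, tir.PrimFunc):
            kind = "lowered TIR PrimFunc (too low-level for the ONNX converter)"
        else:
            kind = type(func).__name__
        print(gvar.name_hint, "->", kind)
```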
@maheshambule, it seems we have reached consensus. Please feel free to update
the PR to reflect the discussion; we only need to support the conversion but
not the runtime part.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/19) to respond.
@maheshambule Ok, no problem. I will keep creating new operators.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/18) to respond.
@smallcoscat, Thanks. Looking forward to collaborating with you.
I will get my PR with basic coverage into the TVM repo, and then you can send in
your PR as well, so that we increase the overall coverage in terms of ops and
models. Sounds good?
I will need some time to work on the codegen part to implement…
@maheshambule I am very glad we can make some positive contributions to TVM. If
you require any further information, feel free to contact me.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/16) to respond.
Hi @tqchen and @maheshambule,
I'm glad to share our code with you. The below figure is our current flowchart:
[Figure: current Relay-to-ONNX conversion flowchart (attachment link truncated)]
Sure. That makes sense.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/14) to respond.
Given that there are other folks interested in the topic, e.g. @smallcoscat,
perhaps it makes sense to land a version with reasonable coverage, then invite
others to contribute and collaborate.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/13) to respond.
So we will be adding support for the ONNX codegen only.
I will work on adding a codegen for ONNX and then on an example ONNX runtime to
demonstrate end-to-end functionality. I will also be improving operator
coverage for ONNX.
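For the end-to-end demonstration, the runtime side could be as simple as loading the emitted file with onnxruntime. The file name and input shape below are placeholders, since the converter API is not final:

```python
import numpy as np
import onnxruntime as ort

# Assumes `model.onnx` was produced by the Relay -> ONNX codegen discussed here;
# the file name and input shape are illustrative only.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
data = np.random.rand(1, 3, 224, 224).astype("float32")
outputs = sess.run(None, {input_name: data})
print([o.shape for o in outputs])
```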
---
Note that we do not need a runnable runtime to treat ONNX as a target.
Currently we have outputs like CSourceModule that do not have a runnable
runtime.
Regardless of the way the ONNX target feature is used, the final presentation
of the result can always be a variant of runtime module.
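As a plain-Python illustration of that point (not TVM's actual runtime::Module API), an export-only module could look like this:

```python
# Conceptual sketch only: an export-only "module", analogous to how
# CSourceModule carries generated source without a runnable runtime.
class OnnxModuleSketch:
    def __init__(self, onnx_bytes: bytes):
        self._onnx_bytes = onnx_bytes

    def save(self, path: str):
        # Saving/exporting is supported...
        with open(path, "wb") as f:
            f.write(self._onnx_bytes)

    def run(self, *args):
        # ...but execution is not; downstream toolchains or an external
        # ONNX runtime are expected to consume the exported file.
        raise RuntimeError("export-only module, not runnable")
```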
**Option C0:**
The original intention was to use Relay to ONNX as a serialization format only.
**Option C1:**
It seems interesting and can fit naturally in TVM, but I wanted to discuss a
few points below.
First, let me put down the different properties or attributes of a target in
general…
@maheshambule, can you follow up by commenting a bit about C0 and C1?
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/9) to respond.
These suggestions all make sense. I think we should bring in Relay-to-ONNX
support. The only choices we need to discuss so far are:
- C0: put ONNX under the export namespace, which could imply that it is a
serialization format (and all of Relay can serialize to it).
- C1: put ONNX under `target`.
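To make the two options concrete, here is a hypothetical sketch of how each might surface in user code. `relay.export_onnx` is an invented name used only for illustration, while `relay.build` exists today and `target="onnx"` is the part C1 would add:

```python
import tvm
from tvm import relay


def option_c0_export(mod, params):
    # C0: ONNX under an export/serialization namespace.
    # `relay.export_onnx` is a hypothetical name, shown only for shape.
    return relay.export_onnx(mod, params)


def option_c1_target(mod, params):
    # C1: ONNX treated as a build target, reusing the normal build entry point.
    # relay.build exists today; target="onnx" is what this RFC would add.
    return relay.build(mod, target="onnx", params=params)
```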
Hi,
Our team is developing a Relay to ONNX conversion program which currently
contains 66 operators.
You can take our program for reference.
Here are the hyperlinks to the [source
code](https://github.com/itri-tvm/Relay2ONNX/tree/master/python/tvm/relay/frontend)
and [example
code](https://githu
@tqchen Here is a usage scenario that we are thinking about with Relay to ONNX
serialization. Let us say that there is a HW chip vendor, whose model compiler
toolchain already supports ONNX as an input format. Since ONNX is quite limited
in its scope, there is only a small set of models that c…
OK, I think the goal of lowering to ONNX to target related runtimes makes
sense. That does mean we should treat ONNX more as a target, instead of a
serialization format (where the name export would make more sense).
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/5) to respond.
Thanks @tqchen for the comments.
To elaborate more here, support for Relay to ONNX serialization will help us
take advantage of hardware-specific optimizations supported by different
compilers. The ONNX format is widely adopted. If a particular compiler supports
a specific format, support for…
Thanks for the proposal. It would be great to list alternatives with labels
(see example https://discuss.tvm.ai/t/target-and-attributes/6013) and discuss
pros and cons, so others can share their opinions easily.
Given that ONNX is not rich enough to cover all the operators, such a
conversion might…
@jroesch, @tqchen, Regarding the naming convention discussion on the PR, I
agree that 'converter' does not seem to be the correct word. The words you
suggested are either 'export' or 'target'. I think 'export' should be used, as
it is more in line with other DL frameworks. Please let me know.
## Motivation:
We want to port DL models in Relay IR. For that, we want to serialize the
Relay IR to disk. Once serialized, third-party frameworks and compilers should
be able to import it. We want the serialization format to be compact, portable,
widely adopted, and well documented.