Hi @tqchen and @maheshambule,

Refer to the TVM tutorial [Bring Your Own Codegen To TVM](https://docs.tvm.ai/dev/relay_bring_your_own_codegen.html), which details how to create a self-defined C source module codegen.

However, ONNX is not a _C_ source module, so we additionally need to define an ONNX module node for codegen. The following are the steps I took to create the ONNX module codegen.

1. Create _ONNXModuleCodegen_ to traverse the Relay _IRModule_ and convert Relay ops to ONNX ops.
(see 
[codegen.cc](https://github.com/itri-tvm/Relay2ONNX/blob/relay2onnx/src/relay/backend/contrib/onnx/codegen.cc))
```c++
class ONNXModuleCodegen {
 public:
  ONNXModuleCodegen() {}

  runtime::Module CreateONNXModule(const ObjectRef& ref) {
    auto mod = Downcast<IRModule>(ref);
    /* Use String instead of std::string here, because some byte info
     * would be lost when passing a std::string through a PackedFunc.
     */
    String codes = (*to_onnx_)(mod);
    const auto* pf = runtime::Registry::Get("runtime.ONNXModuleCreate");
    CHECK(pf != nullptr) << "Cannot find ONNX module to create the external runtime module";
    return (*pf)(codes, "onnx");
  }

 private:
  /*!
   * \brief The Python function that converts a Relay module to an ONNX module.
   * \return byte array -> String
   */
  const PackedFunc* to_onnx_ = runtime::Registry::Get("tvm.relay.converter.to_onnx");
};
```
Register a global function, "relay.ext.onnx", whose body is a wrapper function that creates the ONNX module.
```c++
runtime::Module ONNXCompiler(const ObjectRef& ref) {
  ONNXModuleCodegen onnx;
  return onnx.CreateONNXModule(ref);
}
TVM_REGISTER_GLOBAL("relay.ext.onnx").set_body_typed(ONNXCompiler);
```
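With this in place, the BYOC flow dispatches any Relay function annotated with `Compiler="onnx"` to `ONNXCompiler` during `relay.build`. Here is a minimal sketch of such an annotation (the operator, shapes, and symbol name are illustrative; the attribute names follow the BYOC convention from the tutorial above):
```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
func = relay.Function([x], relay.nn.relu(x))

# Mark the function as an external function handled by "relay.ext.onnx".
func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
func = func.with_attr("Compiler", "onnx")
func = func.with_attr("global_symbol", "onnx_main_0")  # illustrative name

mod = tvm.IRModule.from_expr(func)
# relay.build(mod, ...) would now route this function to ONNXCompiler.
```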
Instead of writing the op conversions in _C++_, use _register_func_ to register a global function, "tvm.relay.converter.to_onnx", and write the op conversions in _Python_ to convert the Relay module to an ONNX module. (see [converter/onnx.py](https://github.com/itri-tvm/Relay2ONNX/blob/relay2onnx/python/tvm/relay/converter/onnx.py))
```python
@tvm.register_func("tvm.relay.converter.to_onnx")
def convert_to_onnx(model):
    ...
    opset = onnx.defs.onnx_opset_version()  # get the supported opset version
    data = ""
    global_vars = model.get_global_vars()
    for global_var in global_vars:
        func = model[global_var]
        sub_model = tvm.IRModule.from_expr(func.body)
        sub_model = fuse_ops(sub_model)
        func = sub_model["main"]
        graph_name = global_var.name_hint
        # Traverse the Relay function and record the nodes.
        sub_onnx_model = ONNXGenerator({}, opset, graph_name, "").to_onnx(func)
        bytes_data = get_onnx_bytes(sub_onnx_model)
        # Strip the b'...' wrapper of the bytes repr and serialize each
        # subgraph as "graph_name<payload>".
        data += graph_name + "<" + str(bytes_data)[2:-1] + ">"
    return data
```
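For a quick sanity check, the registered converter can also be fetched and invoked directly. A minimal sketch, assuming `mod` is a Relay _IRModule_ you have already constructed (the variable names are illustrative):
```python
import tvm

# Requires the converter module to have been imported so that
# "tvm.relay.converter.to_onnx" is registered.
to_onnx = tvm.get_global_func("tvm.relay.converter.to_onnx")
payload = to_onnx(mod)
# `payload` is the concatenated "graph_name<bytes>" string that
# ONNXModuleCodegen forwards to runtime.ONNXModuleCreate.
print(payload[:80])
```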

2. Create _ONNXModuleNode_, a subclass of _ModuleNode_, to create a specific runtime module.
(see 
[source_module.cc](https://github.com/itri-tvm/Relay2ONNX/blob/relay2onnx/src/target/source/source_module.cc))
```c++
class ONNXModuleNode : public runtime::ModuleNode {
 public:
  ONNXModuleNode(std::string code, std::string fmt)
      : code_(code), fmt_(fmt) {}

  const char* type_key() const { return "onnx"; }

  PackedFunc GetFunction(const std::string& name,
                         const ObjectPtr<Object>& sptr_to_self) final {
    LOG(FATAL) << "The ONNX source module cannot be executed; to get an executable module,"
               << " build TVM with '" << fmt_ << "' runtime support";
    return PackedFunc();
  }

  std::string GetSource(const std::string& format) final { return code_; }
  ...
  void SaveToFile(const std::string& file_name,
                  const std::string& format) final {
    std::string fmt = GetFileFormat(file_name, format);
    // Derive the output folder from the given file name.
    std::string folder;
    size_t pos = file_name.find_last_of("\\/");
    if (pos != std::string::npos) {
      folder = file_name.substr(0, pos + 1);
    } else {
      folder = file_name + "/";
    }
    // code_ holds the subgraphs serialized as "name<bytes>name<bytes>...".
    auto datas = Split(code_, '>');
    if (fmt == "onnx") {
      CHECK_NE(code_.size(), 0);
      std::stringstream ss;
      for (auto data : datas) {
        auto split_data = Split(data, '<');
        // Write each subgraph to "<folder><graph_name>.onnx".
        ss << folder << split_data[0].c_str() << "." << fmt;
        SaveBinaryToFile(ss.str(), ConvertEscape(split_data[1]));
        ss.str("");
        ss.clear();
      }
    } else {
      CHECK_EQ(fmt, fmt_) << "Can only save to format=" << fmt_;
    }
  }
...
runtime::Module ONNXModuleCreate(String code, std::string fmt) {
  /* Use String instead of std::string here, because some byte info
   * would be lost when passing a std::string through a PackedFunc.
   */
  auto n = make_object<ONNXModuleNode>(code, fmt);
  return runtime::Module(n);
}

TVM_REGISTER_GLOBAL("runtime.ONNXModuleCreate")
.set_body_typed(ONNXModuleCreate);
```
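On the Python side, the resulting runtime module behaves like any other `runtime.Module`: `get_source()` returns the raw payload and `save()` dispatches to `SaveToFile`, writing one `.onnx` file per embedded subgraph. A minimal sketch, assuming the module was produced by the codegen above (the payload and output path are illustrative, and the output folder must already exist):
```python
import tvm

# Create a module directly through the registered creator for illustration;
# in practice it comes back from the "relay.ext.onnx" codegen.
create = tvm.get_global_func("runtime.ONNXModuleCreate")
onnx_mod = create("main<...illustrative bytes...>", "onnx")

print(onnx_mod.type_key)      # "onnx"
print(onnx_mod.get_source())  # the raw "name<bytes>" payload
# Writes ./out/main.onnx; the file name only supplies the folder and format.
onnx_mod.save("./out/model.onnx")
```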
3. Create a cmake file for ONNX and include it in CMakeLists.txt. When users want to use the ONNX module codegen, they set _USE_ONNX_CODEGEN_ to "ON" and build the TVM source. (see 
[ONNX.cmake](https://github.com/itri-tvm/Relay2ONNX/blob/relay2onnx/cmake/modules/contrib/ONNX.cmake))
```cmake
if(USE_ONNX_CODEGEN STREQUAL "ON")
  file(GLOB ONNX_RELAY_CONTRIB_SRC src/relay/backend/contrib/onnx/codegen.cc)
  list(APPEND COMPILER_SRCS ${ONNX_RELAY_CONTRIB_SRC})
  message(STATUS "Build with ONNX codegen.")
endif()
```

In addition, I have updated my code to the April 28 version. (see [source code](https://github.com/itri-tvm/Relay2ONNX/tree/relay2onnx) and [example code](https://github.com/itri-tvm/Relay2ONNX/blob/relay2onnx/tests/python/converter/onnx/model_test.py))




