Hey @ziheng! I think this is a great idea. As someone who is pushing on the Rust bindings right now (along with @jroesch), I love the idea of deduplicating work.
One design choice I see is whether to centralize or decentralize code generation. It seems like your original design is leaning towards centralizing it. I would like to start a little discussion on whether (and why) this is the right idea.

Decentralizing code generation could have some benefits; here's how I see it looking. The schemas themselves live in some central location like `/schema`, and they are defined simply as data (perhaps JSON). Each "backend", including C++ and Python, is then responsible for reading this data and generating code for itself. The downside is that there may be some duplicated logic in the code generation. But on the upside, each backend gets to use different tooling for its codegen; for example, it would be nice to use Rust (using [syn](https://docs.rs/syn/) and [quote](https://docs.rs/quote)) to generate the Rust code. This could also simplify the story for implementing additional code for the methods: each backend just handles it itself, with no need to toggle or parse anything.

Here's an example of what the JSON could look like:

```json
[
  {
    "name": "IntImmNode",
    "key": "IntImm",
    "ref": "IntImm",
    "parent": "PrimExprNode",
    "fields": {
      "value": "int64"
    }
  },
  ...
]
```

You could imagine grouping these schemas into namespaces or something too, if you want.

On the topic of checking the generated code in, I'm not sure why that is necessary. As long as the files are generated by the build system, shouldn't autocomplete and such work fine?

---

[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tvm-object-schema-dsl/7930/9) to respond.
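To make the decentralized idea a bit more concrete, here's a minimal sketch (in Python, purely illustrative) of what one backend's pass over such a schema file might look like. The `CPP_TYPES` mapping and the exact class layout are assumptions for the sake of the example; each backend would have its own version of this in whatever tooling suits it:

```python
import json

# Hypothetical mapping from schema field types to C++ types;
# a real backend would cover the full set of TVM field types.
CPP_TYPES = {"int64": "int64_t", "string": "std::string"}

def gen_cpp(schema):
    """Emit a C++ class declaration for each node entry in the schema."""
    out = []
    for node in schema:
        fields = "\n".join(
            f"  {CPP_TYPES[ty]} {name};" for name, ty in node["fields"].items()
        )
        out.append(
            f'class {node["name"]} : public {node["parent"]} {{\n'
            f" public:\n{fields}\n}};"
        )
    return "\n\n".join(out)

# In practice this would be json.load() over the central /schema files.
schema = json.loads("""
[
  {
    "name": "IntImmNode",
    "key": "IntImm",
    "ref": "IntImm",
    "parent": "PrimExprNode",
    "fields": {"value": "int64"}
  }
]
""")

print(gen_cpp(schema))
```

The point is that the only shared contract is the data format itself; the Rust backend could do the same walk with serde plus quote instead of string formatting.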