For
> `relay.op.qnn`, e.g. `relay.op.qnn.conv2d` The `qnn` name is consistent with 
> QNNPack

and
> My hope is that different frameworks converge to same qnn ops.

QNNPACK takes the quantization approach of TensorFlow/TFLite. I think that when 
we talk about an op in this context, it means the quantization arithmetic 
formula itself rather than how it is translated into code, and that formula is 
the same for QNNPACK and TensorFlow/TFLite. So I guess one dialect should be 
enough for both. Also, the hoped-for **converge** seems more plausible if `qnn` 
simply stands for a _generic_ quantized NN dialect, not QNNPACK specifically.
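For context, the shared arithmetic in question is the affine quantization scheme used by TFLite (and adopted by QNNPACK): `real ≈ scale * (q - zero_point)`. A minimal sketch, with illustrative names rather than any actual QNNPACK or TFLite API:

```python
# Affine (asymmetric) quantization as in TFLite/QNNPACK:
#   real_value ≈ scale * (quantized_value - zero_point)
# Names and defaults here are illustrative, assuming uint8 storage.

def quantize(real, scale, zero_point, qmin=0, qmax=255):
    """Map a real value to its quantized uint8 representation."""
    q = round(real / scale) + zero_point
    # Clamp to the representable range of the quantized type.
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate real value from a quantized one."""
    return scale * (q - zero_point)
```

Since both libraries implement this same formula, a single `qnn` dialect describing it would cover both backends.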

Reply to this email directly or view it on GitHub:
https://github.com/dmlc/tvm/issues/2351#issuecomment-507098356
