HI @hjiang,
Sorry for the late response; I've had some other work to do. Thanks for the
proposed solutions. I'll try these implementations with my model and keep you
updated.
Regards
Augusto
---
[Visit Topic](https://discuss.tvm.ai/t/vta-first-conv-layer-optimize/6766/6) to
respond.
Hi @acapone13,
To run the first conv2d layer on VTA, there are two solutions/steps. The first
is to pad the first conv2d from 3 channels up to the channel count the VTA
hardware expects, for example 16; after that we can run the first quantized
conv2d layer on VTA. Of course, the padding increases the compute OP count.
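The channel-padding idea above can be sketched in plain NumPy (this is an illustrative sketch, not the VTA API): zero-padding both the 3-channel input and the weights up to 16 channels leaves the conv2d result unchanged, because the extra channels contribute only zeros to the accumulation.

```python
import numpy as np

def conv2d(x, w):
    # Naive CHW conv2d, stride 1, no spatial padding:
    # x is (C, H, W), w is (O, C, kH, kW)
    C, H, W = x.shape
    O, _, kH, kW = w.shape
    out = np.zeros((O, H - kH + 1, W - kW + 1), dtype=np.int64)
    for o in range(O):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w[o])
    return out

rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=(3, 8, 8)).astype(np.int32)    # 3-channel int8-range input
w = rng.integers(-128, 128, size=(4, 3, 3, 3)).astype(np.int32)

# Pad the channel dimension from 3 to 16 with zeros (input and weights)
x16 = np.zeros((16, 8, 8), dtype=np.int32)
x16[:3] = x
w16 = np.zeros((4, 16, 3, 3), dtype=np.int32)
w16[:, :3] = w

# The padded conv produces the same result as the original 3-channel conv
assert np.array_equal(conv2d(x, w), conv2d(x16, w16))
```

The extra 13 channels of zeros are pure overhead, which is why the padding increases the compute OP count as noted above.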
Hi @hjiang,
I use Sony's framework [NNabla](https://github.com/sony/nnabla) to train the
networks, but I then convert them to ONNX or TensorFlow in order to use them
with TVM. The accuracy loss is around 4%.
Regards
Augusto
---
Hi @acapone13,
Thanks for following up on this post, and nice to know you are interested in
VTA performance optimization. About the ResNet-18 pretrained model: could I
know which framework you used to generate the model, and how much accuracy is
lost after the quantization?
Regards
Hua
Hi @hjiang,
I'm working to deploy a pre-quantized ResNet network on VTA in which the
first conv layer supports int8 inputs/weights. I think it would be an
interesting feature, even though most quantization works avoid quantizing the
first layer. Both ideas are valid, but it would be interesting
Hi There,
VTA's first conv layer currently runs on the CPU and does not get offloaded
to the FPGA. In most cases that is a performance bottleneck and needs
optimization. Following are some ideas about the optimization; please kindly
comment.
Regards
Hua
1. Train the network so that the first conv layer supports int8 inputs/weights.
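To illustrate what "supporting int8" in the first layer means, here is a minimal NumPy sketch of symmetric per-tensor int8 quantization of a weight tensor (an illustrative example, not the TVM/VTA quantization pass; the `quantize_int8` helper is hypothetical):

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: scale maps max |x| to 127
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Example: quantize a float32 conv weight tensor shaped (O, C, kH, kW)
w = np.random.default_rng(1).normal(size=(16, 3, 3, 3)).astype(np.float32)
q, s = quantize_int8(w)

# Dequantize to check the reconstruction error
w_hat = q.astype(np.float32) * s
# Rounding error is bounded by half a quantization step
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

With quantization-aware training, the network learns to tolerate this rounding error in the first layer, so the int8 conv can then be offloaded to VTA instead of falling back to a float conv on the CPU.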