2018-06-03 19:57 GMT+03:00 Pedro Arthur <bygran...@gmail.com>:
> 2018-05-31 12:01 GMT-03:00 Sergey Lavrushkin <dual...@gmail.com>:
> > Hello,
> >
> > This patch introduces a TensorFlow backend for the DNN inference module.
> > This backend uses TensorFlow binary models and requires the model to
> > have an operation named 'x' as its input operation and an operation
> > named 'y' as its output operation. Models are executed using
> > libtensorflow.
>
> Hi,
>
> You added the tf model in dnn_srcnn.h; it seems the data is being
> duplicated, as that header already contains the weights as C float
> arrays. Is it possible to construct the model graph via the C API and
> set the weights using the ones we already have, eliminating the need
> for storing the whole tf model?
Hi,

I think it is possible, but it would require manually creating every
operation and specifying each of its attributes and inputs in the exact
order given by the operation's declaration. Here is the model:
https://drive.google.com/file/d/1s7bW7QnUfmTaYoMLPdYYTOLujqNgRq0J/view?usp=sharing
It is just a lot easier to store the whole model than to construct it
manually.

Another way I can think of is to pass the weights through placeholders
instead of saving them in the model, but that has to be done after the
session is created, not during model loading. Perhaps some init operation
could be specified that assigns the variables to values passed through
placeholders during model loading, if that is possible. But is it really
crucial not to store the whole tf model? It is not that big.
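For reference, a minimal sketch (not part of the patch) of what building
the graph by hand with the libtensorflow C API would involve: each weight
array already present in dnn_srcnn.h gets wrapped into a Const operation,
and each layer op (Conv2D, BiasAdd, ...) is wired up with its attributes
explicitly. The helper names, shapes, strides and the no-op deallocator
below are illustrative only, not the actual SRCNN definition.

#include <tensorflow/c/c_api.h>
#include <stdint.h>

/* The weight data is borrowed from the static arrays in dnn_srcnn.h,
 * so the tensor deallocator must not free anything. */
static void free_nothing(void *data, size_t len, void *arg) {}

/* Wrap an existing float array into a Const operation of the given shape. */
static TF_Operation *add_const_f32(TF_Graph *graph, TF_Status *status,
                                   const char *name, const float *values,
                                   const int64_t *dims, int num_dims)
{
    size_t len = sizeof(float);
    for (int i = 0; i < num_dims; i++)
        len *= dims[i];

    TF_Tensor *tensor = TF_NewTensor(TF_FLOAT, dims, num_dims,
                                     (void *)values, len,
                                     free_nothing, NULL);

    TF_OperationDescription *desc = TF_NewOperation(graph, "Const", name);
    TF_SetAttrTensor(desc, "value", tensor, status);
    TF_SetAttrType(desc, "dtype", TF_FLOAT);
    return TF_FinishOperation(desc, status);
}

/* Wire one convolution layer: out = Conv2D(input, kernel). */
static TF_Operation *add_conv2d(TF_Graph *graph, TF_Status *status,
                                const char *name, TF_Operation *input,
                                TF_Operation *kernel)
{
    TF_OperationDescription *desc = TF_NewOperation(graph, "Conv2D", name);
    TF_AddInput(desc, (TF_Output){input, 0});
    TF_AddInput(desc, (TF_Output){kernel, 0});
    TF_SetAttrType(desc, "T", TF_FLOAT);
    int64_t strides[] = {1, 1, 1, 1};
    TF_SetAttrIntList(desc, "strides", strides, 4);
    TF_SetAttrString(desc, "padding", "SAME", 4);
    return TF_FinishOperation(desc, status);
}

So every attribute ("dtype", "T", "strides", "padding", ...) has to be
spelled out per operation, which is what makes storing the serialized
model the simpler option.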