2018-05-29 4:08 GMT+03:00 Pedro Arthur <bygran...@gmail.com>:
> 2018-05-28 19:52 GMT-03:00 Sergey Lavrushkin <dual...@gmail.com>:
> > 2018-05-28 9:32 GMT+03:00 Guo, Yejun <yejun....@intel.com>:
> >
> >> It looks like no TensorFlow dependency is introduced; a new model
> >> format is created together with some CPU implementation for inference.
> >> With this idea, the Android Neural Networks API would be a very good
> >> reference, see
> >> https://developer.android.google.cn/ndk/guides/neuralnetworks/. It
> >> defines how the model is organized and also provides a CPU-optimized
> >> inference implementation (within the NNAPI runtime, it is open source).
> >> It is still under development but mature enough to run some popular DNN
> >> models with proper performance. We could absorb some of its basic
> >> design. Anyway, just a reference FYI. (btw, I'm not sure about any IP
> >> issues)
> >>
> >
> > The idea was to first introduce something to use when TensorFlow is not
> > available. Here is another patch that introduces the TensorFlow backend.
>
> I think it would be better for reviewing if you send the second patch
> in a new email.
Then we need to push the first patch, I think.

> >
> >> For this patch, I have two comments.
> >>
> >> 1. change from "DNNModel* (*load_default_model)(DNNDefaultModel
> >> model_type);" to "DNNModel* (*load_builtin_model)(DNNBuiltinModel
> >> model_type);"
> >> The DNNModule can be invoked by many filters; default model is a good
> >> name at the filter level, while built-in model is better within the
> >> DNN scope.
> >>
> >> typedef struct DNNModule{
> >>     // Loads model and parameters from given file. Returns NULL if it is not possible.
> >>     DNNModel* (*load_model)(const char* model_filename);
> >>     // Loads one of the default models
> >>     DNNModel* (*load_default_model)(DNNDefaultModel model_type);
> >>     // Executes model with specified input and output. Returns DNN_ERROR otherwise.
> >>     DNNReturnType (*execute_model)(const DNNModel* model);
> >>     // Frees memory allocated for model.
> >>     void (*free_model)(DNNModel** model);
> >> } DNNModule;
> >>
> >> 2. add a new variable 'number' for DNNData/InputParams
> >> As a typical DNN concept, the data shape usually is <number, height,
> >> width, channel> or <number, channel, height, width>, where the last
> >> component is the one whose index changes fastest in memory. We can add
> >> this concept into the API and decide to support <NHWC> or <NCHW> or
> >> both.
> >
> > I did not add the number of elements in a batch because I thought that
> > we would not feed more than one element at once to a network in an
> > ffmpeg filter. But it can be easily added if necessary.
> >
> > So here is the patch that adds the TensorFlow backend together with the
> > previous patch. I forgot to change the include guards from AVUTIL_* to
> > AVFILTER_* in it.
>
> You moved the files from libavutil to libavfilter while it was
> proposed to move them to libavformat.

Not only that, it was also proposed to move it to libavfilter if it is
going to be used only in filters. I do not know if this module is
useful anywhere else besides libavfilter.
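For reference, here is a minimal sketch of how the interface could look if
both of Yejun's suggestions above were applied: load_builtin_model and
DNNBuiltinModel replacing the "default model" naming, and a 'number' (batch)
field in DNNData, assuming an NHWC layout. The DNN_SRCNN value and the
set_input_output callback are assumptions made for illustration, not
necessarily what the actual patch contains.

/* Sketch only: names other than those quoted above are assumptions. */

typedef enum {DNN_SUCCESS, DNN_ERROR} DNNReturnType;

// Built-in models that a backend can construct without an external model
// file. DNN_SRCNN is just an illustrative value.
typedef enum {DNN_SRCNN} DNNBuiltinModel;

// Data blob passed to/from the network, laid out as
// <number, height, width, channel> (NHWC), so the channel index changes
// fastest in memory.
typedef struct DNNData{
    float* data;
    int number, height, width, channels;
} DNNData;

typedef struct DNNModel{
    // Backend-private handle to the loaded model.
    void* model;
    // Sets input and output blobs for the model. Returns DNN_ERROR if the
    // shapes are not supported.
    DNNReturnType (*set_input_output)(void* model, DNNData* input, DNNData* output);
} DNNModel;

typedef struct DNNModule{
    // Loads model and parameters from the given file. Returns NULL if it is not possible.
    DNNModel* (*load_model)(const char* model_filename);
    // Loads one of the built-in models.
    DNNModel* (*load_builtin_model)(DNNBuiltinModel model_type);
    // Executes the model with the previously set input and output. Returns DNN_ERROR on failure.
    DNNReturnType (*execute_model)(const DNNModel* model);
    // Frees memory allocated for the model.
    void (*free_model)(DNNModel** model);
} DNNModule;

Supporting only <NHWC> at first would keep the native backend simple, and it
also matches TensorFlow's default NHWC data format on CPU, so both backends
could share the same DNNData layout.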