2018-08-03 10:11 GMT-03:00 Michael Niedermayer <mich...@niedermayer.cc>:
> On Thu, Aug 02, 2018 at 09:52:42PM +0300, Sergey Lavrushkin wrote:
>> This patch provides on-the-fly generation of default DNN models for the
>> tensorflow backend, which eliminates data duplication for model weights.
>> Also, files with internal weights were replaced with automatically
>> generated files for the models I trained. Scripts for training and
>> generating these files can be found here:
>> https://github.com/HighVoltageRocknRoll/sr
>>
>
> [...]
>> +static TF_Operation* add_conv_layers(TFModel* tf_model, const float** consts, const int64_t** consts_dims,
>> +                                     const int* consts_dims_len, const char** activations,
>> +                                     TF_Operation* input_op, int layers_num)
>> +{
>> +    int i;
>> +    TF_OperationDescription* op_desc;
>> +    TF_Operation* op;
>> +    TF_Operation* transpose_op;
>> +    TF_Output input;
>> +    int64_t strides[] = {1, 1, 1, 1};
>> +    int32_t* transpose_perm;
>> +    TF_Tensor* tensor;
>> +    int64_t transpose_perm_shape[] = {4};
>> +    char name_buffer[256];
>> +
>> +    op_desc = TF_NewOperation(tf_model->graph, "Const", "transpose_perm");
>> +    TF_SetAttrType(op_desc, "dtype", TF_INT32);
>> +    tensor = TF_AllocateTensor(TF_INT32, transpose_perm_shape, 1, 4 * sizeof(int32_t));
>> +    transpose_perm = (int32_t*)TF_TensorData(tensor);
>> +    transpose_perm[0] = 1;
>> +    transpose_perm[1] = 2;
>> +    transpose_perm[2] = 3;
>> +    transpose_perm[3] = 0;
>> +    TF_SetAttrTensor(op_desc, "value", tensor, tf_model->status);
>> +    if (TF_GetCode(tf_model->status) != TF_OK){
>> +        return NULL;
>> +    }
>> +    transpose_op = TF_FinishOperation(op_desc, tf_model->status);
>> +    if (TF_GetCode(tf_model->status) != TF_OK){
>> +        return NULL;
>> +    }
>> +
>> +    input.index = 0;
>> +    for (i = 0; i < layers_num; ++i){
>
>> +        sprintf(name_buffer, "conv_kernel%d", i);
>
> sprintf() should normally not be used as it's too easy to end up
> overwriting the output.
> snprintf() is a safer alternative
>
> [...]
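Here is roughly the snprintf() change I have in mind. This is only a sketch,
not the final patch: format_layer_name() is an illustrative helper name, and
in the actual code the snprintf() call with the truncation check would simply
replace the sprintf() call inside the layer loop.

/* Sketch only: format a per-layer operation name with snprintf() and treat
 * truncation as an error instead of silently overflowing the buffer. */
#include <stdio.h>

static int format_layer_name(char *buf, size_t buf_size, const char *prefix, int layer)
{
    int len = snprintf(buf, buf_size, "%s%d", prefix, layer);
    /* snprintf() returns the length it wanted to write; a negative value or
     * anything >= buf_size means formatting failed or the name was truncated. */
    if (len < 0 || (size_t)len >= buf_size)
        return -1;
    return 0;
}

In add_conv_layers() this would be called as
format_layer_name(name_buffer, sizeof(name_buffer), "conv_kernel", i),
returning NULL from the function on failure, and likewise for any other
generated operation names.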
I may push the patch with proposed changes by tomorrow.