Re: [FFmpeg-devel] [PATCH] avformat/segment: populate empty outer stream extradata from packet

2019-05-28 Thread Gyan



On 27-05-2019 10:06 AM, Gyan wrote:



On 25-05-2019 07:22 PM, Gyan wrote:



On 23-05-2019 06:40 PM, Gyan wrote:



On 21-05-2019 06:59 PM, Gyan wrote:

Fixes playback in QT for me.

Gyan



Ping.


Pong.


Plan to push tomorrow.


Pushed as eae251ead9e380c722dce7ac3f4e97017bff9a7b

Gyan
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] libavcodec/vp9: Fix VP9 dynamic resolution changing decoding on VAAPI.

2019-05-28 Thread Hendrik Leppkes
On Tue, May 28, 2019 at 8:57 AM Yan Wang  wrote:
>
> When the format change, the VAAPI context cannot be destroyed.
> Otherwise, the reference frame surface will lost.
>
> Signed-off-by: Yan Wang 
> ---
>  libavcodec/decode.c | 6 ++
>  1 file changed, 6 insertions(+)
>
> diff --git a/libavcodec/decode.c b/libavcodec/decode.c
> index 6c31166ec2..3eda1dc42c 100644
> --- a/libavcodec/decode.c
> +++ b/libavcodec/decode.c
> @@ -1397,7 +1397,9 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
>
>  for (;;) {
>  // Remove the previous hwaccel, if there was one.
> +#if !CONFIG_VP9_VAAPI_HWACCEL
>  hwaccel_uninit(avctx);
> +#endif
>
>  user_choice = avctx->get_format(avctx, choices);
>  if (user_choice == AV_PIX_FMT_NONE) {
> @@ -1479,7 +1481,11 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
> "missing configuration.\n", desc->name);
>  goto try_again;
>  }
> +#if CONFIG_VP9_VAAPI_HWACCEL
> +if (hw_config->hwaccel && !avctx->hwaccel) {
> +#else
>  if (hw_config->hwaccel) {
> +#endif
>  av_log(avctx, AV_LOG_DEBUG, "Format %s requires hwaccel "
> "initialisation.\n", desc->name);
>  err = hwaccel_init(avctx, hw_config);
> --
> 2.17.2
>

This change feels just wrong. First of all, preprocessors are
absolutely the wrong way to go about this.
Secondly, if the frames need to change size, or surface format, then
this absolutely needs to be called, doesn't it?

- Hendrik

Re: [FFmpeg-devel] [PATCH] libavcodec/vp9: Fix VP9 dynamic resolution changing decoding on VAAPI.

2019-05-28 Thread Yan Wang


On 5/28/2019 3:16 PM, Hendrik Leppkes wrote:

On Tue, May 28, 2019 at 8:57 AM Yan Wang  wrote:

When the format change, the VAAPI context cannot be destroyed.
Otherwise, the reference frame surface will lost.

Signed-off-by: Yan Wang 
---
  libavcodec/decode.c | 6 ++
  1 file changed, 6 insertions(+)

diff --git a/libavcodec/decode.c b/libavcodec/decode.c
index 6c31166ec2..3eda1dc42c 100644
--- a/libavcodec/decode.c
+++ b/libavcodec/decode.c
@@ -1397,7 +1397,9 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)

  for (;;) {
  // Remove the previous hwaccel, if there was one.
+#if !CONFIG_VP9_VAAPI_HWACCEL
  hwaccel_uninit(avctx);
+#endif

  user_choice = avctx->get_format(avctx, choices);
  if (user_choice == AV_PIX_FMT_NONE) {
@@ -1479,7 +1481,11 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
 "missing configuration.\n", desc->name);
  goto try_again;
  }
+#if CONFIG_VP9_VAAPI_HWACCEL
+if (hw_config->hwaccel && !avctx->hwaccel) {
+#else
  if (hw_config->hwaccel) {
+#endif
  av_log(avctx, AV_LOG_DEBUG, "Format %s requires hwaccel "
 "initialisation.\n", desc->name);
  err = hwaccel_init(avctx, hw_config);
--
2.17.2


This change feels just wrong. First of all, preprocessors are
absolutely the wrong way to go about this.


Sorry about this. I am new to FFmpeg development. What approach would be
better? I can refine it.


Secondly, if the frames need to change size, or surface format, then
this absolutely needs to be called, doesn't it?


Per the VP9 spec, the frame resolution can change on a per-frame basis, but
the current frame may still need to reference the previous frame. So
destroying the VAAPI context causes the reference frame surfaces in the
VAAPI driver to be lost.

In fact, this patch is for the issue:

https://github.com/intel/media-driver/issues/629

Its 2nd frame (128x128) references the 1st frame (256x256).

Thanks.

Yan Wang



- Hendrik

Re: [FFmpeg-devel] [PATCH] lavc/vaapi_encode: disable ICQ mode when enabling low power

2019-05-28 Thread Fu, Linjie
> -Original Message-
> From: Li, Zhong
> Sent: Wednesday, May 22, 2019 09:49
> To: FFmpeg development discussions and patches  de...@ffmpeg.org>
> Cc: Fu, Linjie 
> Subject: RE: [FFmpeg-devel] [PATCH] lavc/vaapi_encode: disable ICQ mode
> when enabling low power
> 
> > From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of Linjie Fu
> > Sent: Wednesday, May 22, 2019 4:31 AM
> > To: ffmpeg-devel@ffmpeg.org
> > Cc: Fu, Linjie 
> > Subject: [FFmpeg-devel] [PATCH] lavc/vaapi_encode: disable ICQ mode
> > when enabling low power
> >
> > ICQ mode is not supported in low power mode and should be disabled.
> >
> > For H264, Driver supports RC modes CQP, CBR, VBR, QVBR.
> > For HEVC, Driver supports RC modes CQP, CBR, VBR, ICQ, QVBR.
> >
> > ICQ is not exposed while working on low power mode for h264_vaapi, but
> > will trigger issues for hevc_vaapi.
> >
> > Signed-off-by: Linjie Fu 
> > ---
> > See https://github.com/intel/media-driver/issues/618 for details.
> > And patch for HEVC low power(ICL+):
> > https://github.com/intel-media-ci/ffmpeg/pull/42
> >
> >  libavcodec/vaapi_encode.c | 7 +--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
> > index 2dda451..55ab919 100644
> > --- a/libavcodec/vaapi_encode.c
> > +++ b/libavcodec/vaapi_encode.c
> > @@ -1371,6 +1371,7 @@ static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
> >  // * If bitrate and maxrate are set and have the same value, try CBR.
> >  // * If a bitrate is set, try AVBR, then VBR, then CBR.
> >  // * If no bitrate is set, try ICQ, then CQP.
> > +// * If low power is set, ICQ is not supported.
> >
> >  #define TRY_RC_MODE(mode, fail) do { \
> > rc_mode = &vaapi_encode_rc_modes[mode]; \
> > @@ -1405,7 +1406,8 @@ static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
> >  TRY_RC_MODE(RC_MODE_QVBR, 0);
> >
> >  if (avctx->global_quality > 0) {
> > -TRY_RC_MODE(RC_MODE_ICQ, 0);
> > +if (!ctx->low_power)
> > +TRY_RC_MODE(RC_MODE_ICQ, 0);
> >  TRY_RC_MODE(RC_MODE_CQP, 0);
> >  }
> >
> > @@ -1417,7 +1419,8 @@ static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
> >  TRY_RC_MODE(RC_MODE_VBR, 0);
> >  TRY_RC_MODE(RC_MODE_CBR, 0);
> >  } else {
> > -TRY_RC_MODE(RC_MODE_ICQ, 0);
> > +if (!ctx->low_power)
> > +TRY_RC_MODE(RC_MODE_ICQ, 0);
> 
> Is it possible ICQ mode can be supported in future (new driver/HW version)?
> I would like to see avoid hard-coded workaround.
> If there is any driver limitation, it would be better to query the driver
> capability first and then disable a feature if it is not supported.

You are right; hard-coding should be avoided.
That said, if ICQ mode is not supported in low-power mode, the driver shouldn't
have reported it as supported in the capability query for the LP entrypoint.

- Linjie

[FFmpeg-devel] [PATCH 2/2] doc/filters: update how to generate native model for sr filter

2019-05-28 Thread Guo, Yejun
Signed-off-by: Guo, Yejun 
---
 doc/filters.texi | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 4fdcfe9..75d2a38 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -16538,9 +16538,10 @@ Efficient Sub-Pixel Convolutional Neural Network model (ESPCN).
 See @url{https://arxiv.org/abs/1609.05158}.
 @end itemize
 
-Training scripts as well as scripts for model generation can be found at
+Training scripts as well as scripts for model file (.pb) saving can be found at
 @url{https://github.com/XueweiMeng/sr/tree/sr_dnn_native}. Original repository
-is at @url{https://github.com/HighVoltageRocknRoll/sr.git}.
+is at @url{https://github.com/HighVoltageRocknRoll/sr.git}. Once get the TensorFlow
+model file (.pb), the native model file (.model) can be generated via libavfilter/dnn/python/convert.py
 
 The filter accepts the following options:
 
-- 
2.7.4


[FFmpeg-devel] [PATCH 1/2] libavfilter/dnn: add script to convert TensorFlow model (.pb) to native model (.model)

2019-05-28 Thread Guo, Yejun
For example, given TensorFlow model file espcn.pb,
to generate native model file espcn.model, just run:
python convert.py espcn.pb

In the current implementation, the native model file is generated for a
specific DNN network with hard-coded Python scripts maintained outside of FFmpeg.
For example, the srcnn network used by vf_sr is generated with
https://github.com/HighVoltageRocknRoll/sr/blob/master/generate_header_and_model.py#L85

In this patch, the script is designed as a general solution that
converts a general TensorFlow model .pb file into a .model file. The script
currently contains some tricks to stay compatible with the current
implementation; it will be refined step by step.

The script is also added into the FFmpeg source tree. Many more patches
are expected, and the community needs ownership of it.

Another technical direction would be to do the conversion in C/C++ code within
the FFmpeg source tree. But since the .pb file is organized with Protocol
Buffers, doing this with a small amount of C/C++ code is not easy; see more
discussion at http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244496.html.
So the Python script was chosen.

Signed-off-by: Guo, Yejun 
---
 libavfilter/dnn/python/convert.py |  52 ++
 libavfilter/dnn/python/convert_from_tensorflow.py | 200 ++
 2 files changed, 252 insertions(+)
 create mode 100644 libavfilter/dnn/python/convert.py
 create mode 100644 libavfilter/dnn/python/convert_from_tensorflow.py

diff --git a/libavfilter/dnn/python/convert.py b/libavfilter/dnn/python/convert.py
new file mode 100644
index 000..662b429
--- /dev/null
+++ b/libavfilter/dnn/python/convert.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2019 Guo Yejun
+#
+# This file is part of FFmpeg.
+#
+# FFmpeg is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# FFmpeg is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with FFmpeg; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+# 
==
+
+# verified with Python 3.5.2 on Ubuntu 16.04
+import argparse
+import os
+from convert_from_tensorflow import *
+
+def get_arguments():
+parser = argparse.ArgumentParser(description='generate native mode model with weights from deep learning model')
+parser.add_argument('--outdir', type=str, default='./', help='where to put generated files')
+parser.add_argument('--infmt', type=str, default='tensorflow', help='format of the deep learning model')
+parser.add_argument('infile', help='path to the deep learning model with weights')
+
+return parser.parse_args()
+
+def main():
+args = get_arguments()
+
+if not os.path.isfile(args.infile):
+print('the specified input file %s does not exist' % args.infile)
+exit(1)
+
+if not os.path.exists(args.outdir):
+print('create output directory %s' % args.outdir)
+os.mkdir(args.outdir)
+
+basefile = os.path.split(args.infile)[1]
+basefile = os.path.splitext(basefile)[0]
+outfile = os.path.join(args.outdir, basefile) + '.model'
+
+if args.infmt == 'tensorflow':
+convert_from_tensorflow(args.infile, outfile)
+
+if __name__ == '__main__':
+main()
diff --git a/libavfilter/dnn/python/convert_from_tensorflow.py b/libavfilter/dnn/python/convert_from_tensorflow.py
new file mode 100644
index 000..436ec0e
--- /dev/null
+++ b/libavfilter/dnn/python/convert_from_tensorflow.py
@@ -0,0 +1,200 @@
+# Copyright (c) 2019 Guo Yejun
+#
+# This file is part of FFmpeg.
+#
+# FFmpeg is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# FFmpeg is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with FFmpeg; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+# 
==
+
+import tensorflow as tf
+import numpy as np
+import sys, struct
+
+__all__ = ['convert_from_tensorflow']
+
+# as the first step to be compatible with vf_sr, it is 

Re: [FFmpeg-devel] [PATCH 2/2] doc/filters: update how to generate native model for sr filter

2019-05-28 Thread Gyan



On 28-05-2019 01:31 PM, Guo, Yejun wrote:

Signed-off-by: Guo, Yejun 
---
  doc/filters.texi | 5 +++--
  1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 4fdcfe9..75d2a38 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -16538,9 +16538,10 @@ Efficient Sub-Pixel Convolutional Neural Network model (ESPCN).
  See @url{https://arxiv.org/abs/1609.05158}.
  @end itemize
  
-Training scripts as well as scripts for model generation can be found at

+Training scripts as well as scripts for model file (.pb) saving can be found at

What's wrong with generation'?


  @url{https://github.com/XueweiMeng/sr/tree/sr_dnn_native}. Original repository
-is at @url{https://github.com/HighVoltageRocknRoll/sr.git}.
+is at @url{https://github.com/HighVoltageRocknRoll/sr.git}. Once get the TensorFlow
+model file (.pb), the native model file (.model) can be generated via libavfilter/dnn/python/convert.py

Change to

"Native model files (.model) can be generated from TensorFlow model 
files (.pb) by using libavfilter/dnn/python/convert.py "


and shift to a new line.

Gyan

Re: [FFmpeg-devel] [PATCH 2/2] doc/filters: update how to generate native model for sr filter

2019-05-28 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of Gyan
> Sent: Tuesday, May 28, 2019 4:14 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 2/2] doc/filters: update how to generate
> native model for sr filter
> 
> 
> 
> On 28-05-2019 01:31 PM, Guo, Yejun wrote:
> > Signed-off-by: Guo, Yejun 
> > ---
> >   doc/filters.texi | 5 +++--
> >   1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/doc/filters.texi b/doc/filters.texi
> > index 4fdcfe9..75d2a38 100644
> > --- a/doc/filters.texi
> > +++ b/doc/filters.texi
> > @@ -16538,9 +16538,10 @@ Efficient Sub-Pixel Convolutional Neural Network model (ESPCN).
> >   See @url{https://arxiv.org/abs/1609.05158}.
> >   @end itemize
> >
> > -Training scripts as well as scripts for model generation can be found at
> > +Training scripts as well as scripts for model file (.pb) saving can be found at
> What's wrong with generation'?

Currently, the script generates both model file formats (.pb and .model). The
TensorFlow .pb format will not change again, while the native .model format will
keep changing, especially at this early stage. So it is not good to generate the
.model file there (see more detail in my patch 1 commit log).

Instead, a new script (my patch 1) is added to generate the .model file; the
above script is no longer needed for that.

For me, 'generation' means the dump of the .model file, while 'saving' means the
dump of the .pb file.

> 
> >   @url{https://github.com/XueweiMeng/sr/tree/sr_dnn_native}. Original repository
> > -is at @url{https://github.com/HighVoltageRocknRoll/sr.git}.
> > +is at @url{https://github.com/HighVoltageRocknRoll/sr.git}. Once get the TensorFlow
> > +model file (.pb), the native model file (.model) can be generated via libavfilter/dnn/python/convert.py
> Change to
> 
> "Native model files (.model) can be generated from TensorFlow model
> files (.pb) by using libavfilter/dnn/python/convert.py "
> 
> and shift to a new line.

thanks, will do it.

> 
> Gyan

Re: [FFmpeg-devel] [PATCH] libavcodec/vp9: Fix VP9 dynamic resolution changing decoding on VAAPI.

2019-05-28 Thread Hendrik Leppkes
On Tue, May 28, 2019 at 9:46 AM Yan Wang  wrote:
>
>
> On 5/28/2019 3:16 PM, Hendrik Leppkes wrote:
> > On Tue, May 28, 2019 at 8:57 AM Yan Wang  wrote:
> >> When the format change, the VAAPI context cannot be destroyed.
> >> Otherwise, the reference frame surface will lost.
> >>
> >> Signed-off-by: Yan Wang 
> >> ---
> >>   libavcodec/decode.c | 6 ++
> >>   1 file changed, 6 insertions(+)
> >>
> >> diff --git a/libavcodec/decode.c b/libavcodec/decode.c
> >> index 6c31166ec2..3eda1dc42c 100644
> >> --- a/libavcodec/decode.c
> >> +++ b/libavcodec/decode.c
> >> @@ -1397,7 +1397,9 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
> >>
> >>   for (;;) {
> >>   // Remove the previous hwaccel, if there was one.
> >> +#if !CONFIG_VP9_VAAPI_HWACCEL
> >>   hwaccel_uninit(avctx);
> >> +#endif
> >>
> >>   user_choice = avctx->get_format(avctx, choices);
> >>   if (user_choice == AV_PIX_FMT_NONE) {
> >> @@ -1479,7 +1481,11 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
> >>  "missing configuration.\n", desc->name);
> >>   goto try_again;
> >>   }
> >> +#if CONFIG_VP9_VAAPI_HWACCEL
> >> +if (hw_config->hwaccel && !avctx->hwaccel) {
> >> +#else
> >>   if (hw_config->hwaccel) {
> >> +#endif
> >>   av_log(avctx, AV_LOG_DEBUG, "Format %s requires hwaccel "
> >>  "initialisation.\n", desc->name);
> >>   err = hwaccel_init(avctx, hw_config);
> >> --
> >> 2.17.2
> >>
> > This change feels just wrong. First of all, preprocessors are
> > absolutely the wrong way to go about this.
>
> Sorry for this. I am new guy for ffmpeg development. What way should be
>
> better? I can refine it.
>
> > Secondly, if the frames need to change size, or surface format, then
> > this absolutely needs to be called, doesn't it?
>
> Based on VP9 spec, the frame resolution can be changed per frame. But
> current
>
> frame will need refer to previous frame still. So if destroy the VAAPI
> context, it
>
> will cause reference frame surface in VAAPI driver lost.
>
> In fact, this patch is for the issue:
>
> https://github.com/intel/media-driver/issues/629
>
> its 2nd frame (128x128) will refer to the 1st frame (256x256).
>

This may work if the frame size decreases, but what if it increases?
Then the frame buffers in the pool are too small, and anything could
go wrong.
This won't be an easy issue to solve, and needs very careful design.

- Hendrik

Re: [FFmpeg-devel] [PATCH] libavcodec/vp9: Fix VP9 dynamic resolution changing decoding on VAAPI.

2019-05-28 Thread Yan Wang


On 5/28/2019 4:43 PM, Hendrik Leppkes wrote:

On Tue, May 28, 2019 at 9:46 AM Yan Wang  wrote:


On 5/28/2019 3:16 PM, Hendrik Leppkes wrote:

On Tue, May 28, 2019 at 8:57 AM Yan Wang  wrote:

When the format change, the VAAPI context cannot be destroyed.
Otherwise, the reference frame surface will lost.

Signed-off-by: Yan Wang 
---
   libavcodec/decode.c | 6 ++
   1 file changed, 6 insertions(+)

diff --git a/libavcodec/decode.c b/libavcodec/decode.c
index 6c31166ec2..3eda1dc42c 100644
--- a/libavcodec/decode.c
+++ b/libavcodec/decode.c
@@ -1397,7 +1397,9 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)

   for (;;) {
   // Remove the previous hwaccel, if there was one.
+#if !CONFIG_VP9_VAAPI_HWACCEL
   hwaccel_uninit(avctx);
+#endif

   user_choice = avctx->get_format(avctx, choices);
   if (user_choice == AV_PIX_FMT_NONE) {
@@ -1479,7 +1481,11 @@ int ff_get_format(AVCodecContext *avctx, const enum AVPixelFormat *fmt)
  "missing configuration.\n", desc->name);
   goto try_again;
   }
+#if CONFIG_VP9_VAAPI_HWACCEL
+if (hw_config->hwaccel && !avctx->hwaccel) {
+#else
   if (hw_config->hwaccel) {
+#endif
   av_log(avctx, AV_LOG_DEBUG, "Format %s requires hwaccel "
  "initialisation.\n", desc->name);
   err = hwaccel_init(avctx, hw_config);
--
2.17.2


This change feels just wrong. First of all, preprocessors are
absolutely the wrong way to go about this.

Sorry for this. I am new guy for ffmpeg development. What way should be

better? I can refine it.


Secondly, if the frames need to change size, or surface format, then
this absolutely needs to be called, doesn't it?

Based on VP9 spec, the frame resolution can be changed per frame. But
current

frame will need refer to previous frame still. So if destroy the VAAPI
context, it

will cause reference frame surface in VAAPI driver lost.

In fact, this patch is for the issue:

https://github.com/intel/media-driver/issues/629

its 2nd frame (128x128) will refer to the 1st frame (256x256).


This may work if the frame size decreases, but what if it increases?
Then the frame buffers in the pool are too small, and anything could
go wrong.
This won't be an easy issue to solve, and needs very careful design.


Agreed. I should have tagged this patch [RFC].

I can investigate FFmpeg's frame buffer management and submit a new patch
covering this situation.


Thanks for comments.

Yan Wang



- Hendrik

Re: [FFmpeg-devel] [PATCH 1/2] libavfilter/dnn: add script to convert TensorFlow model (.pb) to native model (.model)

2019-05-28 Thread Liu Steven


> On 28 May 2019, at 16:01, Guo, Yejun  wrote:
> 
> For example, given TensorFlow model file espcn.pb,
> to generate native model file espcn.model, just run:
> python convert.py espcn.pb
> 
> In current implementation, the native model file is generated for
> specific dnn network with hard-code python scripts maintained out of ffmpeg.
> For example, srcnn network used by vf_sr is generated with
> https://github.com/HighVoltageRocknRoll/sr/blob/master/generate_header_and_model.py#L85
> 
> In this patch, the script is designed as a general solution which
> converts general TensorFlow model .pb file into .model file. The script
> now has some tricky to be compatible with current implemention, will
> be refined step by step.
> 
> The script is also added into ffmpeg source tree. It is expected there
> will be many more patches and community needs the ownership of it.
> 
> Another technical direction is to do the conversion in c/c++ code within
> ffmpeg source tree. While .pb file is organized with protocol buffers,
> it is not easy to do such work with tiny c/c++ code, see more discussion
> at http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244496.html. So,
> choose the python script.
> 
> Signed-off-by: Guo, Yejun 
> ---
> libavfilter/dnn/python/convert.py |  52 ++
> libavfilter/dnn/python/convert_from_tensorflow.py | 200 ++
What about moving them into ./tools/?

> 2 files changed, 252 insertions(+)
> create mode 100644 libavfilter/dnn/python/convert.py
> create mode 100644 libavfilter/dnn/python/convert_from_tensorflow.py
> 
> diff --git a/libavfilter/dnn/python/convert.py b/libavfilter/dnn/python/convert.py
> new file mode 100644
> index 000..662b429
> --- /dev/null
> +++ b/libavfilter/dnn/python/convert.py
> @@ -0,0 +1,52 @@
> +# Copyright (c) 2019 Guo Yejun
> +#
> +# This file is part of FFmpeg.
> +#
> +# FFmpeg is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +#
> +# FFmpeg is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +# Lesser General Public License for more details.
> +#
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with FFmpeg; if not, write to the Free Software
> +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 
> USA
> +# 
> ==
> +
> +# verified with Python 3.5.2 on Ubuntu 16.04
> +import argparse
> +import os
> +from convert_from_tensorflow import *
> +
> +def get_arguments():
> +parser = argparse.ArgumentParser(description='generate native mode model with weights from deep learning model')
> +parser.add_argument('--outdir', type=str, default='./', help='where to put generated files')
> +parser.add_argument('--infmt', type=str, default='tensorflow', help='format of the deep learning model')
> +parser.add_argument('infile', help='path to the deep learning model with weights')
> +
> +return parser.parse_args()
> +
> +def main():
> +args = get_arguments()
> +
> +if not os.path.isfile(args.infile):
> +print('the specified input file %s does not exist' % args.infile)
> +exit(1)
> +
> +if not os.path.exists(args.outdir):
> +print('create output directory %s' % args.outdir)
> +os.mkdir(args.outdir)
> +
> +basefile = os.path.split(args.infile)[1]
> +basefile = os.path.splitext(basefile)[0]
> +outfile = os.path.join(args.outdir, basefile) + '.model'
> +
> +if args.infmt == 'tensorflow':
> +convert_from_tensorflow(args.infile, outfile)
> +
> +if __name__ == '__main__':
> +main()
> diff --git a/libavfilter/dnn/python/convert_from_tensorflow.py b/libavfilter/dnn/python/convert_from_tensorflow.py
> new file mode 100644
> index 000..436ec0e
> --- /dev/null
> +++ b/libavfilter/dnn/python/convert_from_tensorflow.py
> @@ -0,0 +1,200 @@
> +# Copyright (c) 2019 Guo Yejun
> +#
> +# This file is part of FFmpeg.
> +#
> +# FFmpeg is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +#
> +# FFmpeg is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +# Lesser General Public License for more details.
> +#
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with FFmpeg; if not, write to the Free Software
> +# Foundatio

[FFmpeg-devel] [PATCH] libavformat/qsvenc: repeat mpeg2 missing headers [v2]

2019-05-28 Thread Andreas Håkon
Hi,

This patch supersedes #13105 (https://patchwork.ffmpeg.org/patch/13105/)

This is a new (simpler and more robust) implementation of the reinsertion of
the missing headers into the MPEG-2 bitstream from the HW QSV encoder.

The problem is quite simple: The bitstream generated by the MPEG-2 QSV
encoder only incorporates the SEQ_START_CODE and EXT_START_CODE
headers in the first GOP. This generates a result that is not suitable for
streaming or broadcasting.
With this patch the "mpeg2_qsv" encoder is on par with "mpeg2video", as the
software implementation repeats these headers in each GOP by default.

Regards.
A.H.

---From 8a139ea03b0445e3057122577126f3a48744fe29 Mon Sep 17 00:00:00 2001
From: Andreas Hakon 
Date: Tue, 28 May 2019 11:27:05 +0100
Subject: [PATCH] libavformat/qsvenc: repeat mpeg2 missing headers [v2]

The current implementation of the QSV MPEG-2 HW encoder writes the MPEG-2
sequence headers in-band only once, in the first GOP of the stream. This
behavior generates a bitstream that is not suitable for streaming or
broadcasting.
This patch resolves the problem by storing the headers during the
configuration phase and reinserting them into each GOP when necessary.

Signed-off-by: Andreas Hakon 
---
 libavcodec/qsvenc.c |   27 +++
 libavcodec/qsvenc.h |3 +++
 2 files changed, 30 insertions(+)

diff --git a/libavcodec/qsvenc.c b/libavcodec/qsvenc.c
index 8dbad71..63ee198 100644
--- a/libavcodec/qsvenc.c
+++ b/libavcodec/qsvenc.c
@@ -859,6 +859,15 @@ static int qsv_retrieve_enc_params(AVCodecContext *avctx, QSVEncContext *q)
 
 q->packet_size = q->param.mfx.BufferSizeInKB * q->param.mfx.BRCParamMultiplier * 1000;
 
+if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
+av_log(avctx, AV_LOG_DEBUG, "Reading MPEG-2 initial Sequence Sections (SPSBuffer:%d)\n", extradata.SPSBufSize);
+q->add_headers = av_malloc(extradata.SPSBufSize);
+if (!q->add_headers)
+return AVERROR(ENOMEM);
+q->add_headers_size = extradata.SPSBufSize;
+memcpy(q->add_headers, extradata.SPSBuffer, q->add_headers_size);
+}
+
 if (!extradata.SPSBufSize || (need_pps && !extradata.PPSBufSize)
 #if QSV_HAVE_CO_VPS
 || (q->hevc_vps && !extradata_vps.VPSBufSize)
@@ -999,6 +1008,7 @@ int ff_qsv_enc_init(AVCodecContext *avctx, QSVEncContext *q)
 int ret;
 
 q->param.AsyncDepth = q->async_depth;
+q->add_headers_size = 0;
 
 q->async_fifo = av_fifo_alloc(q->async_depth * qsv_fifo_item_size());
 if (!q->async_fifo)
@@ -1437,6 +1447,20 @@ int ff_qsv_encode(AVCodecContext *avctx, QSVEncContext *q,
 ret = MFXVideoCORE_SyncOperation(q->session, *sync, 1000);
 } while (ret == MFX_WRN_IN_EXECUTION);
 
+if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO && q->add_headers_size > 0 &&
+(bs->FrameType & MFX_FRAMETYPE_I || bs->FrameType & MFX_FRAMETYPE_IDR || bs->FrameType & MFX_FRAMETYPE_xI || bs->FrameType & MFX_FRAMETYPE_xIDR)) {
+if (bs->Data[0] == 0x00 && bs->Data[1] == 0x00 && bs->Data[2] == 0x01 && bs->Data[3] != 0xb3) {
+av_log(avctx, AV_LOG_DEBUG, "Missing MPEG-2 Sequence Sections, reinsertion required\n");
+if (q->add_headers_size + bs->DataLength <= q->packet_size) {
+memmove(new_pkt.data + q->add_headers_size, new_pkt.data, bs->DataLength);
+memcpy(new_pkt.data, q->add_headers, q->add_headers_size);
+bs->DataLength += q->add_headers_size;
+} else {
+av_log(avctx, AV_LOG_WARNING, "Insufficient spacing to reinsert MPEG-2 Sequence Sections");
+}
+}
+}
+
 new_pkt.dts  = av_rescale_q(bs->DecodeTimeStamp, (AVRational){1, 
9}, avctx->time_base);
 new_pkt.pts  = av_rescale_q(bs->TimeStamp,   (AVRational){1, 
9}, avctx->time_base);
 new_pkt.size = bs->DataLength;
@@ -1545,5 +1569,8 @@ int ff_qsv_enc_close(AVCodecContext *avctx, QSVEncContext 
*q)
 
 av_freep(&q->extparam);
 
+if (q->add_headers_size > 0)
+av_freep(&q->add_headers);
+
 return 0;
 }
diff --git a/libavcodec/qsvenc.h b/libavcodec/qsvenc.h
index f2f4d38..ee81c51 100644
--- a/libavcodec/qsvenc.h
+++ b/libavcodec/qsvenc.h
@@ -177,6 +177,9 @@ typedef struct QSVEncContext {
 int low_power;
 int gpb;
 
+int add_headers_size;
+uint8_t *add_headers;
+
 int a53_cc;
 
 #if QSV_HAVE_MF
-- 
1.7.10.4
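The reinsertion hunk above boils down to a start-code check plus a buffer prepend. The sketch below shows that idea in isolation; the helper name, buffer layout, and return convention are illustrative assumptions, not FFmpeg code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the patch's logic: an MPEG-2 keyframe access unit should
 * begin with a sequence header start code (00 00 01 B3). If it begins
 * with some other start code instead, the sequence/extension headers
 * saved at init time are prepended. Hypothetical helper, not FFmpeg API.
 * Returns the new payload length, or -1 if the buffer has no room. */
static long prepend_seq_headers(uint8_t *pkt, size_t len, size_t cap,
                                const uint8_t *hdrs, size_t hdrs_len)
{
    int missing = len >= 4 &&
                  pkt[0] == 0x00 && pkt[1] == 0x00 && pkt[2] == 0x01 &&
                  pkt[3] != 0xb3;              /* 0xb3 = SEQ_START_CODE */

    if (!missing)
        return (long)len;
    if (len + hdrs_len > cap)
        return -1;                             /* insufficient space */

    memmove(pkt + hdrs_len, pkt, len);         /* shift payload right */
    memcpy(pkt, hdrs, hdrs_len);               /* saved headers in front */
    return (long)(len + hdrs_len);
}
```

This mirrors the `memmove`/`memcpy` pair in the patch, including the "insufficient space" bail-out when the headers would overflow the allocated packet.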

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] libavformat/qsvenc: fix mpeg2 missing headers

2019-05-28 Thread Andreas Håkon
Hi Andreas (Rheinhardt) and Reimar,

Thank you for your comments. You're right and the code needs
refactoring.
I prepared a new implementation, please review it:
https://patchwork.ffmpeg.org/patch/13316/

In any case, please note that the software encoder "mpeg2video"
always writes the SEQ_START_CODE and EXT_START_CODE in
each GOP. So this isn't an optional functionality.

Furthermore, at least as far as I know, it's impossible to
reinsert these missing headers with the dump_extra bitstream filter.

Regards.
A.H.

---


Re: [FFmpeg-devel] [PATCH] libavformat/mpegtsenc: adaptive alignment for teletext PES packets

2019-05-28 Thread Andreas Håkon
Hi,

‐‐‐ Original Message ‐‐‐
On Wednesday, 22 de May de 2019 17:04, Andreas Håkon 
 wrote:

> Patch to generate aligned Teletext PES packets using the MPEG-TS muxer
> when the TS header contains data.

Can someone please tell me who is the official maintainer of the MPEG-TS muxer?
I'm still waiting for comments regarding this patch.
We've been using it for a long time without problems.

Regards.
A.H.

---

Re: [FFmpeg-devel] [PATCH] libavformat/mpegtsenc: adaptive alignment for teletext PES packets

2019-05-28 Thread Andreas Rheinhardt
Andreas Håkon:
> Hi,
> 
> ‐‐‐ Original Message ‐‐‐
> On Wednesday, 22 de May de 2019 17:04, Andreas Håkon 
>  wrote:
> 
>> Patch to generate aligned Teletext PES packets using the MPEG-TS muxer
>> when the TS header contains data.
> 
> Can someone please tell me who is the official maintainer of the MPEG-TS 
> muxer?
> I'm still waiting for comments regarding this patch.
> We've been using it for a long time without problems.
Just take a look into the MAINTAINERS file:

mpegts.c  Marton Balint
mpegtsenc.c   Baptiste Coudurier

- Andreas

Re: [FFmpeg-devel] [PATCH 1/2] libavfilter/dnn: add script to convert TensorFlow model (.pb) to native model (.model)

2019-05-28 Thread Guo, Yejun


> -Original Message-
> From: ffmpeg-devel [mailto:ffmpeg-devel-boun...@ffmpeg.org] On Behalf Of
> Liu Steven
> Sent: Tuesday, May 28, 2019 6:00 PM
> To: FFmpeg development discussions and patches 
> Cc: Liu Steven ; Guo, Yejun 
> Subject: Re: [FFmpeg-devel] [PATCH 1/2] libavfilter/dnn: add script to convert
> TensorFlow model (.pb) to native model (.model)
> 
> 
> 
> > 在 2019年5月28日,下午4:01,Guo, Yejun  写
> 道:
> >
> > For example, given TensorFlow model file espcn.pb,
> > to generate native model file espcn.model, just run:
> > python convert.py espcn.pb
> >
> > In current implementation, the native model file is generated for
> > specific dnn network with hard-code python scripts maintained out of ffmpeg.
> > For example, srcnn network used by vf_sr is generated with
> >
> https://github.com/HighVoltageRocknRoll/sr/blob/master/generate_header_a
> nd_model.py#L85
> >
> > In this patch, the script is designed as a general solution which
> > converts general TensorFlow model .pb file into .model file. The script
> > now has some tricky to be compatible with current implemention, will
> > be refined step by step.
> >
> > The script is also added into ffmpeg source tree. It is expected there
> > will be many more patches and community needs the ownership of it.
> >
> > Another technical direction is to do the conversion in c/c++ code within
> > ffmpeg source tree. While .pb file is organized with protocol buffers,
> > it is not easy to do such work with tiny c/c++ code, see more discussion
> > at http://ffmpeg.org/pipermail/ffmpeg-devel/2019-May/244496.html. So,
> > choose the python script.
> >
> > Signed-off-by: Guo, Yejun 
> > ---
> > libavfilter/dnn/python/convert.py |  52 ++
> > libavfilter/dnn/python/convert_from_tensorflow.py | 200
> ++
> What about move them into ./tools/ ?

Yes, this is another feasible option. My idea is to put all the dnn stuff
together; the other dnn .h/.c files will be at libavfilter/dnn/

> 
> > 2 files changed, 252 insertions(+)
> > create mode 100644 libavfilter/dnn/python/convert.py
> > create mode 100644 libavfilter/dnn/python/convert_from_tensorflow.py
> >
> > diff --git a/libavfilter/dnn/python/convert.py
> b/libavfilter/dnn/python/convert.py
> > new file mode 100644
> > index 000..662b429
> > --- /dev/null
> > +++ b/libavfilter/dnn/python/convert.py
> > @@ -0,0 +1,52 @@
> > +# Copyright (c) 2019 Guo Yejun
> > +#
> > +# This file is part of FFmpeg.
> > +#
> > +# FFmpeg is free software; you can redistribute it and/or
> > +# modify it under the terms of the GNU Lesser General Public
> > +# License as published by the Free Software Foundation; either
> > +# version 2.1 of the License, or (at your option) any later version.
> > +#
> > +# FFmpeg is distributed in the hope that it will be useful,
> > +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> GNU
> > +# Lesser General Public License for more details.
> > +#
> > +# You should have received a copy of the GNU Lesser General Public
> > +# License along with FFmpeg; if not, write to the Free Software
> > +# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301
> USA
> > +#
> 
> ==
> > +
> > +# verified with Python 3.5.2 on Ubuntu 16.04
> > +import argparse
> > +import os
> > +from convert_from_tensorflow import *
> > +
> > +def get_arguments():
> > +parser = argparse.ArgumentParser(description='generate native mode
> model with weights from deep learning model')
> > +parser.add_argument('--outdir', type=str, default='./', help='where to
> put generated files')
> > +parser.add_argument('--infmt', type=str, default='tensorflow',
> help='format of the deep learning model')
> > +parser.add_argument('infile', help='path to the deep learning model
> with weights')
> > +
> > +return parser.parse_args()
> > +
> > +def main():
> > +args = get_arguments()
> > +
> > +if not os.path.isfile(args.infile):
> > +print('the specified input file %s does not exist' % args.infile)
> > +exit(1)
> > +
> > +if not os.path.exists(args.outdir):
> > +print('create output directory %s' % args.outdir)
> > +os.mkdir(args.outdir)
> > +
> > +basefile = os.path.split(args.infile)[1]
> > +basefile = os.path.splitext(basefile)[0]
> > +outfile = os.path.join(args.outdir, basefile) + '.model'
> > +
> > +if args.infmt == 'tensorflow':
> > +convert_from_tensorflow(args.infile, outfile)
> > +
> > +if __name__ == '__main__':
> > +main()
> > diff --git a/libavfilter/dnn/python/convert_from_tensorflow.py
> b/libavfilter/dnn/python/convert_from_tensorflow.py
> > new file mode 100644
> > index 000..436ec0e
> > --- /dev/null
> > +++ b/libavfilter/dnn/python/convert_from_tensorflow.py
> > @@ -0,0 +1,200 @@
> > +# Copyright (c) 2019 Guo Yejun
> > +#
> 

Re: [FFmpeg-devel] [PATCH] libavformat/mpegtsenc: adaptive alignment for teletext PES packets

2019-05-28 Thread Andreas Håkon
Hi A. Rheinhardt,


‐‐‐ Original Message ‐‐‐
On Tuesday, 28 de May de 2019 13:24, Andreas Rheinhardt 
 wrote:

> > Can someone please tell me who is the official maintainer of the MPEG-TS 
> > muxer?
> > I'm still waiting for comments regarding this patch.
> > We've been using it for a long time without problems.
>
> Just take a look into the MAINTAINERS file:
>
> mpegts.c Marton Balint
> mpegtsenc.c Baptiste Coudurier

I did that! More than two weeks ago I sent a private message to B. Coudurier's
(free.fr) mail address.
However, I think that address doesn't work.

I hope someone can review and merge this patch.
Sorry for disturbing you.

Regards.
A.H.

---


Re: [FFmpeg-devel] [PATCH] libavformat/mpegtsenc: adaptive alignment for teletext PES packets

2019-05-28 Thread Andreas Rheinhardt
Andreas Håkon:
> Hi A. Rheinhardt,
> 
> 
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, 28 de May de 2019 13:24, Andreas Rheinhardt 
>  wrote:
> 
>>> Can someone please tell me who is the official maintainer of the MPEG-TS 
>>> muxer?
>>> I'm still waiting for comments regarding this patch.
>>> We've been using it for a long time without problems.
>>
>> Just take a look into the MAINTAINERS file:
>>
>> mpegts.c Marton Balint
>> mpegtsenc.c Baptiste Coudurier
> 
> I do that! More than two weeks ago I send a private message to B. Coudurier 
> (free.fr) mail.
> However, I think that address doesn't work.
His emails to this list are sent from baptiste.coudur...@gmail.com.

- Andreas


Re: [FFmpeg-devel] [PATCH v3 1/2] lavf/vf_transpose: add exif orientation support

2019-05-28 Thread Paul B Mahol
On 5/28/19, Jun Li  wrote:
> Add exif orientation support and expose an option.
> ---
>  libavfilter/hflip.h|   3 +
>  libavfilter/vf_hflip.c |  43 ++---
>  libavfilter/vf_transpose.c | 173 +
>  3 files changed, 171 insertions(+), 48 deletions(-)
>
> diff --git a/libavfilter/hflip.h b/libavfilter/hflip.h
> index 204090dbb4..3d837f01b0 100644
> --- a/libavfilter/hflip.h
> +++ b/libavfilter/hflip.h
> @@ -35,5 +35,8 @@ typedef struct FlipContext {
>
>  int ff_hflip_init(FlipContext *s, int step[4], int nb_planes);
>  void ff_hflip_init_x86(FlipContext *s, int step[4], int nb_planes);
> +int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink);
> +int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int
> job, int nb_jobs, int vlifp);
> +
>
>  #endif /* AVFILTER_HFLIP_H */
> diff --git a/libavfilter/vf_hflip.c b/libavfilter/vf_hflip.c
> index b77afc77fc..84fd8975b1 100644
> --- a/libavfilter/vf_hflip.c
> +++ b/libavfilter/vf_hflip.c
> @@ -37,6 +37,7 @@
>  #include "libavutil/intreadwrite.h"
>  #include "libavutil/imgutils.h"
>
> +
>  static const AVOption hflip_options[] = {
>  { NULL }
>  };
> @@ -125,9 +126,8 @@ static void hflip_qword_c(const uint8_t *ssrc, uint8_t
> *ddst, int w)
>  dst[j] = src[-j];
>  }
>
> -static int config_props(AVFilterLink *inlink)
> +int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink)
>  {
> -FlipContext *s = inlink->dst->priv;
>  const AVPixFmtDescriptor *pix_desc =
> av_pix_fmt_desc_get(inlink->format);
>  const int hsub = pix_desc->log2_chroma_w;
>  const int vsub = pix_desc->log2_chroma_h;
> @@ -140,10 +140,15 @@ static int config_props(AVFilterLink *inlink)
>  s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h,
> vsub);
>
>  nb_planes = av_pix_fmt_count_planes(inlink->format);
> -
>  return ff_hflip_init(s, s->max_step, nb_planes);
>  }
>
> +static int config_props(AVFilterLink *inlink)
> +{
> +FlipContext *s = inlink->dst->priv;
> +return ff_hflip_config_props(s, inlink);
> +}
> +
>  int ff_hflip_init(FlipContext *s, int step[4], int nb_planes)
>  {
>  int i;
> @@ -170,14 +175,10 @@ typedef struct ThreadData {
>  AVFrame *in, *out;
>  } ThreadData;
>
> -static int filter_slice(AVFilterContext *ctx, void *arg, int job, int
> nb_jobs)
> +int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int
> job, int nb_jobs, int vflip)
>  {
> -FlipContext *s = ctx->priv;
> -ThreadData *td = arg;
> -AVFrame *in = td->in;
> -AVFrame *out = td->out;
>  uint8_t *inrow, *outrow;
> -int i, plane, step;
> +int i, plane, step, outlinesize;
>
>  for (plane = 0; plane < 4 && in->data[plane] && in->linesize[plane];
> plane++) {
>  const int width  = s->planewidth[plane];
> @@ -187,19 +188,35 @@ static int filter_slice(AVFilterContext *ctx, void
> *arg, int job, int nb_jobs)
>
>  step = s->max_step[plane];
>
> -outrow = out->data[plane] + start * out->linesize[plane];
> -inrow  = in ->data[plane] + start * in->linesize[plane] + (width -
> 1) * step;
> +if (vflip) {
> +outrow = out->data[plane] + (height - start - 1)*
> out->linesize[plane];
> +outlinesize = -out->linesize[plane];
> +} else {
> +outrow = out->data[plane] + start * out->linesize[plane];
> +outlinesize = out->linesize[plane];
> +}
> +
> +inrow = in->data[plane] + start * in->linesize[plane] +  (width -
> 1) * step;
> +
>  for (i = start; i < end; i++) {
>  s->flip_line[plane](inrow, outrow, width);
>
>  inrow  += in ->linesize[plane];
> -outrow += out->linesize[plane];
> +outrow += outlinesize;
>  }
>  }
> -
>  return 0;
>  }
>
> +static int filter_slice(AVFilterContext *ctx, void *arg, int job, int
> nb_jobs)
> +{
> +FlipContext *s = ctx->priv;
> +ThreadData *td = arg;
> +AVFrame *in = td->in;
> +AVFrame *out = td->out;
> +return ff_hflip_filter_slice(s, in, out, job, nb_jobs, 0);
> +}
> +
>  static int filter_frame(AVFilterLink *inlink, AVFrame *in)
>  {
>  AVFilterContext *ctx  = inlink->dst;
> diff --git a/libavfilter/vf_transpose.c b/libavfilter/vf_transpose.c
> index dd54947bd9..9cc07a1e6f 100644
> --- a/libavfilter/vf_transpose.c
> +++ b/libavfilter/vf_transpose.c
> @@ -39,6 +39,7 @@
>  #include "internal.h"
>  #include "video.h"
>  #include "transpose.h"
> +#include "hflip.h"
>
>  typedef struct TransVtable {
>  void (*transpose_8x8)(uint8_t *src, ptrdiff_t src_linesize,
> @@ -48,16 +49,24 @@ typedef struct TransVtable {
>  int w, int h);
>  } TransVtable;
>
> -typedef struct TransContext {
> -const AVClass *class;
> +typedef struct TransContextData {
>  int hsub, vsub;
>  int planes;
>  int pixsteps[4];
> +TransVtable vtables[4];
> +} TransContextData;
>
> +typedef struct 

Re: [FFmpeg-devel] [PATCH v3 2/2] fftools/ffmpeg: add exif orientation support per frame's metadata

2019-05-28 Thread Paul B Mahol
On 5/28/19, Jun Li  wrote:
> Fix #6945
> Rotate or/and flip frame according to frame's metadata orientation
> ---
>  fftools/ffmpeg.c| 16 +++-
>  fftools/ffmpeg.h|  3 ++-
>  fftools/ffmpeg_filter.c | 28 +++-
>  3 files changed, 40 insertions(+), 7 deletions(-)
>
> diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
> index 01f04103cf..2f4229a9d0 100644
> --- a/fftools/ffmpeg.c
> +++ b/fftools/ffmpeg.c
> @@ -2126,6 +2126,19 @@ static int ifilter_has_all_input_formats(FilterGraph
> *fg)
>  return 1;
>  }
>
> +static int orientation_need_update(InputFilter *ifilter, AVFrame *frame)
> +{
> +int orientaion = get_frame_orientation(frame);
> +int filterst = ifilter->orientation <= 1 ? 0 : // not set
> +   ifilter->orientation <= 4 ? 1 : // auto flip
> +   2; // auto transpose
> +int framest = orientaion <= 1 ? 0 : // not set
> +  orientaion <= 4 ? 1 : // auto flip
> +  2; // auto transpose
> +
> +return filterst != framest;
> +}
> +
>  static int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame)
>  {
>  FilterGraph *fg = ifilter->graph;
> @@ -2142,7 +2155,8 @@ static int ifilter_send_frame(InputFilter *ifilter,
> AVFrame *frame)
>  break;
>  case AVMEDIA_TYPE_VIDEO:
>  need_reinit |= ifilter->width  != frame->width ||
> -   ifilter->height != frame->height;
> +   ifilter->height != frame->height ||
> +   orientation_need_update(ifilter, frame);
>  break;
>  }
>
> diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
> index eb1eaf6363..54532ef0eb 100644
> --- a/fftools/ffmpeg.h
> +++ b/fftools/ffmpeg.h
> @@ -244,7 +244,7 @@ typedef struct InputFilter {
>  // parameters configured for this input
>  int format;
>
> -int width, height;
> +int width, height, orientation;
>  AVRational sample_aspect_ratio;
>
>  int sample_rate;
> @@ -649,6 +649,7 @@ int init_complex_filtergraph(FilterGraph *fg);
>  void sub2video_update(InputStream *ist, AVSubtitle *sub);
>
>  int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame
> *frame);
> +int get_frame_orientation(const AVFrame* frame);
>
>  int ffmpeg_parse_options(int argc, char **argv);
>
> diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
> index 72838de1e2..ff63540906 100644
> --- a/fftools/ffmpeg_filter.c
> +++ b/fftools/ffmpeg_filter.c
> @@ -743,6 +743,18 @@ static int sub2video_prepare(InputStream *ist,
> InputFilter *ifilter)
>  return 0;
>  }
>
> +int get_frame_orientation(const AVFrame *frame)
> +{
> +AVDictionaryEntry *entry = NULL;
> +int orientation = 0;
> +
> +// read exif orientation data
> +entry = av_dict_get(frame->metadata, "Orientation", NULL, 0);
> +if (entry)
> +orientation = atoi(entry->value);
> +return orientation;
> +}
> +
>  static int configure_input_video_filter(FilterGraph *fg, InputFilter
> *ifilter,
>  AVFilterInOut *in)
>  {
> @@ -809,13 +821,18 @@ static int configure_input_video_filter(FilterGraph
> *fg, InputFilter *ifilter,
>  if (ist->autorotate) {
>  double theta = get_rotation(ist->st);
>
> -if (fabs(theta - 90) < 1.0) {
> +if (fabs(theta) < 1.0) { // no rotation info in stream meta
> +if (ifilter->orientation < 0 || ifilter->orientation > 8) {
> +av_log(NULL, AV_LOG_ERROR, "Invalid frame orientation:
> %i\n", ifilter->orientation);
> +} else if (ifilter->orientation > 1 && ifilter->orientation <=
> 4) { // skip 0 (not set) and 1 (orientaion 'Normal')
> +ret = insert_filter(&last_filter, &pad_idx, "transpose",
> "orientation=-1");
> +} else if (ifilter->orientation > 4) {
> +ret = insert_filter(&last_filter, &pad_idx, "transpose",
> "orientation=-2");

Using non-named option values is bad and not user-friendly.

> +}
> +} else if (fabs(theta - 90) < 1.0) {
>  ret = insert_filter(&last_filter, &pad_idx, "transpose",
> "clock");
>  } else if (fabs(theta - 180) < 1.0) {
> -ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL);
> -if (ret < 0)
> -return ret;
> -ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL);
> +ret = insert_filter(&last_filter, &pad_idx, "transpose",
> "orientation=3");

ditto

>  } else if (fabs(theta - 270) < 1.0) {
>  ret = insert_filter(&last_filter, &pad_idx, "transpose",
> "cclock");
>  } else if (fabs(theta) > 1.0) {
> @@ -1191,6 +1208,7 @@ int ifilter_parameters_from_frame(InputFilter
> *ifilter, const AVFrame *frame)
>  ifilter->width   = frame->width;
>  ifilter->height  = frame->height;
>  ifilter->sample_aspect_ratio = frame->sample_aspect_ratio;
> +ifilter->orienta
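The three-way classification that `orientation_need_update` performs above (not set, flip-only, transpose needed) can be isolated into a small helper. The enum and function names below are illustrative, not part of the patch:

```c
#include <assert.h>

/* EXIF Orientation: 0 = unset, 1 = normal, 2-4 = mirror/rotate-180
 * (handled by flips alone), 5-8 = involve a transpose. Values outside
 * 0..8 are invalid per the EXIF specification. */
enum OrientClass {
    ORIENT_INVALID   = -1,
    ORIENT_NONE      =  0,   /* not set, or already upright */
    ORIENT_FLIP      =  1,   /* pure horizontal/vertical flip family */
    ORIENT_TRANSPOSE =  2    /* 90/270-degree family */
};

static enum OrientClass classify_orientation(int o)
{
    if (o < 0 || o > 8)
        return ORIENT_INVALID;
    if (o <= 1)
        return ORIENT_NONE;
    if (o <= 4)
        return ORIENT_FLIP;
    return ORIENT_TRANSPOSE;
}
```

A filtergraph only needs rebuilding when a frame's class differs from the one the graph was configured for, which is exactly the comparison `orientation_need_update` makes.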

Re: [FFmpeg-devel] [PATCH 1/2] avfilter: add audio soft clip filter

2019-05-28 Thread Werner Robitza
On Fri, Apr 19, 2019 at 3:21 PM Paul B Mahol  wrote:

> +@item param
> +Set additional parameter which controls sigmoid function.

Could you perhaps add some more info about the range of possible
values and their meaning?

Thanks,
Werner

Re: [FFmpeg-devel] [PATCH] libavformat/mpegtsenc: adaptive alignment for teletext PES packets

2019-05-28 Thread Michael Niedermayer
On Tue, May 28, 2019 at 12:08:55PM +, Andreas Håkon wrote:
> Hi A. Rheinhardt,
> 
> 
> ‐‐‐ Original Message ‐‐‐
> On Tuesday, 28 de May de 2019 13:24, Andreas Rheinhardt 
>  wrote:
> 
> > > Can someone please tell me who is the official maintainer of the MPEG-TS 
> > > muxer?
> > > I'm still waiting for comments regarding this patch.
> > > We've been using it for a long time without problems.
> >
> > Just take a look into the MAINTAINERS file:
> >
> > mpegts.c Marton Balint
> > mpegtsenc.c Baptiste Coudurier
> 
> I do that! More than two weeks ago I send a private message to B. Coudurier 
> (free.fr) mail.
> However, I think that address doesn't work.
> 
> I hope someone can review and merge this patch.
> Sorry for disturbing you.

With which software has the generated MPEG-TS been tested?
Do you have any testcases?

thanks

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Complexity theory is the science of finding the exact solution to an
approximation. Benchmarking OTOH is finding an approximation of the exact



[FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread Derek Buitenhuis
Uses the crav1e C bindings for rav1e.

Missing version bump and changelog entry.

Signed-off-by: Derek Buitenhuis 
---
Hoping to get some eyes on this, and maybe help some people who want to
test out rav1e without having to use Y4Ms or pipes.

Some points:
* The C bindings for rav1e currently live in the crav1e repo. This will
  be merged into rav1e's repo Soon(TM), when stuff stabilizes.
* I have been told that I could ditch the frame queue if I used the 'new'
  send_frame / receive_packet API. I tried this and ran into odd issues
  like it trying to flush too often (mid stream, etc.). I may have bungled
  it, but it didn't seem to work, and no non-hw encoder uses this API yet.
  Since libaomenc also has this sort of queue system, I decided to play it
  safe.
---
 configure  |   4 +
 doc/encoders.texi  |  29 +++
 doc/general.texi   |   7 +
 libavcodec/Makefile|   1 +
 libavcodec/allcodecs.c |   1 +
 libavcodec/librav1e.c  | 451 +
 6 files changed, 493 insertions(+)
 create mode 100644 libavcodec/librav1e.c

diff --git a/configure b/configure
index 32fc26356c..e61644b012 100755
--- a/configure
+++ b/configure
@@ -254,6 +254,7 @@ External library support:
   --enable-libopenmpt  enable decoding tracked files via libopenmpt [no]
   --enable-libopus enable Opus de/encoding via libopus [no]
   --enable-libpulseenable Pulseaudio input via libpulse [no]
+  --enable-librav1eenable AV1 encoding via rav1e [no]
   --enable-librsvg enable SVG rasterization via librsvg [no]
   --enable-librubberband   enable rubberband needed for rubberband filter [no]
   --enable-librtmp enable RTMP[E] support via librtmp [no]
@@ -1778,6 +1779,7 @@ EXTERNAL_LIBRARY_LIST="
 libopenmpt
 libopus
 libpulse
+librav1e
 librsvg
 librtmp
 libshine
@@ -3173,6 +3175,7 @@ libopenmpt_demuxer_deps="libopenmpt"
 libopus_decoder_deps="libopus"
 libopus_encoder_deps="libopus"
 libopus_encoder_select="audio_frame_queue"
+librav1e_encoder_deps="librav1e"
 librsvg_decoder_deps="librsvg"
 libshine_encoder_deps="libshine"
 libshine_encoder_select="audio_frame_queue"
@@ -6199,6 +6202,7 @@ enabled libopus   && {
 }
 }
 enabled libpulse  && require_pkg_config libpulse libpulse 
pulse/pulseaudio.h pa_context_new
+enabled librav1e  && require_pkg_config librav1e rav1e rav1e.h 
rav1e_context_new
 enabled librsvg   && require_pkg_config librsvg librsvg-2.0 
librsvg-2.0/librsvg/rsvg.h rsvg_handle_render_cairo
 enabled librtmp   && require_pkg_config librtmp librtmp librtmp/rtmp.h 
RTMP_Socket
 enabled librubberband && require_pkg_config librubberband "rubberband >= 
1.8.1" rubberband/rubberband-c.h rubberband_new -lstdc++ && append 
librubberband_extralibs "-lstdc++"
diff --git a/doc/encoders.texi b/doc/encoders.texi
index eefd124751..2ed7053838 100644
--- a/doc/encoders.texi
+++ b/doc/encoders.texi
@@ -1378,6 +1378,35 @@ makes it possible to store non-rgb pix_fmts.
 
 @end table
 
+@section librav1e
+
+rav1e AV1 encoder wrapper.
+
+Requires the presence of the rav1e headers and library from crav1e
+during configuration. You need to explicitly configure the build with
+@code{--enable-librav1e}.
+
+@subsection Options
+
+@table @option
+@item max-quantizer
+Sets the maximum quantizer (floor) to use when using bitrate mode.
+
+@item quantizer
+Uses quantizers mode to encode at the given quantizer.
+
+@item rav1e-params
+Set rav1e options using a list of @var{key}=@var{value} couples separated
+by ":". See @command{rav1e --help} for a list of options.
+
+For example to specify librav1e encoding options with @option{-rav1e-params}:
+
+@example
+ffmpeg -i input -c:v librav1e -rav1e-params speed=5:low_latency=true output.mp4
+@end example
+
+@end table
+
 @section libaom-av1
 
 libaom AV1 encoder wrapper.
diff --git a/doc/general.texi b/doc/general.texi
index ec437230e3..6a1cd11a22 100644
--- a/doc/general.texi
+++ b/doc/general.texi
@@ -243,6 +243,13 @@ FFmpeg can use the OpenJPEG libraries for 
decoding/encoding J2K videos.  Go to
 instructions.  To enable using OpenJPEG in FFmpeg, pass 
@code{--enable-libopenjpeg} to
 @file{./configure}.
 
+@section rav1e
+
+FFmpeg can make use of rav1e (Rust AV1 Encoder) via its C bindings to encode 
videos.
+Go to @url{https://github.com/lu-zero/crav1e/} and 
@url{https://github.com/xiph/rav1e/}
+and follow the instructions. To enable using rav1e in FFmpeg, pass 
@code{--enable-librav1e}
+to @file{./configure}.
+
 @section TwoLAME
 
 FFmpeg can make use of the TwoLAME library for MP2 encoding.
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index edccd73037..dc589ce35d 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -988,6 +988,7 @@ OBJS-$(CONFIG_LIBOPUS_DECODER)+= libopusdec.o 
libopus.o \
  vorbis_data.o
 OBJS-$(CONFIG_LIBOPUS_ENCODER)

Re: [FFmpeg-devel] [PATCH] libavformat/mpegtsenc: adaptive alignment for teletext PES packets

2019-05-28 Thread Andreas Håkon
Hi Michael,

> with which software has the generated mpeg-ts been tested ?
> do you have any testecases ?

Our test cases are mainly DVB-C broadcasts and mobile streaming.
All clients play the output correctly, and internal testing is done using the
well-known DVB Inspector.

Regards.
A.H.

---


[FFmpeg-devel] [PATCH] avcodec/tiff: Add support for recognizing DNG files

2019-05-28 Thread velocityra
From: Nick Renieris 

For DNG images with the .tiff extension, this solves the issue where the
TIFF thumbnail in IFD 0 was incorrectly parsed (related confusion: [1]).
Embedded thumbnails for DNG images can still be decoded with the added
"-dng_thumb" option.

Additionally:
 - Renamed TIFF_WHITE_LEVEL to DNG_WHITE_LEVEL since it is specified
   in the DNG spec.
 - Added/changed some comments to be more precise in differentiating
   between TIFF, TIFF/EP and DNG values.

Related to ticket: https://trac.ffmpeg.org/ticket/4364

---

[1]: 
https://superuser.com/questions/546879/creating-video-from-dng-images-with-ffmpeg

Signed-off-by: Nick Renieris 
---
 libavcodec/tiff.c  | 36 +++-
 libavcodec/tiff.h  | 18 +-
 libavformat/img2.c |  1 +
 3 files changed, 49 insertions(+), 6 deletions(-)

diff --git a/libavcodec/tiff.c b/libavcodec/tiff.c
index 79e6242549..9575fe92ce 100644
--- a/libavcodec/tiff.c
+++ b/libavcodec/tiff.c
@@ -56,6 +56,7 @@ typedef struct TiffContext {
 
 int get_subimage;
 uint16_t get_page;
+int get_dng_thumb;
 
 int width, height;
 unsigned int bpp, bppcount;
@@ -70,7 +71,9 @@ typedef struct TiffContext {
 int predictor;
 int fill_order;
 uint32_t res[4];
+int is_thumbnail;
 
+int dng_mode; /** denotes that this is a DNG image */
 int is_bayer;
 uint8_t pattern[4];
 unsigned white_level;
@@ -948,6 +951,8 @@ static int tiff_decode_tag(TiffContext *s, AVFrame *frame)
 }
 
 switch (tag) {
+case TIFF_SUBFILE:
+s->is_thumbnail = (value != 0);
 case TIFF_WIDTH:
 s->width = value;
 break;
@@ -1088,7 +1093,7 @@ static int tiff_decode_tag(TiffContext *s, AVFrame *frame)
 case TIFF_SUB_IFDS:
 s->sub_ifd = value;
 break;
-case TIFF_WHITE_LEVEL:
+case DNG_WHITE_LEVEL:
 s->white_level = value;
 break;
 case TIFF_CFA_PATTERN_DIM:
@@ -1339,6 +1344,20 @@ static int tiff_decode_tag(TiffContext *s, AVFrame 
*frame)
 case TIFF_SOFTWARE_NAME:
 ADD_METADATA(count, "software", NULL);
 break;
+case DNG_VERSION:
+if (count == 4) {
+unsigned int ver[4];
+ver[0] = ff_tget(&s->gb, type, s->le);
+ver[1] = ff_tget(&s->gb, type, s->le);
+ver[2] = ff_tget(&s->gb, type, s->le);
+ver[3] = ff_tget(&s->gb, type, s->le);
+
+av_log(s->avctx, AV_LOG_DEBUG, "DNG file, version %u.%u.%u.%u\n",
+ver[0], ver[1], ver[2], ver[3]);
+
+s->dng_mode = 1;
+}
+break;
 default:
 if (s->avctx->err_recognition & AV_EF_EXPLODE) {
 av_log(s->avctx, AV_LOG_ERROR,
@@ -1387,6 +1406,7 @@ static int decode_frame(AVCodecContext *avctx,
 s->le  = le;
 // TIFF_BPP is not a required tag and defaults to 1
 again:
+s->is_thumbnail = 0;
 s->bppcount= s->bpp = 1;
 s->photometric = TIFF_PHOTOMETRIC_NONE;
 s->compr   = TIFF_RAW;
@@ -1394,6 +1414,7 @@ again:
 s->white_level = 0;
 s->is_bayer= 0;
 s->cur_page= 0;
+s->dng_mode= 0;
 free_geotags(s);
 
 // Reset these offsets so we can tell if they were set this frame
@@ -1408,6 +1429,18 @@ again:
 return ret;
 }
 
+if (s->dng_mode) {
+if (!s->get_dng_thumb) {
+av_log(avctx, AV_LOG_ERROR, "DNG images are not supported\n");
+av_log(avctx, AV_LOG_INFO, "You can use -dng_thumb to decode an 
embedded TIFF thumbnail (if any) instead\n");
+return AVERROR_INVALIDDATA;
+}
+if (!s->is_thumbnail) {
+av_log(avctx, AV_LOG_INFO, "No embedded thumbnail present\n");
+return AVERROR_EOF;
+}
+}
+
 /** whether we should look for this IFD's SubIFD */
 retry_for_subifd = s->sub_ifd && s->get_subimage;
 /** whether we should look for this multi-page IFD's next page */
@@ -1671,6 +1704,7 @@ static av_cold int tiff_end(AVCodecContext *avctx)
 static const AVOption tiff_options[] = {
 { "subimage", "decode subimage instead if available", 
OFFSET(get_subimage), AV_OPT_TYPE_BOOL, {.i64=0},  0, 1, 
AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM },
 { "page", "page number of multi-page image to decode (starting from 1)", 
OFFSET(get_page), AV_OPT_TYPE_INT, {.i64=0}, 0, UINT16_MAX, 
AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM },
+{ "dng_thumb", "decode TIFF thumbnail of DNG image", 
OFFSET(get_dng_thumb), AV_OPT_TYPE_BOOL, {.i64=0},  0, 1, 
AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM },
 { NULL },
 };
 
diff --git a/libavcodec/tiff.h b/libavcodec/tiff.h
index 4b08650108..3cede299f6 100644
--- a/libavcodec/tiff.h
+++ b/libavcodec/tiff.h
@@ -20,7 +20,7 @@
 
 /**
  * @file
- * TIFF tables
+ * TIFF constants & data structures
  *
  * For more information about the TIFF format, check the official docs at:
  * http://partners.adobe.com/public/developer/tiff/inde
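For context on the detection the patch adds: a DNG file is a regular TIFF container whose IFD 0 carries a DNGVersion tag, tag number 0xC612 (50706) per the DNG specification. Scanning an IFD's tag list for it mirrors how the decoder above sets `dng_mode`; `ifd_marks_dng` is a hypothetical helper, not FFmpeg API:

```c
#include <assert.h>
#include <stdint.h>

/* DNGVersion tag number from the DNG specification. */
#define DNG_TAG_VERSION 0xC612

/* Returns 1 if the given IFD tag list contains DNGVersion, i.e. the
 * file should be treated as DNG rather than plain TIFF. */
static int ifd_marks_dng(const uint16_t *tags, int count)
{
    for (int i = 0; i < count; i++)
        if (tags[i] == DNG_TAG_VERSION)
            return 1;
    return 0;
}
```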

[FFmpeg-devel] Is this a regression or not?

2019-05-28 Thread Nomis101 🐝
I regularly build HandBrake against the latest FFmpeg master to check if all 
works as expected or something is broken and I need to open a bug. With the 
latest master I found some issues, but I'm
unsure if this is a regression or not.
After commit f9271d0158122c9e043b7d83f691884b65ec1ba5 HB doesn't recognize the 
audio stream anymore, so the encoded file has no audio stream at all. Is this
1. A regression and I should report this to the FFmpeg bug tracker
or is it
2. A change in the way streams are handled, and all FFmpeg-dependent 
applications will need to modify their code accordingly?

Thanks.

Re: [FFmpeg-devel] Is this a regression or not?

2019-05-28 Thread Marton Balint



On Tue, 28 May 2019, Nomis101 🐝 wrote:


I regularly build HandBrake against the latest FFmpeg master to check if all 
works as expected or something is broken and I need to open a bug. With the 
latest master I found some issues, but I'm
unsure if this is a regression or not.
After commit f9271d0158122c9e043b7d83f691884b65ec1ba5 HB doesn't recognize the 
audio stream anymore, so the encoded file has no audio stream at all. Is this
1. A regression and I should report this to the FFmpeg bug tracker
or is it
2. A change in the way streams are handled, and all FFmpeg-dependent 
applications will need to modify their code accordingly?


From this description I don't know. Open a ticket in trac.ffmpeg.org and 
post the command line handbrake generates and the ffmpeg command output.


Thanks,
Marton

Re: [FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread James Almer
On 5/28/2019 2:29 PM, Derek Buitenhuis wrote:
> Uses the crav1e C bindings for rav1e.
> 
> Missing version bump and changelog entry.
> 
> Signed-off-by: Derek Buitenhuis 
> ---
> Hoping to get some eyes on this, and maybe help some people who want to
> test out rav1e without having to use Y4Ms or pipes.
> 
> Some points:
> * The C bindings for rav1e currently live in the crav1e repo. This will
>   be merged into rav1e's repo Soon(TM), when stuff stabilizes.
> * I have been told that I could ditch the frame queue if I used the 'new'
>   send_frame / receive_packet API. I tried this and ran into odd issues
>   like it trying to flush too often (mid stream, etc.). I may have bungled
>   it, but it didn't seem to work, and no non-hw encoder uses this API yet.
>   Since libaomenc also has this sort of queue system, I decided to play it
>   safe.
> ---
>  configure  |   4 +
>  doc/encoders.texi  |  29 +++
>  doc/general.texi   |   7 +
>  libavcodec/Makefile|   1 +
>  libavcodec/allcodecs.c |   1 +
>  libavcodec/librav1e.c  | 451 +
>  6 files changed, 493 insertions(+)
>  create mode 100644 libavcodec/librav1e.c
> 
> diff --git a/configure b/configure
> index 32fc26356c..e61644b012 100755
> --- a/configure
> +++ b/configure
> @@ -254,6 +254,7 @@ External library support:
>--enable-libopenmpt  enable decoding tracked files via libopenmpt [no]
>--enable-libopus enable Opus de/encoding via libopus [no]
>--enable-libpulseenable Pulseaudio input via libpulse [no]
> +  --enable-librav1eenable AV1 encoding via rav1e [no]
>--enable-librsvg enable SVG rasterization via librsvg [no]
>--enable-librubberband   enable rubberband needed for rubberband filter 
> [no]
>--enable-librtmp enable RTMP[E] support via librtmp [no]
> @@ -1778,6 +1779,7 @@ EXTERNAL_LIBRARY_LIST="
>  libopenmpt
>  libopus
>  libpulse
> +librav1e
>  librsvg
>  librtmp
>  libshine
> @@ -3173,6 +3175,7 @@ libopenmpt_demuxer_deps="libopenmpt"
>  libopus_decoder_deps="libopus"
>  libopus_encoder_deps="libopus"
>  libopus_encoder_select="audio_frame_queue"
> +librav1e_encoder_deps="librav1e"
>  librsvg_decoder_deps="librsvg"
>  libshine_encoder_deps="libshine"
>  libshine_encoder_select="audio_frame_queue"
> @@ -6199,6 +6202,7 @@ enabled libopus   && {
>  }
>  }
>  enabled libpulse  && require_pkg_config libpulse libpulse 
> pulse/pulseaudio.h pa_context_new
> +enabled librav1e  && require_pkg_config librav1e rav1e rav1e.h 
> rav1e_context_new
>  enabled librsvg   && require_pkg_config librsvg librsvg-2.0 
> librsvg-2.0/librsvg/rsvg.h rsvg_handle_render_cairo
>  enabled librtmp   && require_pkg_config librtmp librtmp 
> librtmp/rtmp.h RTMP_Socket
>  enabled librubberband && require_pkg_config librubberband "rubberband >= 
> 1.8.1" rubberband/rubberband-c.h rubberband_new -lstdc++ && append 
> librubberband_extralibs "-lstdc++"
> diff --git a/doc/encoders.texi b/doc/encoders.texi
> index eefd124751..2ed7053838 100644
> --- a/doc/encoders.texi
> +++ b/doc/encoders.texi
> @@ -1378,6 +1378,35 @@ makes it possible to store non-rgb pix_fmts.
>  
>  @end table
>  
> +@section librav1e
> +
> +rav1e AV1 encoder wrapper.
> +
> +Requires the presence of the rav1e headers and library from crav1e
> +during configuration. You need to explicitly configure the build with
> +@code{--enable-librav1e}.
> +
> +@subsection Options
> +
> +@table @option
> +@item max-quantizer
> +Sets the maximum quantizer (floor) to use when using bitrate mode.
> +
> +@item quantizer
> +Uses quantizers mode to encode at the given quantizer.
> +
> +@item rav1e-params
> +Set rav1e options using a list of @var{key}=@var{value} couples separated
> +by ":". See @command{rav1e --help} for a list of options.
> +
> +For example to specify librav1e encoding options with @option{-rav1e-params}:
> +
> +@example
> +ffmpeg -i input -c:v librav1e -rav1e-params speed=5:low_latency=true 
> output.mp4
> +@end example
> +
> +@end table
> +
>  @section libaom-av1
>  
>  libaom AV1 encoder wrapper.
> diff --git a/doc/general.texi b/doc/general.texi
> index ec437230e3..6a1cd11a22 100644
> --- a/doc/general.texi
> +++ b/doc/general.texi
> @@ -243,6 +243,13 @@ FFmpeg can use the OpenJPEG libraries for 
> decoding/encoding J2K videos.  Go to
>  instructions.  To enable using OpenJPEG in FFmpeg, pass 
> @code{--enable-libopenjpeg} to
>  @file{./configure}.
>  
> +@section rav1e
> +
> +FFmpeg can make use of rav1e (Rust AV1 Encoder) via its C bindings to encode 
> videos.
> +Go to @url{https://github.com/lu-zero/crav1e/} and 
> @url{https://github.com/xiph/rav1e/}
> +and follow the instructions. To enable using rav1e in FFmpeg, pass 
> @code{--enable-librav1e}
> +to @file{./configure}.
> +
>  @section TwoLAME
>  
>  FFmpeg can make use of the TwoLAME library for MP2 encoding.
> diff --git a/

Re: [FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread Derek Buitenhuis
On 28/05/2019 20:32, James Almer wrote:
>> +default:
>> +// This should be impossible
>> +return (RaChromaSampling) -1;
> 
> If it's not meant to happen, then it should probably be an av_assert0.

Yeah, that makes more sense. Will change.

>> +int rret;
>> +int ret = 0;
> 
> Why two ret variables?

Will fix.

>> +if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
>> + const AVBitStreamFilter *filter = 
>> av_bsf_get_by_name("extract_extradata");
> 
> See
> https://github.com/dwbuiten/FFmpeg/commit/e146749232928a1cebe637338189938644237894#r33622810
> 
> You don't need to extract the Sequence Header from encoded frames since
> rav1e provides a function for this purpose.
> So does libaom, for that matter, but it's buggy and that's why it's not
> being used.

As discussed on IRC, that function does not return the config OBUs at the end of
the ISOBMFF-style data, so this way is still preferred until it does.

I'll switch over if/once rav1e's API does return that.


>> +memcpy(pkt->data, rpkt->data, rpkt->len);
>> +
>> +if (rpkt->frame_type == RA_FRAME_TYPE_KEY)
>> +pkt->flags |= AV_PKT_FLAG_KEY;
>> +
>> +pkt->pts = pkt->dts = rpkt->number * avctx->ticks_per_frame;
>> +
>> +rav1e_packet_unref(rpkt);
> 
> You can avoid the data copy if you wrap the RaPacket into the
> AVBufferRef used in the AVPacket, but for this you need to use the
> send/receive API, given the encode2 one allows the caller to use their
> own allocated packet.

As discussed on IRC / noted in the top of this email, I'll see how
you fare switching to send/receive. :)

>> +static const AVOption options[] = {
>> +{ "quantizer", "use constant quantizer mode", OFFSET(quantizer), 
>> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 255, VE },
>> +{ "max-quantizer", "max quantizer when using bitrate mode", 
>> OFFSET(max_quantizer), AV_OPT_TYPE_INT, { .i64 = 255 }, 1, 255, VE },
> 
> This should be mapped to qmax lavc option instead.

OK. Is there a matching option for plain old quantizer?
I couldn't find an obvious one.

- Derek

Re: [FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread James Almer
On 5/28/2019 4:49 PM, Derek Buitenhuis wrote:
> On 28/05/2019 20:32, James Almer wrote:
>>> +default:
>>> +// This should be impossible
>>> +return (RaChromaSampling) -1;
>>
>> If it's not meant to happen, then it should probably be an av_assert0.
> 
> Yeah, that makes more sense. Will change.
> 
>>> +int rret;
>>> +int ret = 0;
>>
>> Why two ret variables?
> 
> Will fix.
> 
>>> +if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
>>> + const AVBitStreamFilter *filter = 
>>> av_bsf_get_by_name("extract_extradata");
>>
>> See
>> https://github.com/dwbuiten/FFmpeg/commit/e146749232928a1cebe637338189938644237894#r33622810
>>
>> You don't need to extract the Sequence Header from encoded frames since
>> rav1e provides a function for this purpose.
>> So does libaom, for that matter, but it's buggy and that's why it's not
>> being used.
> 
> As discussed on IRC, that function does not return the config OBUs at the end 
> of
> the ISOBMFF-style data, so this way is still preferred until it does.
> 
> I'll switch over if/once rav1e's API does return that.
> 
> 
>>> +memcpy(pkt->data, rpkt->data, rpkt->len);
>>> +
>>> +if (rpkt->frame_type == RA_FRAME_TYPE_KEY)
>>> +pkt->flags |= AV_PKT_FLAG_KEY;
>>> +
>>> +pkt->pts = pkt->dts = rpkt->number * avctx->ticks_per_frame;
>>> +
>>> +rav1e_packet_unref(rpkt);
>>
>> You can avoid the data copy if you wrap the RaPacket into the
>> AVBufferRef used in the AVPacket, but for this you need to use the
>> send/receive API, given the encode2 one allows the caller to use their
>> own allocated packet.
> 
> As discussed on IRC / noted in the top of this email, I'll see how
> you fare switching to send/receive. :)
> 
>>> +static const AVOption options[] = {
>>> +{ "quantizer", "use constant quantizer mode", OFFSET(quantizer), 
>>> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 255, VE },
>>> +{ "max-quantizer", "max quantizer when using bitrate mode", 
>>> OFFSET(max_quantizer), AV_OPT_TYPE_INT, { .i64 = 255 }, 1, 255, VE },
>>
>> This should be mapped to qmax lavc option instead.
> 
> OK. Is there a matching option for plain old quantizer?
> I couldn't find an obvious one.

I think x26* and vpx/aom call it crf? It's not in options_table.h in any
case.

> 
> - Derek


Re: [FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread Derek Buitenhuis
On 28/05/2019 20:49, Derek Buitenhuis wrote:
>>> +static const AVOption options[] = {
>>> +{ "quantizer", "use constant quantizer mode", OFFSET(quantizer), 
>>> AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 255, VE },
>>> +{ "max-quantizer", "max quantizer when using bitrate mode", 
>>> OFFSET(max_quantizer), AV_OPT_TYPE_INT, { .i64 = 255 }, 1, 255, VE },
>> This should be mapped to qmax lavc option instead.
> OK. Is there a matching option for plain old quantizer?
> I couldn't find an obvious one.

Looking closer at this, it does not seem correct to use it.

As per `ffmpeg -h full`:

  -qmax  E..V. maximum video quantizer scale 
(VBR) (from -1 to 1024) (default 31)

The default of 31 is really bad for AV1. Further, rav1e's CLI default of 255
is probably the safest to use here.

- Derek

Re: [FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread Derek Buitenhuis
On 28/05/2019 20:58, James Almer wrote:
> I think x26* and vpx/aom call it crf? It's not in options_table.h in any
> case.

They do not. This is a constant quantizer mode, not constant rate factor.

- Derek

Re: [FFmpeg-devel] [PATCH] avcodec/tiff: Add support for recognizing DNG files

2019-05-28 Thread Paul B Mahol
On 5/28/19, velocit...@gmail.com  wrote:
> From: Nick Renieris 
>
> In DNG images with the .tiff extension, it solves the issue where the
> TIFF thumbnail in IFD 0 was incorrectly parsed (related confusion: [1]).
> Embedded thumbnails for DNG images can still be decoded with the added
> "-dng_thumb" option.
>
> Additionally:
>  - Renamed TIFF_WHITE_LEVEL to DNG_WHITE_LEVEL since it is specified
>in the DNG spec.
>  - Added/changed some comments to be more precise in differentiating
>between TIFF, TIFF/EP and DNG values.
>
> Related to ticket: https://trac.ffmpeg.org/ticket/4364
>

This patch breaks decoding DNGs which are already supported.

> ---
>
> [1]:
> https://superuser.com/questions/546879/creating-video-from-dng-images-with-ffmpeg
>
> Signed-off-by: Nick Renieris 
> ---
>  libavcodec/tiff.c  | 36 +++-
>  libavcodec/tiff.h  | 18 +-
>  libavformat/img2.c |  1 +
>  3 files changed, 49 insertions(+), 6 deletions(-)
>
> diff --git a/libavcodec/tiff.c b/libavcodec/tiff.c
> index 79e6242549..9575fe92ce 100644
> --- a/libavcodec/tiff.c
> +++ b/libavcodec/tiff.c
> @@ -56,6 +56,7 @@ typedef struct TiffContext {
>
>  int get_subimage;
>  uint16_t get_page;
> +int get_dng_thumb;
>
>  int width, height;
>  unsigned int bpp, bppcount;
> @@ -70,7 +71,9 @@ typedef struct TiffContext {
>  int predictor;
>  int fill_order;
>  uint32_t res[4];
> +int is_thumbnail;
>
> +int dng_mode; /** denotes that this is a DNG image */
>  int is_bayer;
>  uint8_t pattern[4];
>  unsigned white_level;
> @@ -948,6 +951,8 @@ static int tiff_decode_tag(TiffContext *s, AVFrame
> *frame)
>  }
>
>  switch (tag) {
> +case TIFF_SUBFILE:
> +s->is_thumbnail = (value != 0);
>  case TIFF_WIDTH:
>  s->width = value;
>  break;
> @@ -1088,7 +1093,7 @@ static int tiff_decode_tag(TiffContext *s, AVFrame
> *frame)
>  case TIFF_SUB_IFDS:
>  s->sub_ifd = value;
>  break;
> -case TIFF_WHITE_LEVEL:
> +case DNG_WHITE_LEVEL:
>  s->white_level = value;
>  break;
>  case TIFF_CFA_PATTERN_DIM:
> @@ -1339,6 +1344,20 @@ static int tiff_decode_tag(TiffContext *s, AVFrame
> *frame)
>  case TIFF_SOFTWARE_NAME:
>  ADD_METADATA(count, "software", NULL);
>  break;
> +case DNG_VERSION:
> +if (count == 4) {
> +unsigned int ver[4];
> +ver[0] = ff_tget(&s->gb, type, s->le);
> +ver[1] = ff_tget(&s->gb, type, s->le);
> +ver[2] = ff_tget(&s->gb, type, s->le);
> +ver[3] = ff_tget(&s->gb, type, s->le);
> +
> +av_log(s->avctx, AV_LOG_DEBUG, "DNG file, version
> %u.%u.%u.%u\n",
> +ver[0], ver[1], ver[2], ver[3]);
> +
> +s->dng_mode = 1;
> +}
> +break;
>  default:
>  if (s->avctx->err_recognition & AV_EF_EXPLODE) {
>  av_log(s->avctx, AV_LOG_ERROR,
> @@ -1387,6 +1406,7 @@ static int decode_frame(AVCodecContext *avctx,
>  s->le  = le;
>  // TIFF_BPP is not a required tag and defaults to 1
>  again:
> +s->is_thumbnail = 0;
>  s->bppcount= s->bpp = 1;
>  s->photometric = TIFF_PHOTOMETRIC_NONE;
>  s->compr   = TIFF_RAW;
> @@ -1394,6 +1414,7 @@ again:
>  s->white_level = 0;
>  s->is_bayer= 0;
>  s->cur_page= 0;
> +s->dng_mode= 0;
>  free_geotags(s);
>
>  // Reset these offsets so we can tell if they were set this frame
> @@ -1408,6 +1429,18 @@ again:
>  return ret;
>  }
>
> +if (s->dng_mode) {
> +if (!s->get_dng_thumb) {
> +av_log(avctx, AV_LOG_ERROR, "DNG images are not supported\n");
> +av_log(avctx, AV_LOG_INFO, "You can use -dng_thumb to decode an
> embedded TIFF thumbnail (if any) instead\n");
> +return AVERROR_INVALIDDATA;
> +}
> +if (!s->is_thumbnail) {
> +av_log(avctx, AV_LOG_INFO, "No embedded thumbnail present\n");
> +return AVERROR_EOF;
> +}
> +}
> +
>  /** whether we should look for this IFD's SubIFD */
>  retry_for_subifd = s->sub_ifd && s->get_subimage;
>  /** whether we should look for this multi-page IFD's next page */
> @@ -1671,6 +1704,7 @@ static av_cold int tiff_end(AVCodecContext *avctx)
>  static const AVOption tiff_options[] = {
>  { "subimage", "decode subimage instead if available",
> OFFSET(get_subimage), AV_OPT_TYPE_BOOL, {.i64=0},  0, 1,
> AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM },
>  { "page", "page number of multi-page image to decode (starting from
> 1)", OFFSET(get_page), AV_OPT_TYPE_INT, {.i64=0}, 0, UINT16_MAX,
> AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM },
> +{ "dng_thumb", "decode TIFF thumbnail of DNG image",
> OFFSET(get_dng_thumb), AV_OPT_TYPE_BOOL, {.i64=0},  0, 1,
> AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_VIDEO_PARAM },
>  { NULL },
>  };
>
>

Re: [FFmpeg-devel] [PATCH v3 2/2] fftools/ffmpeg: add exif orientation support per frame's metadata

2019-05-28 Thread Michael Niedermayer
On Mon, May 27, 2019 at 11:18:26PM -0700, Jun Li wrote:
> Fix #6945
> Rotate or/and flip frame according to frame's metadata orientation
> ---
>  fftools/ffmpeg.c| 16 +++-
>  fftools/ffmpeg.h|  3 ++-
>  fftools/ffmpeg_filter.c | 28 +++-
>  3 files changed, 40 insertions(+), 7 deletions(-)
> 
> diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
> index 01f04103cf..2f4229a9d0 100644
> --- a/fftools/ffmpeg.c
> +++ b/fftools/ffmpeg.c
> @@ -2126,6 +2126,19 @@ static int ifilter_has_all_input_formats(FilterGraph 
> *fg)
>  return 1;
>  }
>  
> +static int orientation_need_update(InputFilter *ifilter, AVFrame *frame)
> +{
> +int orientation = get_frame_orientation(frame);
> +int filterst = ifilter->orientation <= 1 ? 0 : // not set
> +   ifilter->orientation <= 4 ? 1 : // auto flip
> +   2; // auto transpose
> +int framest = orientation <= 1 ? 0 : // not set
> +  orientation <= 4 ? 1 : // auto flip
> +  2; // auto transpose
> +
> +return filterst != framest;
> +}
> +
>  static int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame)
>  {
>  FilterGraph *fg = ifilter->graph;
> @@ -2142,7 +2155,8 @@ static int ifilter_send_frame(InputFilter *ifilter, 
> AVFrame *frame)
>  break;
>  case AVMEDIA_TYPE_VIDEO:
>  need_reinit |= ifilter->width  != frame->width ||
> -   ifilter->height != frame->height;
> +   ifilter->height != frame->height ||
> +   orientation_need_update(ifilter, frame);
>  break;
>  }
>  
> diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
> index eb1eaf6363..54532ef0eb 100644
> --- a/fftools/ffmpeg.h
> +++ b/fftools/ffmpeg.h
> @@ -244,7 +244,7 @@ typedef struct InputFilter {
>  // parameters configured for this input
>  int format;
>  
> -int width, height;
> +int width, height, orientation;
>  AVRational sample_aspect_ratio;
>  
>  int sample_rate;
> @@ -649,6 +649,7 @@ int init_complex_filtergraph(FilterGraph *fg);
>  void sub2video_update(InputStream *ist, AVSubtitle *sub);
>  
>  int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame 
> *frame);
> +int get_frame_orientation(const AVFrame* frame);
>  
>  int ffmpeg_parse_options(int argc, char **argv);
>  
> diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
> index 72838de1e2..ff63540906 100644
> --- a/fftools/ffmpeg_filter.c
> +++ b/fftools/ffmpeg_filter.c
> @@ -743,6 +743,18 @@ static int sub2video_prepare(InputStream *ist, 
> InputFilter *ifilter)
>  return 0;
>  }
>  
> +int get_frame_orientation(const AVFrame *frame)
> +{
> +AVDictionaryEntry *entry = NULL;
> +int orientation = 0;
> +
> +// read exif orientation data
> +entry = av_dict_get(frame->metadata, "Orientation", NULL, 0);

> +if (entry)
> +orientation = atoi(entry->value);

this probably should be checking the validity of the string unless
it has been checked already elsewhere

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

I have never wished to cater to the crowd; for what I know they do not
approve, and what they approve I do not know. -- Epicurus



Re: [FFmpeg-devel] [PATCH 06/11] cbs, cbs_mpeg2, cbs_jpeg: Don't av_freep local variables

2019-05-28 Thread Mark Thompson
On 22/05/2019 02:04, Andreas Rheinhardt wrote:
> There is no danger of leaving dangling pointers behind, as the lifespan
> of local variables (including pointers passed (by value) as function
> arguments) ends anyway as soon as we exit their scope.
> 
> Signed-off-by: Andreas Rheinhardt 
> ---
>  libavcodec/cbs.c   | 2 +-
>  libavcodec/cbs_jpeg.c  | 8 
>  libavcodec/cbs_mpeg2.c | 4 ++--
>  3 files changed, 7 insertions(+), 7 deletions(-)

I don't agree with the premise of this patch.  I think it's sensible to use 
av_freep() everywhere, because the marginal cost over av_free() is ~zero and it 
makes some possible use-after-free errors easier to find.  (While all of these 
do appear just before the containing functions return, in general it's not 
useful to even need to think about that.)

- Mark

Re: [FFmpeg-devel] [PATCH 03/11] mpeg2_metadata, cbs_mpeg2: Fix handling of colour_description

2019-05-28 Thread Mark Thompson
On 22/05/2019 02:04, Andreas Rheinhardt wrote:
> If a sequence display extension is read with colour_description equal to
> zero, but a user wants to add one or more of the colour_description
> elements, then the colour_description elements the user did not explicitly
> request to be set are set to zero and not to the value equal to
> unknown/unspecified (namely 2). A value of zero is not only inappropriate,
> but explicitly forbidden. This is fixed by inferring the right default
> values during the reading process if the elements are absent; moreover,
> changing any of the colour_description elements to zero is now no longer
> permitted.
> 
> Furthermore, if a sequence display extension has to be added, the
> earlier code set some fields to their default value twice. This has been
> changed, too.
> 
> Signed-off-by: Andreas Rheinhardt 
> ---
>  libavcodec/cbs_mpeg2.c | 15 +++
>  libavcodec/cbs_mpeg2_syntax_template.c |  4 
>  libavcodec/mpeg2_metadata_bsf.c| 18 --
>  3 files changed, 31 insertions(+), 6 deletions(-)
> 
> diff --git a/libavcodec/cbs_mpeg2.c b/libavcodec/cbs_mpeg2.c
> index 1d319e0947..437eac88a3 100644
> --- a/libavcodec/cbs_mpeg2.c
> +++ b/libavcodec/cbs_mpeg2.c
> @@ -71,6 +71,10 @@
>  (get_bits_left(rw) >= width && \
>   (var = show_bits(rw, width)) == (compare))
>  
> +#define infer(name, value) do { \
> +current->name = value; \
> +} while (0)
> +
>  #include "cbs_mpeg2_syntax_template.c"
>  
>  #undef READ
> @@ -79,6 +83,7 @@
>  #undef xui
>  #undef marker_bit
>  #undef nextbits
> +#undef infer
>  
>  
>  #define WRITE
> @@ -97,6 +102,15 @@
>  
>  #define nextbits(width, compare, var) (var)
>  
> +#define infer(name, value) do { \
> +if (current->name != (value)) { \
> +av_log(ctx->log_ctx, AV_LOG_WARNING, "Warning: " \
> +   "%s does not match inferred value: " \
> +   "%"PRId64", but should be %"PRId64".\n", \
> +   #name, (int64_t)current->name, (int64_t)(value)); \
> +} \
> +} while (0)
> +
>  #include "cbs_mpeg2_syntax_template.c"
>  
>  #undef READ
> @@ -105,6 +119,7 @@
>  #undef xui
>  #undef marker_bit
>  #undef nextbits
> +#undef infer
>  
>  
>  static void cbs_mpeg2_free_user_data(void *unit, uint8_t *content)
> diff --git a/libavcodec/cbs_mpeg2_syntax_template.c 
> b/libavcodec/cbs_mpeg2_syntax_template.c
> index b9d53682fe..87db0ad039 100644
> --- a/libavcodec/cbs_mpeg2_syntax_template.c
> +++ b/libavcodec/cbs_mpeg2_syntax_template.c
> @@ -144,6 +144,10 @@ static int 
> FUNC(sequence_display_extension)(CodedBitstreamContext *ctx, RWContex
>  uir(8, transfer_characteristics);
>  uir(8, matrix_coefficients);
>  #endif
> +} else {
> +infer(colour_primaries, 2);
> +infer(transfer_characteristics, 2);
> +infer(matrix_coefficients,  2);
>  }
>  
>  ui(14, display_horizontal_size);
> diff --git a/libavcodec/mpeg2_metadata_bsf.c b/libavcodec/mpeg2_metadata_bsf.c
> index ba3a74afda..5aed41a008 100644
> --- a/libavcodec/mpeg2_metadata_bsf.c
> +++ b/libavcodec/mpeg2_metadata_bsf.c
> @@ -147,18 +147,12 @@ static int mpeg2_metadata_update_fragment(AVBSFContext 
> *bsf,
>  
>  if (ctx->colour_primaries >= 0)
>  sde->colour_primaries = ctx->colour_primaries;
> -else if (add_sde)
> -sde->colour_primaries = 2;
>  
>  if (ctx->transfer_characteristics >= 0)
>  sde->transfer_characteristics = 
> ctx->transfer_characteristics;
> -else if (add_sde)
> -sde->transfer_characteristics = 2;
>  
>  if (ctx->matrix_coefficients >= 0)
>  sde->matrix_coefficients = ctx->matrix_coefficients;
> -else if (add_sde)
> -sde->matrix_coefficients = 2;
>  }
>  }
>  
> @@ -229,6 +223,18 @@ static int mpeg2_metadata_init(AVBSFContext *bsf)
>  CodedBitstreamFragment *frag = &ctx->fragment;
>  int err;
>  
> +#define VALIDITY_CHECK(name) do { \
> +if (!ctx->name) { \
> +av_log(bsf, AV_LOG_ERROR, "The value 0 for %s is " \
> +  "forbidden.\n", #name); \
> +return AVERROR(EINVAL); \
> +} \
> +} while (0)
> +VALIDITY_CHECK(colour_primaries);
> +VALIDITY_CHECK(transfer_characteristics);
> +VALIDITY_CHECK(matrix_coefficients);
> +#undef VALIDITY_CHECK

Perhaps we could use the normal option checking to enforce this?  Suppose we 
change the range from [-1, 255] to [0, 255] and make the do-nothing default 
value 0.  Then the checks above become ctx->colour_primaries > 0 and this 
separate test is not needed.

> +
>  err = ff_cbs_init(&ctx->cbc, AV_CODEC_ID_MPEG2VIDEO, bsf);
>  if (err < 0)
>  return err;
> 

- Mark
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
htt

Re: [FFmpeg-devel] [PATCH 07/11] cbs: Remove useless initializations

2019-05-28 Thread Mark Thompson
On 22/05/2019 02:04, Andreas Rheinhardt wrote:
> Up until now, a temporary variable was used and initialized every time a
> value was read in CBS; if reading succeeded, this initial value was
> overwritten (without ever having been looked at) with the value read;
> on failure the variable wasn't
> touched either. Therefore these initializations can be and have been
> removed.
> 
> Signed-off-by: Andreas Rheinhardt 
> ---
>  libavcodec/cbs_av1.c   | 14 +++---
>  libavcodec/cbs_h2645.c |  8 
>  libavcodec/cbs_jpeg.c  |  2 +-
>  libavcodec/cbs_mpeg2.c |  2 +-
>  libavcodec/cbs_vp9.c   |  8 
>  5 files changed, 17 insertions(+), 17 deletions(-)

IIRC this was from a broken warning in an older compiler (when you get a 
separate warning for every syntax element it's not particularly helpful to 
argue that the compiler is wrong).  I'll see if I can dig up the case this 
happened in; it probably doesn't apply everywhere and might be worth dropping 
anyway.

- Mark

Re: [FFmpeg-devel] [PATCH 10/11] cbs_mpeg2: Fix parsing of picture headers

2019-05-28 Thread Mark Thompson
On 22/05/2019 02:04, Andreas Rheinhardt wrote:
> MPEG-2 picture and slice headers can contain optional extra information;
> both use the same syntax for their extra information. And cbs_mpeg2's
> implementations of both were buggy until recently; the one for the
> picture headers still is and this is fixed in this commit.
> 
> The extra information in picture headers has simply been forgotten.
> This meant that if this extra information was present, it was discarded
> during reading; and unfortunately writing created invalid bitstreams in
> this case (an extra_bit_picture - the last set bit of the whole unit -
> indicated that there would be a further byte of data, although the output
> didn't contain said data).
> 
> This has been fixed; both types of extra information are now
> parsed via the same code and essentially passed through.
> 
> Signed-off-by: Andreas Rheinhardt 
> ---
>  libavcodec/cbs_mpeg2.c | 31 +++-
>  libavcodec/cbs_mpeg2.h | 12 +++--
>  libavcodec/cbs_mpeg2_syntax_template.c | 66 +++---
>  3 files changed, 66 insertions(+), 43 deletions(-)
> 
> diff --git a/libavcodec/cbs_mpeg2.c b/libavcodec/cbs_mpeg2.c
> index 97425aa706..2354f665cd 100644
> --- a/libavcodec/cbs_mpeg2.c
> +++ b/libavcodec/cbs_mpeg2.c
> @@ -41,18 +41,18 @@
>  #define SUBSCRIPTS(subs, ...) (subs > 0 ? ((int[subs + 1]){ subs, 
> __VA_ARGS__ }) : NULL)
>  
>  #define ui(width, name) \
> -xui(width, name, current->name, 0, MAX_UINT_BITS(width), 0)
> +xui(width, #name, current->name, 0, MAX_UINT_BITS(width), 0)
>  #define uir(width, name) \
> -xui(width, name, current->name, 1, MAX_UINT_BITS(width), 0)
> +xui(width, #name, current->name, 1, MAX_UINT_BITS(width), 0)
>  #define uis(width, name, subs, ...) \
> -xui(width, name, current->name, 0, MAX_UINT_BITS(width), subs, 
> __VA_ARGS__)
> +xui(width, #name, current->name, 0, MAX_UINT_BITS(width), subs, 
> __VA_ARGS__)
>  #define uirs(width, name, subs, ...) \
> -xui(width, name, current->name, 1, MAX_UINT_BITS(width), subs, 
> __VA_ARGS__)
> +xui(width, #name, current->name, 1, MAX_UINT_BITS(width), subs, 
> __VA_ARGS__)
>  #define sis(width, name, subs, ...) \
> -xsi(width, name, current->name, subs, __VA_ARGS__)
> +xsi(width, #name, current->name, subs, __VA_ARGS__)
>  
>  #define marker_bit() \
> -bit(marker_bit, 1)
> +bit("marker_bit", 1)
>  #define bit(name, value) do { \
>  av_unused uint32_t bit = value; \
>  xui(1, name, bit, value, value, 0); \
> @@ -65,7 +65,7 @@
>  
>  #define xui(width, name, var, range_min, range_max, subs, ...) do { \
>  uint32_t value; \
> -CHECK(ff_cbs_read_unsigned(ctx, rw, width, #name, \
> +CHECK(ff_cbs_read_unsigned(ctx, rw, width, name, \
> SUBSCRIPTS(subs, __VA_ARGS__), \
> &value, range_min, range_max)); \
>  var = value; \
> @@ -73,7 +73,7 @@
>  
>  #define xsi(width, name, var, subs, ...) do { \
>  int32_t value; \
> -CHECK(ff_cbs_read_signed(ctx, rw, width, #name, \
> +CHECK(ff_cbs_read_signed(ctx, rw, width, name, \
>   SUBSCRIPTS(subs, __VA_ARGS__), &value, \
>   MIN_INT_BITS(width), \
>   MAX_INT_BITS(width))); \
> @@ -104,13 +104,13 @@
>  #define RWContext PutBitContext
>  
>  #define xui(width, name, var, range_min, range_max, subs, ...) do { \
> -CHECK(ff_cbs_write_unsigned(ctx, rw, width, #name, \
> +CHECK(ff_cbs_write_unsigned(ctx, rw, width, name, \
>  SUBSCRIPTS(subs, __VA_ARGS__), \
>  var, range_min, range_max)); \
>  } while (0)
>  
>  #define xsi(width, name, var, subs, ...) do { \
> -CHECK(ff_cbs_write_signed(ctx, rw, width, #name, \
> +CHECK(ff_cbs_write_signed(ctx, rw, width, name, \
>SUBSCRIPTS(subs, __VA_ARGS__), var, \
>MIN_INT_BITS(width), \
>MAX_INT_BITS(width))); \

Calling the inner functions directly in extra_information feels like it would 
be cleaner?  This part makes the intermediate macros for mpeg2 act in a way 
which is subtly different to all the other codecs.

> @@ -138,6 +138,13 @@
>  #undef infer
>  
>  
> +static void cbs_mpeg2_free_picture_header(void *unit, uint8_t *content)
> +{
> +MPEG2RawPictureHeader *picture = (MPEG2RawPictureHeader*)content;
> +av_buffer_unref(&picture->extra_information_picture.extra_information_ref);
> +av_free(content);
> +}
> +
>  static void cbs_mpeg2_free_user_data(void *unit, uint8_t *content)
>  {
>  MPEG2RawUserData *user = (MPEG2RawUserData*)content;
> @@ -148,7 +155,7 @@ static void cbs_mpeg2_free_user_data(void *unit, uint8_t 
> *content)
>  s

Re: [FFmpeg-devel] [PATCH 00/11 v3] cbs (mostly MPEG-2) patches

2019-05-28 Thread Mark Thompson
On 22/05/2019 02:04, Andreas Rheinhardt wrote:
> 
> Andreas Rheinhardt (10):
>   cbs_mpeg2: Correct and use enum values
>   cbs_mpeg2: Improve checks for invalid values
>   cbs_mpeg2: Fix storage type for frame_centre_*_offset
>   cbs_mpeg2: Correct error codes
> 
> James Almer (1):
>   avcodec/cbs_mpeg2: fix leak of extra_information_slice buffer in
> cbs_mpeg2_read_slice_header()

LGTM for 1, 2, 4, 5, 8; applied.

Thanks!

- Mark
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".

Re: [FFmpeg-devel] [PATCH] avcodec: Add librav1e encoder

2019-05-28 Thread James Darnley
On 2019-05-28 22:00, Derek Buitenhuis wrote:
> On 28/05/2019 20:58, James Almer wrote:
>> I think x26* and vpx/aom call it crf? It's not in option_tables.h in any
>> case.
> 
> They do not. This is a constant quantizer mode, not constant rate factor.

IIRC either qp or cqp





Re: [FFmpeg-devel] [PATCH 03/11] mpeg2_metadata, cbs_mpeg2: Fix handling of colour_description

2019-05-28 Thread Andreas Rheinhardt
Mark Thompson:
> On 22/05/2019 02:04, Andreas Rheinhardt wrote:
>> If a sequence display extension is read with colour_description equal to
>> zero, but a user wants to add one or more of the colour_description
>> elements, then the colour_description elements the user did not explicitly
>> request to be set are set to zero and not to the value equal to
>> unknown/unspecified (namely 2). A value of zero is not only inappropriate,
>> but explicitly forbidden. This is fixed by inferring the right default
>> values during the reading process if the elements are absent; moreover,
>> changing any of the colour_description elements to zero is now no longer
>> permitted.
>>
>> Furthermore, if a sequence display extension has to be added, the
>> earlier code set some fields to their default value twice. This has been
>> changed, too.
>>
>> Signed-off-by: Andreas Rheinhardt 
>> ---
>>  libavcodec/cbs_mpeg2.c | 15 +++
>>  libavcodec/cbs_mpeg2_syntax_template.c |  4 
>>  libavcodec/mpeg2_metadata_bsf.c| 18 --
>>  3 files changed, 31 insertions(+), 6 deletions(-)
>>
>> diff --git a/libavcodec/cbs_mpeg2.c b/libavcodec/cbs_mpeg2.c
>> index 1d319e0947..437eac88a3 100644
>> --- a/libavcodec/cbs_mpeg2.c
>> +++ b/libavcodec/cbs_mpeg2.c
>> @@ -71,6 +71,10 @@
>>  (get_bits_left(rw) >= width && \
>>   (var = show_bits(rw, width)) == (compare))
>>  
>> +#define infer(name, value) do { \
>> +current->name = value; \
>> +} while (0)
>> +
>>  #include "cbs_mpeg2_syntax_template.c"
>>  
>>  #undef READ
>> @@ -79,6 +83,7 @@
>>  #undef xui
>>  #undef marker_bit
>>  #undef nextbits
>> +#undef infer
>>  
>>  
>>  #define WRITE
>> @@ -97,6 +102,15 @@
>>  
>>  #define nextbits(width, compare, var) (var)
>>  
>> +#define infer(name, value) do { \
>> +if (current->name != (value)) { \
>> +av_log(ctx->log_ctx, AV_LOG_WARNING, "Warning: " \
>> +   "%s does not match inferred value: " \
>> +   "%"PRId64", but should be %"PRId64".\n", \
>> +   #name, (int64_t)current->name, (int64_t)(value)); \
>> +} \
>> +} while (0)
>> +
>>  #include "cbs_mpeg2_syntax_template.c"
>>  
>>  #undef READ
>> @@ -105,6 +119,7 @@
>>  #undef xui
>>  #undef marker_bit
>>  #undef nextbits
>> +#undef infer
>>  
>>  
>>  static void cbs_mpeg2_free_user_data(void *unit, uint8_t *content)
>> diff --git a/libavcodec/cbs_mpeg2_syntax_template.c 
>> b/libavcodec/cbs_mpeg2_syntax_template.c
>> index b9d53682fe..87db0ad039 100644
>> --- a/libavcodec/cbs_mpeg2_syntax_template.c
>> +++ b/libavcodec/cbs_mpeg2_syntax_template.c
>> @@ -144,6 +144,10 @@ static int 
>> FUNC(sequence_display_extension)(CodedBitstreamContext *ctx, RWContex
>>  uir(8, transfer_characteristics);
>>  uir(8, matrix_coefficients);
>>  #endif
>> +} else {
>> +infer(colour_primaries, 2);
>> +infer(transfer_characteristics, 2);
>> +infer(matrix_coefficients,  2);
>>  }
>>  
>>  ui(14, display_horizontal_size);
>> diff --git a/libavcodec/mpeg2_metadata_bsf.c 
>> b/libavcodec/mpeg2_metadata_bsf.c
>> index ba3a74afda..5aed41a008 100644
>> --- a/libavcodec/mpeg2_metadata_bsf.c
>> +++ b/libavcodec/mpeg2_metadata_bsf.c
>> @@ -147,18 +147,12 @@ static int mpeg2_metadata_update_fragment(AVBSFContext 
>> *bsf,
>>  
>>  if (ctx->colour_primaries >= 0)
>>  sde->colour_primaries = ctx->colour_primaries;
>> -else if (add_sde)
>> -sde->colour_primaries = 2;
>>  
>>  if (ctx->transfer_characteristics >= 0)
>>  sde->transfer_characteristics = 
>> ctx->transfer_characteristics;
>> -else if (add_sde)
>> -sde->transfer_characteristics = 2;
>>  
>>  if (ctx->matrix_coefficients >= 0)
>>  sde->matrix_coefficients = ctx->matrix_coefficients;
>> -else if (add_sde)
>> -sde->matrix_coefficients = 2;
>>  }
>>  }
>>  
>> @@ -229,6 +223,18 @@ static int mpeg2_metadata_init(AVBSFContext *bsf)
>>  CodedBitstreamFragment *frag = &ctx->fragment;
>>  int err;
>>  
>> +#define VALIDITY_CHECK(name) do { \
>> +if (!ctx->name) { \
>> +av_log(bsf, AV_LOG_ERROR, "The value 0 for %s is " \
>> +  "forbidden.\n", #name); \
>> +return AVERROR(EINVAL); \
>> +} \
>> +} while (0)
>> +VALIDITY_CHECK(colour_primaries);
>> +VALIDITY_CHECK(transfer_characteristics);
>> +VALIDITY_CHECK(matrix_coefficients);
>> +#undef VALIDITY_CHECK
> 
> Perhaps we could use the normal option checking to enforce this?  Suppose we 
> change the range from [-1, 255] to [0, 255] and make the do-nothing default 
> value 0.  Then the checks above become ctx->colour_primaries > 0 and this 
> separate test is not needed.
> 
There are two reasons why I didn't do this:
1. A user with only a back

[FFmpeg-devel] [PATCH v4 2/2] fftools/ffmpeg: add exif orientation support per frame's metadata

2019-05-28 Thread Jun Li
Fix #6945
Rotate and/or flip the frame according to the frame's metadata orientation
---
 fftools/ffmpeg.c| 16 +++-
 fftools/ffmpeg.h|  3 ++-
 fftools/ffmpeg_filter.c | 28 +++-
 3 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index 01f04103cf..2f4229a9d0 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -2126,6 +2126,19 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
 return 1;
 }
 
+static int orientation_need_update(InputFilter *ifilter, AVFrame *frame)
+{
+int orientation = get_frame_orientation(frame);
+int filterst = ifilter->orientation <= 1 ? 0 : // not set
+   ifilter->orientation <= 4 ? 1 : // auto flip
+   2; // auto transpose
+int framest = orientation <= 1 ? 0 : // not set
+  orientation <= 4 ? 1 : // auto flip
+  2; // auto transpose
+
+return filterst != framest;
+}
+
 static int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame)
 {
 FilterGraph *fg = ifilter->graph;
@@ -2142,7 +2155,8 @@ static int ifilter_send_frame(InputFilter *ifilter, 
AVFrame *frame)
 break;
 case AVMEDIA_TYPE_VIDEO:
 need_reinit |= ifilter->width  != frame->width ||
-   ifilter->height != frame->height;
+   ifilter->height != frame->height ||
+   orientation_need_update(ifilter, frame);
 break;
 }
 
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index eb1eaf6363..54532ef0eb 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -244,7 +244,7 @@ typedef struct InputFilter {
 // parameters configured for this input
 int format;
 
-int width, height;
+int width, height, orientation;
 AVRational sample_aspect_ratio;
 
 int sample_rate;
@@ -649,6 +649,7 @@ int init_complex_filtergraph(FilterGraph *fg);
 void sub2video_update(InputStream *ist, AVSubtitle *sub);
 
 int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame);
+int get_frame_orientation(const AVFrame* frame);
 
 int ffmpeg_parse_options(int argc, char **argv);
 
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 72838de1e2..1fcadb1871 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -743,6 +743,18 @@ static int sub2video_prepare(InputStream *ist, InputFilter 
*ifilter)
 return 0;
 }
 
+int get_frame_orientation(const AVFrame *frame)
+{
+AVDictionaryEntry *entry = NULL;
+int orientation = 0;
+
+// read exif orientation data
+entry = av_dict_get(frame->metadata, "Orientation", NULL, 0);
+if (entry && entry->value)
+orientation = atoi(entry->value);
+return orientation;
+}
+
 static int configure_input_video_filter(FilterGraph *fg, InputFilter *ifilter,
 AVFilterInOut *in)
 {
@@ -809,13 +821,18 @@ static int configure_input_video_filter(FilterGraph *fg, 
InputFilter *ifilter,
 if (ist->autorotate) {
 double theta = get_rotation(ist->st);
 
-if (fabs(theta - 90) < 1.0) {
+if (fabs(theta) < 1.0) { // no rotation info in stream meta
+if (ifilter->orientation < 0 || ifilter->orientation > 8) {
+av_log(NULL, AV_LOG_ERROR, "Invalid frame orientation: %i\n", ifilter->orientation);
+} else if (ifilter->orientation > 1 && ifilter->orientation <= 4) { // skip 0 (not set) and 1 (orientation 'Normal')
+ret = insert_filter(&last_filter, &pad_idx, "transpose", "orientation=auto_flip");
+} else if (ifilter->orientation > 4) {
+ret = insert_filter(&last_filter, &pad_idx, "transpose", "orientation=auto_transpose");
+}
+} else if (fabs(theta - 90) < 1.0) {
 ret = insert_filter(&last_filter, &pad_idx, "transpose", "clock");
 } else if (fabs(theta - 180) < 1.0) {
-ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL);
-if (ret < 0)
-return ret;
-ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL);
+ret = insert_filter(&last_filter, &pad_idx, "transpose", "orientation=rotate180");
 } else if (fabs(theta - 270) < 1.0) {
 ret = insert_filter(&last_filter, &pad_idx, "transpose", "cclock");
 } else if (fabs(theta) > 1.0) {
@@ -1191,6 +1208,7 @@ int ifilter_parameters_from_frame(InputFilter *ifilter, 
const AVFrame *frame)
 ifilter->width   = frame->width;
 ifilter->height  = frame->height;
 ifilter->sample_aspect_ratio = frame->sample_aspect_ratio;
+ifilter->orientation = get_frame_orientation(frame);
 
 ifilter->sample_rate = frame->sample_rate;
 ifilter->channels= frame->channels;
-- 
2.17.1


Re: [FFmpeg-devel] [PATCH 10/11] cbs_mpeg2: Fix parsing of picture headers

2019-05-28 Thread Andreas Rheinhardt
Mark Thompson:
> On 22/05/2019 02:04, Andreas Rheinhardt wrote:
>> MPEG-2 picture and slice headers can contain optional extra information;
>> both use the same syntax for their extra information. And cbs_mpeg2's
>> implementations of both were buggy until recently; the one for the
>> picture headers still is and this is fixed in this commit.
>>
>> The extra information in picture headers has simply been forgotten.
>> This meant that if this extra information was present, it was discarded
>> during reading; and unfortunately writing created invalid bitstreams in
>> this case (an extra_bit_picture - the last set bit of the whole unit -
>> indicated that there would be a further byte of data, although the output
>> didn't contain said data).
>>
>> This has been fixed; both types of extra information are now
>> parsed via the same code and essentially passed through.
>>
>> Signed-off-by: Andreas Rheinhardt 
>> ---
>>  libavcodec/cbs_mpeg2.c | 31 +++-
>>  libavcodec/cbs_mpeg2.h | 12 +++--
>>  libavcodec/cbs_mpeg2_syntax_template.c | 66 +++---
>>  3 files changed, 66 insertions(+), 43 deletions(-)
>>
>> diff --git a/libavcodec/cbs_mpeg2.c b/libavcodec/cbs_mpeg2.c
>> index 97425aa706..2354f665cd 100644
>> --- a/libavcodec/cbs_mpeg2.c
>> +++ b/libavcodec/cbs_mpeg2.c
>> @@ -41,18 +41,18 @@
>>  #define SUBSCRIPTS(subs, ...) (subs > 0 ? ((int[subs + 1]){ subs, 
>> __VA_ARGS__ }) : NULL)
>>  
>>  #define ui(width, name) \
>> -xui(width, name, current->name, 0, MAX_UINT_BITS(width), 0)
>> +xui(width, #name, current->name, 0, MAX_UINT_BITS(width), 0)
>>  #define uir(width, name) \
>> -xui(width, name, current->name, 1, MAX_UINT_BITS(width), 0)
>> +xui(width, #name, current->name, 1, MAX_UINT_BITS(width), 0)
>>  #define uis(width, name, subs, ...) \
>> -xui(width, name, current->name, 0, MAX_UINT_BITS(width), subs, 
>> __VA_ARGS__)
>> +xui(width, #name, current->name, 0, MAX_UINT_BITS(width), subs, 
>> __VA_ARGS__)
>>  #define uirs(width, name, subs, ...) \
>> -xui(width, name, current->name, 1, MAX_UINT_BITS(width), subs, 
>> __VA_ARGS__)
>> +xui(width, #name, current->name, 1, MAX_UINT_BITS(width), subs, 
>> __VA_ARGS__)
>>  #define sis(width, name, subs, ...) \
>> -xsi(width, name, current->name, subs, __VA_ARGS__)
>> +xsi(width, #name, current->name, subs, __VA_ARGS__)
>>  
>>  #define marker_bit() \
>> -bit(marker_bit, 1)
>> +bit("marker_bit", 1)
>>  #define bit(name, value) do { \
>>  av_unused uint32_t bit = value; \
>>  xui(1, name, bit, value, value, 0); \
>> @@ -65,7 +65,7 @@
>>  
>>  #define xui(width, name, var, range_min, range_max, subs, ...) do { \
>>  uint32_t value; \
>> -CHECK(ff_cbs_read_unsigned(ctx, rw, width, #name, \
>> +CHECK(ff_cbs_read_unsigned(ctx, rw, width, name, \
>> SUBSCRIPTS(subs, __VA_ARGS__), \
>> &value, range_min, range_max)); \
>>  var = value; \
>> @@ -73,7 +73,7 @@
>>  
>>  #define xsi(width, name, var, subs, ...) do { \
>>  int32_t value; \
>> -CHECK(ff_cbs_read_signed(ctx, rw, width, #name, \
>> +CHECK(ff_cbs_read_signed(ctx, rw, width, name, \
>>   SUBSCRIPTS(subs, __VA_ARGS__), &value, \
>>   MIN_INT_BITS(width), \
>>   MAX_INT_BITS(width))); \
>> @@ -104,13 +104,13 @@
>>  #define RWContext PutBitContext
>>  
>>  #define xui(width, name, var, range_min, range_max, subs, ...) do { \
>> -CHECK(ff_cbs_write_unsigned(ctx, rw, width, #name, \
>> +CHECK(ff_cbs_write_unsigned(ctx, rw, width, name, \
>>  SUBSCRIPTS(subs, __VA_ARGS__), \
>>  var, range_min, range_max)); \
>>  } while (0)
>>  
>>  #define xsi(width, name, var, subs, ...) do { \
>> -CHECK(ff_cbs_write_signed(ctx, rw, width, #name, \
>> +CHECK(ff_cbs_write_signed(ctx, rw, width, name, \
>>SUBSCRIPTS(subs, __VA_ARGS__), var, \
>>MIN_INT_BITS(width), \
>>MAX_INT_BITS(width))); \
> 
> Calling the inner functions directly in extra_information feels like it would 
> be cleaner?  This part makes the intermediate macros for mpeg2 act in a way 
> which is subtly different to all the other codecs.
> 
Agreed. The rationale for doing it the way I did is of course that there
turned out to be exactly one call to xui in the mpeg2 syntax template.
Or maybe one should add a new macro that actually calls the inner
functions directly and is used by xui? If we hadn't used the 's' for
subscripts, xuis would be the obvious choice for this.
>> @@ -138,6 +138,13 @@
>>  #undef infer
>>  
>>  
>> +static void cbs_mpeg2_free_picture_h

[FFmpeg-devel] [PATCH v4 1/2] lavf/vf_transpose: add exif orientation support

2019-05-28 Thread Jun Li
Add exif orientation support and expose an option.
---
 libavfilter/hflip.h|   2 +
 libavfilter/transpose.h|  14 
 libavfilter/vf_hflip.c |  41 ++---
 libavfilter/vf_transpose.c | 166 -
 4 files changed, 189 insertions(+), 34 deletions(-)

diff --git a/libavfilter/hflip.h b/libavfilter/hflip.h
index 204090dbb4..4e89bae3fc 100644
--- a/libavfilter/hflip.h
+++ b/libavfilter/hflip.h
@@ -35,5 +35,7 @@ typedef struct FlipContext {
 
 int ff_hflip_init(FlipContext *s, int step[4], int nb_planes);
 void ff_hflip_init_x86(FlipContext *s, int step[4], int nb_planes);
+int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink);
+int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int job, int nb_jobs, int vflip);
 
 #endif /* AVFILTER_HFLIP_H */
diff --git a/libavfilter/transpose.h b/libavfilter/transpose.h
index aa262b9487..5da08bddc0 100644
--- a/libavfilter/transpose.h
+++ b/libavfilter/transpose.h
@@ -34,4 +34,18 @@ enum TransposeDir {
 TRANSPOSE_VFLIP,
 };
 
+enum OrientationType {
+ORIENTATION_AUTO_TRANSPOSE = -2,
+ORIENTATION_AUTO_FLIP = -1,
+ORIENTATION_NONE = 0,
+ORIENTATION_NORMAL,
+ORIENTATION_HFLIP,
+ORIENTATION_ROTATE180,
+ORIENTATION_VFLIP,
+ORIENTATION_HFLIP_ROTATE270CW,
+ORIENTATION_ROTATE90CW,
+ORIENTATION_HFLIP_ROTATE90CW,
+ORIENTATION_ROTATE270CW
+};
+
 #endif
diff --git a/libavfilter/vf_hflip.c b/libavfilter/vf_hflip.c
index b77afc77fc..97161000fd 100644
--- a/libavfilter/vf_hflip.c
+++ b/libavfilter/vf_hflip.c
@@ -37,6 +37,7 @@
 #include "libavutil/intreadwrite.h"
 #include "libavutil/imgutils.h"
 
+
 static const AVOption hflip_options[] = {
 { NULL }
 };
@@ -125,9 +126,8 @@ static void hflip_qword_c(const uint8_t *ssrc, uint8_t 
*ddst, int w)
 dst[j] = src[-j];
 }
 
-static int config_props(AVFilterLink *inlink)
+int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink)
 {
-FlipContext *s = inlink->dst->priv;
 const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(inlink->format);
 const int hsub = pix_desc->log2_chroma_w;
 const int vsub = pix_desc->log2_chroma_h;
@@ -144,6 +144,12 @@ static int config_props(AVFilterLink *inlink)
 return ff_hflip_init(s, s->max_step, nb_planes);
 }
 
+static int config_props(AVFilterLink *inlink)
+{
+FlipContext *s = inlink->dst->priv;
+return ff_hflip_config_props(s, inlink);
+}
+
 int ff_hflip_init(FlipContext *s, int step[4], int nb_planes)
 {
 int i;
@@ -170,14 +176,10 @@ typedef struct ThreadData {
 AVFrame *in, *out;
 } ThreadData;
 
-static int filter_slice(AVFilterContext *ctx, void *arg, int job, int nb_jobs)
+int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int job, int nb_jobs, int vflip)
 {
-FlipContext *s = ctx->priv;
-ThreadData *td = arg;
-AVFrame *in = td->in;
-AVFrame *out = td->out;
 uint8_t *inrow, *outrow;
-int i, plane, step;
+int i, plane, step, outlinesize;
 
 for (plane = 0; plane < 4 && in->data[plane] && in->linesize[plane]; 
plane++) {
 const int width  = s->planewidth[plane];
@@ -187,19 +189,36 @@ static int filter_slice(AVFilterContext *ctx, void *arg, 
int job, int nb_jobs)
 
 step = s->max_step[plane];
 
-outrow = out->data[plane] + start * out->linesize[plane];
-inrow  = in ->data[plane] + start * in->linesize[plane] + (width - 1) 
* step;
+if (vflip) {
+outrow = out->data[plane] + (height - start - 1) * out->linesize[plane];
+outlinesize = -out->linesize[plane];
+} else {
+outrow = out->data[plane] + start * out->linesize[plane];
+outlinesize = out->linesize[plane];
+}
+
+inrow = in->data[plane] + start * in->linesize[plane] + (width - 1) * step;
+
 for (i = start; i < end; i++) {
 s->flip_line[plane](inrow, outrow, width);
 
 inrow  += in ->linesize[plane];
-outrow += out->linesize[plane];
+outrow += outlinesize;
 }
 }
 
 return 0;
 }
 
+static int filter_slice(AVFilterContext *ctx, void *arg, int job, int nb_jobs)
+{
+FlipContext *s = ctx->priv;
+ThreadData *td = arg;
+AVFrame *in = td->in;
+AVFrame *out = td->out;
+return ff_hflip_filter_slice(s, in, out, job, nb_jobs, 0);
+}
+
 static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 {
 AVFilterContext *ctx  = inlink->dst;
diff --git a/libavfilter/vf_transpose.c b/libavfilter/vf_transpose.c
index dd54947bd9..d3e6a8eea1 100644
--- a/libavfilter/vf_transpose.c
+++ b/libavfilter/vf_transpose.c
@@ -39,6 +39,7 @@
 #include "internal.h"
 #include "video.h"
 #include "transpose.h"
+#include "hflip.h"
 
 typedef struct TransVtable {
 void (*transpose_8x8)(uint8_t *src, ptrdiff_t src_linesize,
@@ -48,16 +49,24 @@ typedef struct TransVtable {
 int w, int h);
 } TransVtable;
 
-typedef struct TransContext {

[FFmpeg-devel] [PATCH v5 2/2] fftools/ffmpeg: add exif orientation support per frame's metadata

2019-05-28 Thread Jun Li
Fix #6945
Rotate and/or flip the frame according to the frame's metadata orientation
---
 fftools/ffmpeg.c| 16 +++-
 fftools/ffmpeg.h|  3 ++-
 fftools/ffmpeg_filter.c | 28 +++-
 3 files changed, 40 insertions(+), 7 deletions(-)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index 01f04103cf..2f4229a9d0 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -2126,6 +2126,19 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
 return 1;
 }
 
+static int orientation_need_update(InputFilter *ifilter, AVFrame *frame)
+{
+int orientation = get_frame_orientation(frame);
+int filterst = ifilter->orientation <= 1 ? 0 : // not set
+   ifilter->orientation <= 4 ? 1 : // auto flip
+   2; // auto transpose
+int framest = orientation <= 1 ? 0 : // not set
+  orientation <= 4 ? 1 : // auto flip
+  2; // auto transpose
+
+return filterst != framest;
+}
+
 static int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame)
 {
 FilterGraph *fg = ifilter->graph;
@@ -2142,7 +2155,8 @@ static int ifilter_send_frame(InputFilter *ifilter, 
AVFrame *frame)
 break;
 case AVMEDIA_TYPE_VIDEO:
 need_reinit |= ifilter->width  != frame->width ||
-   ifilter->height != frame->height;
+   ifilter->height != frame->height ||
+   orientation_need_update(ifilter, frame);
 break;
 }
 
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index eb1eaf6363..54532ef0eb 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -244,7 +244,7 @@ typedef struct InputFilter {
 // parameters configured for this input
 int format;
 
-int width, height;
+int width, height, orientation;
 AVRational sample_aspect_ratio;
 
 int sample_rate;
@@ -649,6 +649,7 @@ int init_complex_filtergraph(FilterGraph *fg);
 void sub2video_update(InputStream *ist, AVSubtitle *sub);
 
 int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame);
+int get_frame_orientation(const AVFrame* frame);
 
 int ffmpeg_parse_options(int argc, char **argv);
 
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 72838de1e2..1fcadb1871 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -743,6 +743,18 @@ static int sub2video_prepare(InputStream *ist, InputFilter 
*ifilter)
 return 0;
 }
 
+int get_frame_orientation(const AVFrame *frame)
+{
+AVDictionaryEntry *entry = NULL;
+int orientation = 0;
+
+// read exif orientation data
+entry = av_dict_get(frame->metadata, "Orientation", NULL, 0);
+if (entry && entry->value)
+orientation = atoi(entry->value);
+return orientation;
+}
+
 static int configure_input_video_filter(FilterGraph *fg, InputFilter *ifilter,
 AVFilterInOut *in)
 {
@@ -809,13 +821,18 @@ static int configure_input_video_filter(FilterGraph *fg, 
InputFilter *ifilter,
 if (ist->autorotate) {
 double theta = get_rotation(ist->st);
 
-if (fabs(theta - 90) < 1.0) {
+if (fabs(theta) < 1.0) { // no rotation info in stream meta
+if (ifilter->orientation < 0 || ifilter->orientation > 8) {
+av_log(NULL, AV_LOG_ERROR, "Invalid frame orientation: %i\n", ifilter->orientation);
+} else if (ifilter->orientation > 1 && ifilter->orientation <= 4) { // skip 0 (not set) and 1 (orientation 'Normal')
+ret = insert_filter(&last_filter, &pad_idx, "transpose", "orientation=auto_flip");
+} else if (ifilter->orientation > 4) {
+ret = insert_filter(&last_filter, &pad_idx, "transpose", "orientation=auto_transpose");
+}
+} else if (fabs(theta - 90) < 1.0) {
 ret = insert_filter(&last_filter, &pad_idx, "transpose", "clock");
 } else if (fabs(theta - 180) < 1.0) {
-ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL);
-if (ret < 0)
-return ret;
-ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL);
+ret = insert_filter(&last_filter, &pad_idx, "transpose", "orientation=rotate180");
 } else if (fabs(theta - 270) < 1.0) {
 ret = insert_filter(&last_filter, &pad_idx, "transpose", "cclock");
 } else if (fabs(theta) > 1.0) {
@@ -1191,6 +1208,7 @@ int ifilter_parameters_from_frame(InputFilter *ifilter, 
const AVFrame *frame)
 ifilter->width   = frame->width;
 ifilter->height  = frame->height;
 ifilter->sample_aspect_ratio = frame->sample_aspect_ratio;
+ifilter->orientation = get_frame_orientation(frame);
 
 ifilter->sample_rate = frame->sample_rate;
 ifilter->channels= frame->channels;
-- 
2.17.1


[FFmpeg-devel] [PATCH v5 1/2] lavf/vf_transpose: add exif orientation support

2019-05-28 Thread Jun Li
Add exif orientation support and expose an option.
---
 libavfilter/hflip.h|   2 +
 libavfilter/transpose.h|  14 
 libavfilter/vf_hflip.c |  41 ++---
 libavfilter/vf_transpose.c | 166 -
 4 files changed, 189 insertions(+), 34 deletions(-)

diff --git a/libavfilter/hflip.h b/libavfilter/hflip.h
index 204090dbb4..4e89bae3fc 100644
--- a/libavfilter/hflip.h
+++ b/libavfilter/hflip.h
@@ -35,5 +35,7 @@ typedef struct FlipContext {
 
 int ff_hflip_init(FlipContext *s, int step[4], int nb_planes);
 void ff_hflip_init_x86(FlipContext *s, int step[4], int nb_planes);
+int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink);
+int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int job, int nb_jobs, int vflip);
 
 #endif /* AVFILTER_HFLIP_H */
diff --git a/libavfilter/transpose.h b/libavfilter/transpose.h
index aa262b9487..5da08bddc0 100644
--- a/libavfilter/transpose.h
+++ b/libavfilter/transpose.h
@@ -34,4 +34,18 @@ enum TransposeDir {
 TRANSPOSE_VFLIP,
 };
 
+enum OrientationType {
+ORIENTATION_AUTO_TRANSPOSE = -2,
+ORIENTATION_AUTO_FLIP = -1,
+ORIENTATION_NONE = 0,
+ORIENTATION_NORMAL,
+ORIENTATION_HFLIP,
+ORIENTATION_ROTATE180,
+ORIENTATION_VFLIP,
+ORIENTATION_HFLIP_ROTATE270CW,
+ORIENTATION_ROTATE90CW,
+ORIENTATION_HFLIP_ROTATE90CW,
+ORIENTATION_ROTATE270CW
+};
+
 #endif
diff --git a/libavfilter/vf_hflip.c b/libavfilter/vf_hflip.c
index b77afc77fc..97161000fd 100644
--- a/libavfilter/vf_hflip.c
+++ b/libavfilter/vf_hflip.c
@@ -37,6 +37,7 @@
 #include "libavutil/intreadwrite.h"
 #include "libavutil/imgutils.h"
 
+
 static const AVOption hflip_options[] = {
 { NULL }
 };
@@ -125,9 +126,8 @@ static void hflip_qword_c(const uint8_t *ssrc, uint8_t 
*ddst, int w)
 dst[j] = src[-j];
 }
 
-static int config_props(AVFilterLink *inlink)
+int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink)
 {
-FlipContext *s = inlink->dst->priv;
 const AVPixFmtDescriptor *pix_desc = av_pix_fmt_desc_get(inlink->format);
 const int hsub = pix_desc->log2_chroma_w;
 const int vsub = pix_desc->log2_chroma_h;
@@ -144,6 +144,12 @@ static int config_props(AVFilterLink *inlink)
 return ff_hflip_init(s, s->max_step, nb_planes);
 }
 
+static int config_props(AVFilterLink *inlink)
+{
+FlipContext *s = inlink->dst->priv;
+return ff_hflip_config_props(s, inlink);
+}
+
 int ff_hflip_init(FlipContext *s, int step[4], int nb_planes)
 {
 int i;
@@ -170,14 +176,10 @@ typedef struct ThreadData {
 AVFrame *in, *out;
 } ThreadData;
 
-static int filter_slice(AVFilterContext *ctx, void *arg, int job, int nb_jobs)
+int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int job, int nb_jobs, int vflip)
 {
-FlipContext *s = ctx->priv;
-ThreadData *td = arg;
-AVFrame *in = td->in;
-AVFrame *out = td->out;
 uint8_t *inrow, *outrow;
-int i, plane, step;
+int i, plane, step, outlinesize;
 
 for (plane = 0; plane < 4 && in->data[plane] && in->linesize[plane]; 
plane++) {
 const int width  = s->planewidth[plane];
@@ -187,19 +189,36 @@ static int filter_slice(AVFilterContext *ctx, void *arg, 
int job, int nb_jobs)
 
 step = s->max_step[plane];
 
-outrow = out->data[plane] + start * out->linesize[plane];
-inrow  = in ->data[plane] + start * in->linesize[plane] + (width - 1) 
* step;
+if (vflip) {
+outrow = out->data[plane] + (height - start - 1) * out->linesize[plane];
+outlinesize = -out->linesize[plane];
+} else {
+outrow = out->data[plane] + start * out->linesize[plane];
+outlinesize = out->linesize[plane];
+}
+
+inrow = in->data[plane] + start * in->linesize[plane] + (width - 1) * step;
+
 for (i = start; i < end; i++) {
 s->flip_line[plane](inrow, outrow, width);
 
 inrow  += in ->linesize[plane];
-outrow += out->linesize[plane];
+outrow += outlinesize;
 }
 }
 
 return 0;
 }
 
+static int filter_slice(AVFilterContext *ctx, void *arg, int job, int nb_jobs)
+{
+FlipContext *s = ctx->priv;
+ThreadData *td = arg;
+AVFrame *in = td->in;
+AVFrame *out = td->out;
+return ff_hflip_filter_slice(s, in, out, job, nb_jobs, 0);
+}
+
 static int filter_frame(AVFilterLink *inlink, AVFrame *in)
 {
 AVFilterContext *ctx  = inlink->dst;
diff --git a/libavfilter/vf_transpose.c b/libavfilter/vf_transpose.c
index dd54947bd9..5ff86536ad 100644
--- a/libavfilter/vf_transpose.c
+++ b/libavfilter/vf_transpose.c
@@ -39,6 +39,7 @@
 #include "internal.h"
 #include "video.h"
 #include "transpose.h"
+#include "hflip.h"
 
 typedef struct TransVtable {
 void (*transpose_8x8)(uint8_t *src, ptrdiff_t src_linesize,
@@ -48,16 +49,24 @@ typedef struct TransVtable {
 int w, int h);
 } TransVtable;
 
-typedef struct TransContext {

Re: [FFmpeg-devel] [PATCH v3 1/2] lavf/vf_transpose: add exif orientation support

2019-05-28 Thread Jun Li
On Tue, May 28, 2019 at 7:21 AM Paul B Mahol  wrote:

> On 5/28/19, Jun Li  wrote:
> > Add exif orientation support and expose an option.
> > ---
> >  libavfilter/hflip.h|   3 +
> >  libavfilter/vf_hflip.c |  43 ++---
> >  libavfilter/vf_transpose.c | 173 +
> >  3 files changed, 171 insertions(+), 48 deletions(-)
> >
> > diff --git a/libavfilter/hflip.h b/libavfilter/hflip.h
> > index 204090dbb4..3d837f01b0 100644
> > --- a/libavfilter/hflip.h
> > +++ b/libavfilter/hflip.h
> > @@ -35,5 +35,8 @@ typedef struct FlipContext {
> >
> >  int ff_hflip_init(FlipContext *s, int step[4], int nb_planes);
> >  void ff_hflip_init_x86(FlipContext *s, int step[4], int nb_planes);
> > +int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink);
> > +int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int job, int nb_jobs, int vflip);
> > +
> >
> >  #endif /* AVFILTER_HFLIP_H */
> > diff --git a/libavfilter/vf_hflip.c b/libavfilter/vf_hflip.c
> > index b77afc77fc..84fd8975b1 100644
> > --- a/libavfilter/vf_hflip.c
> > +++ b/libavfilter/vf_hflip.c
> > @@ -37,6 +37,7 @@
> >  #include "libavutil/intreadwrite.h"
> >  #include "libavutil/imgutils.h"
> >
> > +
> >  static const AVOption hflip_options[] = {
> >  { NULL }
> >  };
> > @@ -125,9 +126,8 @@ static void hflip_qword_c(const uint8_t *ssrc,
> uint8_t
> > *ddst, int w)
> >  dst[j] = src[-j];
> >  }
> >
> > -static int config_props(AVFilterLink *inlink)
> > +int ff_hflip_config_props(FlipContext* s, AVFilterLink *inlink)
> >  {
> > -FlipContext *s = inlink->dst->priv;
> >  const AVPixFmtDescriptor *pix_desc =
> > av_pix_fmt_desc_get(inlink->format);
> >  const int hsub = pix_desc->log2_chroma_w;
> >  const int vsub = pix_desc->log2_chroma_h;
> > @@ -140,10 +140,15 @@ static int config_props(AVFilterLink *inlink)
> >  s->planeheight[1] = s->planeheight[2] = AV_CEIL_RSHIFT(inlink->h,
> > vsub);
> >
> >  nb_planes = av_pix_fmt_count_planes(inlink->format);
> > -
> >  return ff_hflip_init(s, s->max_step, nb_planes);
> >  }
> >
> > +static int config_props(AVFilterLink *inlink)
> > +{
> > +FlipContext *s = inlink->dst->priv;
> > +return ff_hflip_config_props(s, inlink);
> > +}
> > +
> >  int ff_hflip_init(FlipContext *s, int step[4], int nb_planes)
> >  {
> >  int i;
> > @@ -170,14 +175,10 @@ typedef struct ThreadData {
> >  AVFrame *in, *out;
> >  } ThreadData;
> >
> > -static int filter_slice(AVFilterContext *ctx, void *arg, int job, int
> > nb_jobs)
> > +int ff_hflip_filter_slice(FlipContext *s, AVFrame *in, AVFrame *out, int
> > job, int nb_jobs, int vflip)
> >  {
> > -FlipContext *s = ctx->priv;
> > -ThreadData *td = arg;
> > -AVFrame *in = td->in;
> > -AVFrame *out = td->out;
> >  uint8_t *inrow, *outrow;
> > -int i, plane, step;
> > +int i, plane, step, outlinesize;
> >
> >  for (plane = 0; plane < 4 && in->data[plane] && in->linesize[plane];
> > plane++) {
> >  const int width  = s->planewidth[plane];
> > @@ -187,19 +188,35 @@ static int filter_slice(AVFilterContext *ctx, void
> > *arg, int job, int nb_jobs)
> >
> >  step = s->max_step[plane];
> >
> > -outrow = out->data[plane] + start * out->linesize[plane];
> > -inrow  = in ->data[plane] + start * in->linesize[plane] +
> (width -
> > 1) * step;
> > +if (vflip) {
> > +outrow = out->data[plane] + (height - start - 1)*
> > out->linesize[plane];
> > +outlinesize = -out->linesize[plane];
> > +} else {
> > +outrow = out->data[plane] + start * out->linesize[plane];
> > +outlinesize = out->linesize[plane];
> > +}
> > +
> > +inrow = in->data[plane] + start * in->linesize[plane] +  (width
> -
> > 1) * step;
> > +
> >  for (i = start; i < end; i++) {
> >  s->flip_line[plane](inrow, outrow, width);
> >
> >  inrow  += in ->linesize[plane];
> > -outrow += out->linesize[plane];
> > +outrow += outlinesize;
> >  }
> >  }
> > -
> >  return 0;
> >  }
> >
> > +static int filter_slice(AVFilterContext *ctx, void *arg, int job, int
> > nb_jobs)
> > +{
> > +FlipContext *s = ctx->priv;
> > +ThreadData *td = arg;
> > +AVFrame *in = td->in;
> > +AVFrame *out = td->out;
> > +return ff_hflip_filter_slice(s, in, out, job, nb_jobs, 0);
> > +}
> > +
> >  static int filter_frame(AVFilterLink *inlink, AVFrame *in)
> >  {
> >  AVFilterContext *ctx  = inlink->dst;
> > diff --git a/libavfilter/vf_transpose.c b/libavfilter/vf_transpose.c
> > index dd54947bd9..9cc07a1e6f 100644
> > --- a/libavfilter/vf_transpose.c
> > +++ b/libavfilter/vf_transpose.c
> > @@ -39,6 +39,7 @@
> >  #include "internal.h"
> >  #include "video.h"
> >  #include "transpose.h"
> > +#include "hflip.h"
> >
> >  typedef struct TransVtable {
> >  void (*transpose_8x8)(uint8_t *src, ptrdiff_t src_linesize,
> > @@ -48,1

Re: [FFmpeg-devel] [PATCH v3 2/2] fftools/ffmpeg: add exif orientation support per frame's metadata

2019-05-28 Thread Jun Li
On Tue, May 28, 2019 at 1:50 PM Michael Niedermayer 
wrote:

> On Mon, May 27, 2019 at 11:18:26PM -0700, Jun Li wrote:
> > Fix #6945
> > Rotate or/and flip frame according to frame's metadata orientation
> > ---
> >  fftools/ffmpeg.c| 16 +++-
> >  fftools/ffmpeg.h|  3 ++-
> >  fftools/ffmpeg_filter.c | 28 +++-
> >  3 files changed, 40 insertions(+), 7 deletions(-)
> >
> > diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
> > index 01f04103cf..2f4229a9d0 100644
> > --- a/fftools/ffmpeg.c
> > +++ b/fftools/ffmpeg.c
> > @@ -2126,6 +2126,19 @@ static int
> ifilter_has_all_input_formats(FilterGraph *fg)
> >  return 1;
> >  }
> >
> > +static int orientation_need_update(InputFilter *ifilter, AVFrame *frame)
> > +{
> > +int orientation = get_frame_orientation(frame);
> > +int filterst = ifilter->orientation <= 1 ? 0 : // not set
> > +   ifilter->orientation <= 4 ? 1 : // auto flip
> > +   2; // auto transpose
> > +int framest = orientation <= 1 ? 0 : // not set
> > +  orientation <= 4 ? 1 : // auto flip
> > +  2; // auto transpose
> > +
> > +return filterst != framest;
> > +}
> > +
> >  static int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame)
> >  {
> >  FilterGraph *fg = ifilter->graph;
> > @@ -2142,7 +2155,8 @@ static int ifilter_send_frame(InputFilter
> *ifilter, AVFrame *frame)
> >  break;
> >  case AVMEDIA_TYPE_VIDEO:
> >  need_reinit |= ifilter->width  != frame->width ||
> > -   ifilter->height != frame->height;
> > +   ifilter->height != frame->height ||
> > +   orientation_need_update(ifilter, frame);
> >  break;
> >  }
> >
> > diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
> > index eb1eaf6363..54532ef0eb 100644
> > --- a/fftools/ffmpeg.h
> > +++ b/fftools/ffmpeg.h
> > @@ -244,7 +244,7 @@ typedef struct InputFilter {
> >  // parameters configured for this input
> >  int format;
> >
> > -int width, height;
> > +int width, height, orientation;
> >  AVRational sample_aspect_ratio;
> >
> >  int sample_rate;
> > @@ -649,6 +649,7 @@ int init_complex_filtergraph(FilterGraph *fg);
> >  void sub2video_update(InputStream *ist, AVSubtitle *sub);
> >
> >  int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame
> *frame);
> > +int get_frame_orientation(const AVFrame* frame);
> >
> >  int ffmpeg_parse_options(int argc, char **argv);
> >
> > diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
> > index 72838de1e2..ff63540906 100644
> > --- a/fftools/ffmpeg_filter.c
> > +++ b/fftools/ffmpeg_filter.c
> > @@ -743,6 +743,18 @@ static int sub2video_prepare(InputStream *ist,
> InputFilter *ifilter)
> >  return 0;
> >  }
> >
> > +int get_frame_orientation(const AVFrame *frame)
> > +{
> > +AVDictionaryEntry *entry = NULL;
> > +int orientation = 0;
> > +
> > +// read exif orientation data
> > +entry = av_dict_get(frame->metadata, "Orientation", NULL, 0);
>
> > +if (entry)
> > +orientation = atoi(entry->value);
>
> this probably should be checking the validity of the string unless
> it has been checked already elsewhere


Thanks Michael for the catch. I updated to version 5:
https://patchwork.ffmpeg.org/patch/13321/
https://patchwork.ffmpeg.org/patch/13322/

Best Regards,
Jun


> thx
>
> [...]
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> I have never wished to cater to the crowd; for what I know they do not
> approve, and what they approve I do not know. -- Epicurus
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".