[FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread Zhao Zhili
From: Zhao Zhili 

Streams with all zero sample_delta in 'stts' have all zero dts.
They have a higher chance of being chosen by mov_find_next_sample(),
which leads to seeking again and again.

For example, GoPro created a 'GoPro SOS' stream:
  Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
Metadata:
  creation_time   : 2022-06-21T08:49:19.00Z
  handler_name: GoPro SOS

With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
blocks until all samples in 'GoPro SOS' stream are consumed first.

Signed-off-by: Zhao Zhili 
---
 libavformat/mov.c | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index 88669faa70..2a4eb79f27 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
 st->nb_frames= total_sample_count;
 if (duration)
 st->duration= FFMIN(st->duration, duration);
+
+// All samples have zero duration. They have a higher chance of being
+// chosen by mov_find_next_sample, which leads to seeking again and again.
+//
+// It's AVERROR_INVALIDDATA actually, but such files exist in the wild.
+// So only mark the data stream as discarded, for safety.
+if (!duration && sc->stts_count &&
+st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
+av_log(c->fc, AV_LOG_WARNING,
+   "All samples in data stream index:id [%d:%d] have zero duration, "
+   "discard the stream\n",
+   st->index, st->id);
+st->discard = AVDISCARD_ALL;
+}
 sc->track_end = duration;
 return 0;
 }
-- 
2.35.3

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] fftools: Fix preset search paths (regression since 13350e81fd)

2022-07-05 Thread Nicolas Gaullier
>Envoyé : jeudi 30 juin 2022 12:41
>Objet : [PATCH] fftools: Fix preset search paths (regression since 13350e81fd)
>
>Fix looking for the .ffmpeg subfolder in FFMPEG_DATADIR, and conversely not in HOME.
>Fix the search order (documentation).

It seems I have not received any feedback on this simple fix? Please tell me
if there is anything I should amend in the comments or anything else.
Thanks.
Nicolas


Re: [FFmpeg-devel] [PATCH] avdevice/lavfi: output wrapped AVFrames

2022-07-05 Thread Timo Rothenpieler

On 02.07.2022 10:54, Michael Niedermayer wrote:

seems to get this stuck:
./ffmpeg -f lavfi -i 'amovie=fate-suite/wavpack/num_channels/eva_2.22_6.1_16bit-partial.wv,asplit=3[out1][a][b]; [a]showwaves=s=340x240,pad=iw:ih*2[waves]; [b]showspectrum=s=340x240[spectrum]; [waves][spectrum] overlay=0:h [out0]' file-waves.avi

(stuck as in OOM killed)


Yeah, same thing is happening here. Very odd for sure.
Seems like it's just buffering an endless amount of frames, without ever 
starting to read and output them.

No idea why (yet)


Re: [FFmpeg-devel] [PATCH 2/3] lavc: add standalone cached bitstream reader

2022-07-05 Thread Anton Khirnov
Quoting Andreas Rheinhardt (2022-07-03 15:16:39)
> Anton Khirnov:
> > +
> > +#include "mathops.h"
> 
> What exactly is mathops.h for? get_bits.h uses it for NEG_USR32 and
> NEG_SSR32, but you are not using it here.

sign_extend()?

> > +/**
> > + * Skip bits to a byte boundary.
> > + */
> > +static inline const uint8_t *bits_align(BitstreamContext *bc)
> > +{
> > +unsigned int n = -bits_tell(bc) & 7;
> > +if (n)
> > +bits_skip(bc, n);
> 
> Is there a reason that I don't see that makes you not simply use
> bc->bits_left &= ~0xff?

I don't see how that is supposed to work.

> 
> > +return bc->buffer + (bits_tell(bc) >> 3);
> > +}
> > +
> > +/**
> > + * Read MPEG-1 dc-style VLC (sign bit + mantissa with no MSB).
> > + * If MSB not set it is negative.
> > + * @param n length in bits
> > + */
> > +static inline int bits_read_xbits(BitstreamContext *bc, unsigned int n)
> > +{
> > +int32_t cache = bits_peek(bc, 32);
> > +int sign = ~cache >> 31;
> > +bits_priv_skip_remaining(bc, n);
> 
> FYI: You are potentially skipping more bits here than you have.

Yes, you made that clear in your fake-caching patch. Do you require that
be fixed now? Or can this go in, then we apply some form of your patch?

> > +/* Read sign bit and flip the sign of the provided value accordingly. */
> > +static inline int bits_apply_sign(BitstreamContext *bc, int val)
> > +{
> > +int sign = bits_read_signed(bc, 1);
> 
> Is there a reason you are not using bits_read_bit here?

I want a signed result.

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH v3 1/2] avformat/cafdec: Implement FLAC-in-CAF parsing

2022-07-05 Thread Martijn van Beurden
On Sat, 11 Jun 2022 at 09:31, Martijn van Beurden wrote:

> The afconvert utility shipped with MacOS supports muxing of FLAC
> in CAF, see afconvert help output on a recent Mac here:
> https://hydrogenaud.io/index.php?topic=122509.0 A file created
> with afconvert free of copyright (licensed CC0) can be found here:
> http://www.audiograaf.nl/misc_stuff/afconvert-FLAC-in-CAF.caf
>
> This patch implements parsing of such a file
> ---
>  libavformat/caf.c|  1 +
>  libavformat/cafdec.c | 44 
>  2 files changed, 45 insertions(+)
>
> diff --git a/libavformat/caf.c b/libavformat/caf.c
> index a700e4055b..a61c39fae5 100644
> --- a/libavformat/caf.c
> +++ b/libavformat/caf.c
> @@ -46,6 +46,7 @@ const AVCodecTag ff_codec_caf_tags[] = {
>  { AV_CODEC_ID_GSM, MKTAG('a','g','s','m') },
>  { AV_CODEC_ID_GSM_MS,  MKTAG('m','s', 0, '1') },
>  { AV_CODEC_ID_ILBC,MKTAG('i','l','b','c') },
> +{ AV_CODEC_ID_FLAC,MKTAG('f','l','a','c') },
>  { AV_CODEC_ID_MACE3,   MKTAG('M','A','C','3') },
>  { AV_CODEC_ID_MACE6,   MKTAG('M','A','C','6') },
>  { AV_CODEC_ID_MP1, MKTAG('.','m','p','1') },
> diff --git a/libavformat/cafdec.c b/libavformat/cafdec.c
> index 168f69f20b..ced61643f8 100644
> --- a/libavformat/cafdec.c
> +++ b/libavformat/cafdec.c
> @@ -32,6 +32,7 @@
>  #include "internal.h"
>  #include "isom.h"
>  #include "mov_chan.h"
> +#include "libavcodec/flac.h"
>  #include "libavutil/intreadwrite.h"
>  #include "libavutil/intfloat.h"
>  #include "libavutil/dict.h"
> @@ -170,6 +171,49 @@ static int read_kuki_chunk(AVFormatContext *s, int64_t size)
>  }
>  avio_skip(pb, size - ALAC_NEW_KUKI);
>  }
> +} else if (st->codecpar->codec_id == AV_CODEC_ID_FLAC) {
> +int last, type, flac_metadata_size;
> +uint8_t buf[4];
> +/* The magic cookie format for FLAC consists mostly of an mp4 dfLa atom. */
> +if (size < (16 + FLAC_STREAMINFO_SIZE)) {
> +av_log(s, AV_LOG_ERROR, "invalid FLAC magic cookie\n");
> +return AVERROR_INVALIDDATA;
> +}
> +/* Check cookie version. */
> +if (avio_r8(pb) != 0) {
> +av_log(s, AV_LOG_ERROR, "unknown FLAC magic cookie\n");
> +return AVERROR_INVALIDDATA;
> +}
> +avio_rb24(pb); /* Flags */
> +/* read dfLa fourcc */
> +if (avio_read(pb, buf, 4) != 4) {
> +av_log(s, AV_LOG_ERROR, "failed to read FLAC magic cookie\n");
> +return (pb->error < 0 ? pb->error : AVERROR_INVALIDDATA);
> +}
> +if (memcmp(buf,"dfLa",4)) {
> +av_log(s, AV_LOG_ERROR, "invalid FLAC magic cookie\n");
> +return AVERROR_INVALIDDATA;
> +}
> +/* Check dfLa version. */
> +if (avio_r8(pb) != 0) {
> +av_log(s, AV_LOG_ERROR, "unknown dfLa version\n");
> +return AVERROR_INVALIDDATA;
> +}
> +avio_rb24(pb); /* Flags */
> +if (avio_read(pb, buf, sizeof(buf)) != sizeof(buf)) {
> +av_log(s, AV_LOG_ERROR, "failed to read FLAC metadata block header\n");
> +return (pb->error < 0 ? pb->error : AVERROR_INVALIDDATA);
> +}
> +flac_parse_block_header(buf, &last, &type, &flac_metadata_size);
> +if (type != FLAC_METADATA_TYPE_STREAMINFO || flac_metadata_size != FLAC_STREAMINFO_SIZE) {
> +av_log(s, AV_LOG_ERROR, "STREAMINFO must be first FLACMetadataBlock\n");
> +return AVERROR_INVALIDDATA;
> +}
> +ret = ff_get_extradata(s, st->codecpar, pb, FLAC_STREAMINFO_SIZE);
> +if (ret < 0)
> +return ret;
> +if (!last)
> +av_log(s, AV_LOG_WARNING, "non-STREAMINFO FLACMetadataBlock(s) ignored\n");
>  } else if (st->codecpar->codec_id == AV_CODEC_ID_OPUS) {
> +// The data layout for Opus is currently unknown, so we do not export
>  // extradata at all. Multichannel streams are not supported.
> --
> 2.30.2
>
>
I would like to bring this patch to the attention of the mailing list
again. It still applies against current git.


Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread Gyan Doshi



On 2022-07-05 01:20 pm, Zhao Zhili wrote:

From: Zhao Zhili 

Streams with all zero sample_delta in 'stts' have all zero dts.
They have a higher chance of being chosen by mov_find_next_sample(),
which leads to seeking again and again.

For example, GoPro created a 'GoPro SOS' stream:
   Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
 Metadata:
   creation_time   : 2022-06-21T08:49:19.00Z
   handler_name: GoPro SOS

With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
blocks until all samples in 'GoPro SOS' stream are consumed first.

Signed-off-by: Zhao Zhili 
---
  libavformat/mov.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index 88669faa70..2a4eb79f27 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
  st->nb_frames= total_sample_count;
  if (duration)
  st->duration= FFMIN(st->duration, duration);
+
+// All samples have zero duration. They have higher chance be chose by
+// mov_find_next_sample, which leads to seek again and again.
+//
+// It's AVERROR_INVALIDDATA actually, but such files exist in the wild.
+// So only mark data stream as discarded for safety.
+if (!duration && sc->stts_count &&
+st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
+av_log(c->fc, AV_LOG_WARNING,
+   "All samples in data stream index:id [%d:%d] have zero duration, 
"
+   "discard the stream\n",
+   st->index, st->id);
+st->discard = AVDISCARD_ALL;
+}
  sc->track_end = duration;
  return 0;
  }


So this will allow audio and video streams to be demuxed, but not data?  
That distinction seems arbitrary.


Print a warning and assign a duration to each sample: either 1 or, if not
zero/Inf, st->duration/st->nb_frames.


Regards,
Gyan


Re: [FFmpeg-devel] [PATCH] lavu: always provide symbols from hwcontext_vulkan.h

2022-07-05 Thread J. Dekker
On 5 Jul 2022, at 2:11, Niklas Haas wrote:

> From: Niklas Haas 
>
> This header is unconditionally installed, even though the utility
> functions defined by it may be missing from the built library.
>
> A precedent set by e.g. libavcodec/qsv.h (and others) is to always
> provide these functions by compiling stub functions in the absence of
> CONFIG_*. Make hwcontext_vulkan.h match this convention.
>
> Fixes downstream issues, e.g.
> https://github.com/haasn/libplacebo/issues/120
>
> Signed-off-by: Niklas Haas 
> ---
>  libavutil/Makefile   |  2 +-
>  libavutil/hwcontext_vulkan.c | 26 --
>  2 files changed, 25 insertions(+), 3 deletions(-)
>
> [...]

Public API symbols (av_*) shouldn't completely disappear based on configure 
options.

LGTM.

-- 
J. Dekker


[FFmpeg-devel] filter queue question

2022-07-05 Thread Alex
Hi!
I am developing a custom GPU filter that requires a lot of time to process
frames, and as a result the overall fps is low (around 20 fps):

ffmpeg -i 720p.mp4  -filter_complex "format=rgb24,myfilter" -f null -

But when I added the actual encoding part to the ffmpeg command, the
resulting fps dropped to 16 fps (-4 fps, around 20%!):

ffmpeg -i 720p.mp4  -filter_complex "format=rgb24,myfilter" -c:v h264 -y out.mp4

If I look at the timeline of the overall process in each cycle:

|---decoding time---| ---> |---filtering time---| ---> |---encoding time---|

So, basically, can I process frames in my custom filter without waiting for
encoding to finish? In other words, I want to process frames in my custom
filter in parallel with (in a queue to) the encoding process.



Re: [FFmpeg-devel] filter queue question

2022-07-05 Thread Felix LeClair



In concept yes, but you may be better off improving the speed of the underlying 
filter itself.

Part of the cost of encoding is beyond the encoder itself, you have to account 
for file-system overhead, disk I/O speed etc.

Depending on your implementation, you may be running into issues with memory 
copies from system to GPU memory and back, which is quite expensive.

Try testing a "hardware decoder --> your filter --> hardware encoder" chain
to keep everything in GPU memory.

Beyond that, standard GPU acceleration rules apply. Make sure your wave 
fronts/work groups are aligned, check for system utilization, use 
non-blocking/async calls when possible, etc.

-Felix (FCLC)




Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread zhilizhao(赵志立)


> On Jul 5, 2022, at 8:07 PM, Gyan Doshi  wrote:
> 
> 
> 
> On 2022-07-05 01:20 pm, Zhao Zhili wrote:
> 
> So this will allow audio and video streams to be demuxed, but not data?  That 
> distinction seems arbitrary.

Disabling audio/video streams may create a regression. That's unlikely for a
random, broken data stream.

> 
> Print a warning and assign a duration to each sample. Either 1 or if not 
> zero/Inf, st->duration/st->nb_frames.

Setting sample_duration to 1 doesn't work; dts still falls far behind the
other streams.

Setting sample_duration to st->duration/st->nb_frames works for me, but I
prefer the current strategy for the following reasons:

1. AVDISCARD_ALL is closer to AVERROR_INVALIDDATA: it gives up instead of
attempting a correction and hoping it works, which it may not, e.g. if
st->duration is broken, or the interleaving is bad even though we fixed
sample_duration.

2. libavformat users can still enable the stream and get the original
dts/duration, if they want to.

> 
> Regards,
> Gyan



[FFmpeg-devel] [PATCH v2] avdevice/lavfi: output wrapped AVFrames

2022-07-05 Thread Timo Rothenpieler
This avoids an extra copy of potentially quite big video frames.
Instead of copying the entire frame's data into a rawvideo packet, it
packs the frame into a wrapped avframe packet and passes it through
as-is.
Wrapped avframes are set up to be video frames, so audio frames
unfortunately continue to be copied.

Additionally, this enables passing through video frames that previously
were impossible to process, like hardware frames or other special
formats that couldn't be packed into a rawvideo packet.
---

Change in v2: Remove internal ret variable, shadowing outer one,
causing busy loop due to 0 return on error.

 libavdevice/lavfi.c   | 88 +--
 tests/ref/fate/filter-metadata-cropdetect |  3 +-
 2 files changed, 36 insertions(+), 55 deletions(-)

diff --git a/libavdevice/lavfi.c b/libavdevice/lavfi.c
index db5d0b94de..4689136ab0 100644
--- a/libavdevice/lavfi.c
+++ b/libavdevice/lavfi.c
@@ -54,32 +54,10 @@ typedef struct {
 int *sink_eof;
 int *stream_sink_map;
 int *sink_stream_subcc_map;
-AVFrame *decoded_frame;
 int nb_sinks;
 AVPacket subcc_packet;
 } LavfiContext;
 
-static int *create_all_formats(int n)
-{
-int i, j, *fmts, count = 0;
-
-for (i = 0; i < n; i++) {
-const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(i);
-if (!(desc->flags & AV_PIX_FMT_FLAG_HWACCEL))
-count++;
-}
-
-if (!(fmts = av_malloc_array(count + 1, sizeof(*fmts))))
-return NULL;
-for (j = 0, i = 0; i < n; i++) {
-const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(i);
-if (!(desc->flags & AV_PIX_FMT_FLAG_HWACCEL))
-fmts[j++] = i;
-}
-fmts[j] = AV_PIX_FMT_NONE;
-return fmts;
-}
-
 av_cold static int lavfi_read_close(AVFormatContext *avctx)
 {
 LavfiContext *lavfi = avctx->priv_data;
@@ -90,7 +68,6 @@ av_cold static int lavfi_read_close(AVFormatContext *avctx)
 av_freep(&lavfi->sink_stream_subcc_map);
 av_freep(&lavfi->sinks);
 avfilter_graph_free(&lavfi->graph);
-av_frame_free(&lavfi->decoded_frame);
 
 return 0;
 }
@@ -125,15 +102,11 @@ av_cold static int lavfi_read_header(AVFormatContext *avctx)
 LavfiContext *lavfi = avctx->priv_data;
 AVFilterInOut *input_links = NULL, *output_links = NULL, *inout;
 const AVFilter *buffersink, *abuffersink;
-int *pix_fmts = create_all_formats(AV_PIX_FMT_NB);
 enum AVMediaType type;
 int ret = 0, i, n;
 
 #define FAIL(ERR) { ret = ERR; goto end; }
 
-if (!pix_fmts)
-FAIL(AVERROR(ENOMEM));
-
 buffersink = avfilter_get_by_name("buffersink");
 abuffersink = avfilter_get_by_name("abuffersink");
 
@@ -264,8 +237,6 @@ av_cold static int lavfi_read_header(AVFormatContext *avctx)
 ret = avfilter_graph_create_filter(&sink, buffersink,
inout->name, NULL,
NULL, lavfi->graph);
-if (ret >= 0)
-ret = av_opt_set_int_list(sink, "pix_fmts", pix_fmts, AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
 if (ret < 0)
 goto end;
 } else if (type == AVMEDIA_TYPE_AUDIO) {
@@ -321,15 +292,11 @@ av_cold static int lavfi_read_header(AVFormatContext *avctx)
 avpriv_set_pts_info(st, 64, time_base.num, time_base.den);
 par->codec_type = av_buffersink_get_type(sink);
 if (par->codec_type == AVMEDIA_TYPE_VIDEO) {
-int64_t probesize;
-par->codec_id   = AV_CODEC_ID_RAWVIDEO;
+par->codec_id   = AV_CODEC_ID_WRAPPED_AVFRAME;
 par->format = av_buffersink_get_format(sink);
 par->width  = av_buffersink_get_w(sink);
 par->height = av_buffersink_get_h(sink);
-probesize   = par->width * par->height * 30 *
-  av_get_padded_bits_per_pixel(av_pix_fmt_desc_get(par->format));
-avctx->probesize = FFMAX(avctx->probesize, probesize);
-st   ->sample_aspect_ratio =
+st ->sample_aspect_ratio =
 par->sample_aspect_ratio = av_buffersink_get_sample_aspect_ratio(sink);
 } else if (par->codec_type == AVMEDIA_TYPE_AUDIO) {
 par->sample_rate = av_buffersink_get_sample_rate(sink);
@@ -348,11 +315,7 @@ av_cold static int lavfi_read_header(AVFormatContext *avctx)
 if ((ret = create_subcc_streams(avctx)) < 0)
 goto end;
 
-if (!(lavfi->decoded_frame = av_frame_alloc()))
-FAIL(AVERROR(ENOMEM));
-
 end:
-av_free(pix_fmts);
 avfilter_inout_free(&input_links);
 avfilter_inout_free(&output_links);
 return ret;
@@ -378,15 +341,20 @@ static int create_subcc_packet(AVFormatContext *avctx, AVFrame *frame,
 return 0;
 }
 
+static void lavfi_free_frame(void *opaque, uint8_t *data)
+{
+AVFrame *frame = (AVFrame*)data;
+av_frame_free(&frame);
+}
+
 static int lavfi_read_packet(AVFormatCon

Re: [FFmpeg-devel] filter queue question

2022-07-05 Thread Alex
Thanks, I will check it out!

For now my filter uses the standard filter_frame() callback. But how do I
request the next frame from the decoder in my filter?




[FFmpeg-devel] [PATCH v3 2/2] lavc/tests: add a cached bitstream reader test

2022-07-05 Thread Anton Khirnov
---
 libavcodec/Makefile   |   2 +
 libavcodec/tests/bitstream_be.c   |  19 +++
 libavcodec/tests/bitstream_le.c   |  20 +++
 libavcodec/tests/bitstream_template.c | 180 ++
 tests/fate/libavcodec.mak |  10 ++
 5 files changed, 231 insertions(+)
 create mode 100644 libavcodec/tests/bitstream_be.c
 create mode 100644 libavcodec/tests/bitstream_le.c
 create mode 100644 libavcodec/tests/bitstream_template.c

diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 457ec58377..8802001ef1 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -1254,6 +1254,8 @@ SKIPHEADERS-$(CONFIG_V4L2_M2M) += v4l2_buffers.h v4l2_context.h v4l2_m2m
 
 TESTPROGS = avcodec \
 avpacket\
+bitstream_be\
+bitstream_le\
 celp_math   \
 codec_desc  \
 htmlsubtitles   \
diff --git a/libavcodec/tests/bitstream_be.c b/libavcodec/tests/bitstream_be.c
new file mode 100644
index 00..bc562ed3b1
--- /dev/null
+++ b/libavcodec/tests/bitstream_be.c
@@ -0,0 +1,19 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "bitstream_template.c"
diff --git a/libavcodec/tests/bitstream_le.c b/libavcodec/tests/bitstream_le.c
new file mode 100644
index 00..a907676438
--- /dev/null
+++ b/libavcodec/tests/bitstream_le.c
@@ -0,0 +1,20 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#define BITSTREAM_READER_LE
+#include "bitstream_template.c"
diff --git a/libavcodec/tests/bitstream_template.c b/libavcodec/tests/bitstream_template.c
new file mode 100644
index 00..5b5864c5db
--- /dev/null
+++ b/libavcodec/tests/bitstream_template.c
@@ -0,0 +1,180 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#define ASSERT_LEVEL 2
+
+#include "libavutil/avassert.h"
+#include "libavutil/lfg.h"
+#include "libavutil/random_seed.h"
+
+#include "libavcodec/bitstream.h"
+#include "libavcodec/defs.h"
+
+#ifdef BITSTREAM_READER_LE
+#define BITSTREAM_WRITER_LE
+#endif
+#include "libavcodec/put_bits.h"
+
+#define SIZE 157
+
+enum Op {
+OP_READ,
+OP_READ_NZ,
+OP_READ_BIT,
+OP_READ_63,
+OP_READ_64,
+OP_READ_SIGNED,
+OP_APPLY_SIGN,
+OP_ALIGN,
+OP_NB,
+};
+
+int main(int argc, char **argv)
+{
+BitstreamContext bc;
PutBitContext    pb;
AVLFG            lfg;
+
+uint8_t buf[SIZE + AV_INPUT_BUFFER_PADDING_SIZE];
+uint8_t dst[SIZE + AV_INPUT_BUFFER_PADDING_S

[FFmpeg-devel] [PATCH v3 1/2] lavc: add standalone cached bitstream reader

2022-07-05 Thread Anton Khirnov
From: Alexandra Hájková 

The cached bitstream reader was originally written by Alexandra Hájková
for Libav, with significant input from Kostya Shishkov and Luca Barbato.
It was then committed to FFmpeg in ca079b09549, by merging it with the
implementation of the current bitstream reader.

This merge makes the code of get_bits.h significantly harder to read,
since it now contains two different bitstream readers interleaved with
 #ifdefs. Additionally, the code was committed without proper authorship
attribution.

This commit re-adds the cached bitstream reader as a standalone header,
as it was originally developed. It will be made useful in following
commits.

Integration by Anton Khirnov.

Signed-off-by: Anton Khirnov 
---
Applied most comments from Andreas, except those I commented on in my
reply to the previous thread.

bits_unget() dropped for now, as its usefulness is under question.
---
 libavcodec/bitstream.h | 529 +
 1 file changed, 529 insertions(+)
 create mode 100644 libavcodec/bitstream.h

diff --git a/libavcodec/bitstream.h b/libavcodec/bitstream.h
new file mode 100644
index 00..6fd321dba5
--- /dev/null
+++ b/libavcodec/bitstream.h
@@ -0,0 +1,529 @@
+/*
+ * Copyright (c) 2016 Alexandra Hájková
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * bitstream reader API header.
+ */
+
+#ifndef AVCODEC_BITSTREAM_H
+#define AVCODEC_BITSTREAM_H
+
+#include 
+
+#include "config.h"
+
+#include "libavutil/avassert.h"
+#include "libavutil/common.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/log.h"
+
+#include "mathops.h"
+#include "vlc.h"
+
+#ifndef UNCHECKED_BITSTREAM_READER
+#define UNCHECKED_BITSTREAM_READER !CONFIG_SAFE_BITSTREAM_READER
+#endif
+
+typedef struct BitstreamContext {
+uint64_t bits;   // stores bits read from the buffer
+const uint8_t *buffer, *buffer_end;
+const uint8_t *ptr;  // pointer to the position inside a buffer
+unsigned bits_valid; // number of bits left in bits field
+unsigned size_in_bits;
+} BitstreamContext;
+
+/**
+ * @return
+ * - 0 on successful refill
+ * - a negative number when bitstream end is hit
+ *
+ * Always succeeds when UNCHECKED_BITSTREAM_READER is enabled.
+ */
+static inline int bits_priv_refill_64(BitstreamContext *bc)
+{
+#if !UNCHECKED_BITSTREAM_READER
+if (bc->ptr >= bc->buffer_end)
+return -1;
+#endif
+
+#ifdef BITSTREAM_READER_LE
+bc->bits   = AV_RL64(bc->ptr);
+#else
+bc->bits   = AV_RB64(bc->ptr);
+#endif
+bc->ptr   += 8;
+bc->bits_valid = 64;
+
+return 0;
+}
+
+/**
+ * @return
+ * - 0 on successful refill
+ * - a negative number when bitstream end is hit
+ *
+ * Always succeeds when UNCHECKED_BITSTREAM_READER is enabled.
+ */
+static inline int bits_priv_refill_32(BitstreamContext *bc)
+{
+#if !UNCHECKED_BITSTREAM_READER
+if (bc->ptr >= bc->buffer_end)
+return -1;
+#endif
+
+#ifdef BITSTREAM_READER_LE
+bc->bits  |= (uint64_t)AV_RL32(bc->ptr) << bc->bits_valid;
+#else
+bc->bits  |= (uint64_t)AV_RB32(bc->ptr) << (32 - bc->bits_valid);
+#endif
+bc->ptr+= 4;
+bc->bits_valid += 32;
+
+return 0;
+}
+
+/**
+ * Initialize BitstreamContext.
+ * @param buffer bitstream buffer, must be AV_INPUT_BUFFER_PADDING_SIZE bytes
+ *larger than the actual read bits because some optimized bitstream
+ *readers read 32 or 64 bits at once and could read over the end
+ * @param bit_size the size of the buffer in bits
+ * @return 0 on success, AVERROR_INVALIDDATA if bit_size is too large or buffer is NULL.
+ */
+static inline int bits_init(BitstreamContext *bc, const uint8_t *buffer,
+unsigned int bit_size)
+{
+unsigned int buffer_size;
+
+if (bit_size > INT_MAX - 7 || !buffer) {
+bc->buffer = NULL;
+bc->ptr= NULL;
+bc->bits_valid = 0;
+return AVERROR_INVALIDDATA;
+}
+
+buffer_size = (bit_size + 7) >> 3;
+
+bc->buffer   = buffer;
+bc->buffer_end   = buffer + buffer_size;
+bc->ptr  = bc->buffer;
+bc->size_in_bits = bit_size;
+bc->bits_valid   = 0;
+bc->bits = 0;
+
+bits_priv_refill_64(bc);
+
+return 0;
+}
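[Editorial note: a standalone sketch of the cached-reader technique this header implements. The idea is to buffer up to 64 bits in a register-width cache and serve bit reads from its top, refilling in 64-bit chunks. The struct and function names below are illustrative stand-ins, not FFmpeg's bits_* API; refill merging of leftover bits (as bits_priv_refill_32() does) is omitted for brevity.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal cached bitstream reader, big-endian bit order.
 * 'cache' holds not-yet-consumed bits, left-aligned. */
typedef struct MiniBits {
    const uint8_t *ptr, *end;
    uint64_t cache;
    unsigned valid;   /* number of valid bits in cache */
} MiniBits;

static uint64_t rb64(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | p[i];
    return v;
}

static void mini_refill(MiniBits *bc)
{
    /* Simplification: only refills when a full 64-bit chunk is available
     * and the cache is empty; a real reader merges partial leftovers. */
    if (bc->ptr + 8 <= bc->end) {
        bc->cache = rb64(bc->ptr);
        bc->ptr  += 8;
        bc->valid = 64;
    }
}

static void mini_init(MiniBits *bc, const uint8_t *buf, size_t size)
{
    bc->ptr   = buf;
    bc->end   = buf + size;
    bc->cache = 0;
    bc->valid = 0;
    mini_refill(bc);
}

/* Read n bits, 1 <= n <= 32: take them from the top of the cache. */
static unsigned mini_read(MiniBits *bc, unsigned n)
{
    if (bc->valid < n)
        mini_refill(bc);
    unsigned ret = (unsigned)(bc->cache >> (64 - n));
    bc->cache <<= n;
    bc->valid  -= n;
    return ret;
}
```

The point of the design is that most reads touch only the cache: memory is accessed once per 64 bits instead of once per read.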
+
+/**
+ * Initia

Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread Gyan Doshi



On 2022-07-05 07:05 pm, "zhilizhao(赵志立)" wrote:



On Jul 5, 2022, at 8:07 PM, Gyan Doshi  wrote:



On 2022-07-05 01:20 pm, Zhao Zhili wrote:

From: Zhao Zhili 

Streams with all zero sample_delta in 'stts' have all zero dts.
They are more likely to be chosen by mov_find_next_sample(), which
leads to seeking again and again.

For example, GoPro created a 'GoPro SOS' stream:
   Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
 Metadata:
   creation_time   : 2022-06-21T08:49:19.00Z
   handler_name: GoPro SOS

With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
blocks until all samples in 'GoPro SOS' stream are consumed first.

Signed-off-by: Zhao Zhili 
---
  libavformat/mov.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index 88669faa70..2a4eb79f27 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, 
MOVAtom atom)
  st->nb_frames= total_sample_count;
  if (duration)
  st->duration= FFMIN(st->duration, duration);
+
+// All samples have zero duration. They are more likely to be chosen by
+// mov_find_next_sample(), which leads to seeking again and again.
+//
+// It's AVERROR_INVALIDDATA actually, but such files exist in the wild.
+// So only mark data streams as discarded, for safety.
+if (!duration && sc->stts_count &&
+st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
+av_log(c->fc, AV_LOG_WARNING,
+   "All samples in data stream index:id [%d:%d] have zero duration, "
+   "discarding the stream\n",
+   st->index, st->id);
+st->discard = AVDISCARD_ALL;
+}
  sc->track_end = duration;
  return 0;
  }

So this will allow audio and video streams to be demuxed, but not data?  That 
distinction seems arbitrary.

Disabling audio/video streams may create a regression. That's unlikely for a
random, broken data stream.


Print a warning and assign a duration to each sample: either 1 or, if not
zero/Inf, st->duration/st->nb_frames.

Setting sample_duration to 1 doesn't work. The dts still falls far behind the other streams.

Setting sample_duration to st->duration/st->nb_frames works for me, but I
prefer the current strategy for the following reasons:

1. AVDISCARD_ALL is closer to AVERROR_INVALIDDATA: it gives up instead of
attempting a correction and hoping it works, which it may not, e.g., if
st->duration is broken or the interleaving is bad even after sample_duration is fixed.
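[Editorial note: the starvation effect under discussion can be shown with a toy model of mov_find_next_sample()'s selection rule — the stream whose next sample has the smallest dts is read next. The struct and function below are hypothetical simplifications, not mov.c code.]

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: each stream advances its dts by a fixed per-sample delta,
 * as derived from 'stts'.  A stream whose deltas are all zero keeps
 * dts == 0 forever, so once any other stream advances, the zero-delta
 * stream wins every smallest-dts comparison until it is exhausted. */
typedef struct ToyStream {
    int64_t next_dts;
    int64_t delta;      /* per-sample dts increment from stts */
    int     remaining;  /* samples left */
} ToyStream;

static int pick_next(ToyStream *s, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (s[i].remaining > 0 &&
            (best < 0 || s[i].next_dts < s[best].next_dts))
            best = i;
    if (best >= 0) {
        s[best].next_dts += s[best].delta;
        s[best].remaining--;
    }
    return best;
}
```

With a video stream (delta 1) and a zero-delta data stream, the model drains every data sample in a burst as soon as the video dts moves past 0 — matching the reported behavior where ffprobe consumes the whole 'GoPro SOS' stream before anything else.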


It's not about hoping that it works.  It's about not preventing the user 
from acquiring the stream payload.


Can you test if setting -discard:d none -i INPUT allows reading the 
stream with your patch?


Regards,
Gyan

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH] Chromakey CUDA

2022-07-05 Thread mohamed Elhadidy
From: Mohamed Khaled Mohamed 
<56936494+mohamedelhadidy0...@users.noreply.github.com>

V2 Chromakey CUDA

Added Chromakey CUDA filter
Some edits to comply with FFmpeg coding conventions

GSoC'22
Added CUDA chromakey filter
libavfilter/vf_chromakey_cuda.cu:the CUDA kernel for the filter
libavfilter/vf_chromakey_cuda.c: the C side that calls the kernel and gets user 
input
libavfilter/allfilters.c: added the filter to it
libavfilter/Makefile: added the filter to it
cuda/cuda_runtime.h: added two math CUDA functions that are used in the filter

---
 compat/cuda/cuda_runtime.h   |   2 +
 libavfilter/Makefile |   2 +
 libavfilter/allfilters.c |   1 +
 libavfilter/vf_chromakey_cuda.c  | 451 +++
 libavfilter/vf_chromakey_cuda.cu | 167 
 5 files changed, 623 insertions(+)
 create mode 100644 libavfilter/vf_chromakey_cuda.c
 create mode 100644 libavfilter/vf_chromakey_cuda.cu

diff --git a/compat/cuda/cuda_runtime.h b/compat/cuda/cuda_runtime.h
index 30cd085e48..5837c1ad37 100644
--- a/compat/cuda/cuda_runtime.h
+++ b/compat/cuda/cuda_runtime.h
@@ -181,7 +181,9 @@ static inline __device__ double trunc(double a) { return 
__builtin_trunc(a); }
 static inline __device__ float fabsf(float a) { return __builtin_fabsf(a); }
 static inline __device__ float fabs(float a) { return __builtin_fabsf(a); }
 static inline __device__ double fabs(double a) { return __builtin_fabs(a); }
+static inline __device__ float sqrtf(float a) { return __builtin_sqrtf(a); }
 
+static inline __device__ float __saturatef(float a) { return 
__nvvm_saturate_f(a); }
 static inline __device__ float __sinf(float a) { return 
__nvvm_sin_approx_f(a); }
 static inline __device__ float __cosf(float a) { return 
__nvvm_cos_approx_f(a); }
 static inline __device__ float __expf(float a) { return __nvvm_ex2_approx_f(a 
* (float)__builtin_log2(__builtin_exp(1))); }
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 22b0a0ca15..f02571c710 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -210,6 +210,8 @@ OBJS-$(CONFIG_CAS_FILTER)+= vf_cas.o
 OBJS-$(CONFIG_CHROMABER_VULKAN_FILTER)   += vf_chromaber_vulkan.o vulkan.o 
vulkan_filter.o
 OBJS-$(CONFIG_CHROMAHOLD_FILTER) += vf_chromakey.o
 OBJS-$(CONFIG_CHROMAKEY_FILTER)  += vf_chromakey.o
+OBJS-$(CONFIG_CHROMAKEY_CUDA_FILTER) += vf_chromakey_cuda.o  
vf_chromakey_cuda.ptx.o
+
 OBJS-$(CONFIG_CHROMANR_FILTER)   += vf_chromanr.o
 OBJS-$(CONFIG_CHROMASHIFT_FILTER)+= vf_chromashift.o
 OBJS-$(CONFIG_CIESCOPE_FILTER)   += vf_ciescope.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index ec70feef11..da1a96b23c 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -195,6 +195,7 @@ extern const AVFilter ff_vf_cas;
 extern const AVFilter ff_vf_chromaber_vulkan;
 extern const AVFilter ff_vf_chromahold;
 extern const AVFilter ff_vf_chromakey;
+extern const AVFilter ff_vf_chromakey_cuda;
 extern const AVFilter ff_vf_chromanr;
 extern const AVFilter ff_vf_chromashift;
 extern const AVFilter ff_vf_ciescope;
diff --git a/libavfilter/vf_chromakey_cuda.c b/libavfilter/vf_chromakey_cuda.c
new file mode 100644
index 00..15f02b262d
--- /dev/null
+++ b/libavfilter/vf_chromakey_cuda.c
@@ -0,0 +1,451 @@
+/*
+ * Copyright (c) 2022 Mohamed Khaled 
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include 
+#include 
+#include 
+
+#include "libavutil/avstring.h"
+#include "libavutil/common.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_cuda_internal.h"
+#include "libavutil/cuda_check.h"
+#include "libavutil/internal.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+#include "video.h"
+#include "cuda/load_helper.h"
+
+#define FIXNUM(x) lrint((x) * (1 << 10))
+#define RGB_TO_U(rgb) (((-FIXNUM(0.16874) * rgb[0] - FIXNUM(0.33126) * rgb[1] 
+ FIXNUM(0.5) * rgb[2] + (1 << 9) - 1) >> 10) + 128)
+#define RGB_TO_V(rgb) (((FIXNUM(0.5) * rgb[0] - FIXNUM(0.41869) * rgb[1] - 
FIXNUM(0.08131) * rgb[2] + (1 << 9) - 1) >> 10) + 128)
+
+static const enum AVPixelFormat supported_formats[] = {
+AV_PIX_FM

[FFmpeg-devel] [PATCH v2] Chromakey CUDA

2022-07-05 Thread mohamed Elhadidy
From: Mohamed Khaled Mohamed 
<56936494+mohamedelhadidy0...@users.noreply.github.com>

V2 Chromakey CUDA

Added Chromakey CUDA filter
v2: some changes for coding conventions

GSoC'22
Added CUDA chromakey filter
libavfilter/vf_chromakey_cuda.cu:the CUDA kernel for the filter
libavfilter/vf_chromakey_cuda.c: the C side that calls the kernel and gets user 
input
libavfilter/allfilters.c: added the filter to it
libavfilter/Makefile: added the filter to it
cuda/cuda_runtime.h: added two math CUDA functions that are used in the filter
---
 compat/cuda/cuda_runtime.h   |   2 +
 libavfilter/Makefile |   2 +
 libavfilter/allfilters.c |   1 +
 libavfilter/vf_chromakey_cuda.c  | 450 +++
 libavfilter/vf_chromakey_cuda.cu | 164 +++
 5 files changed, 619 insertions(+)
 create mode 100644 libavfilter/vf_chromakey_cuda.c
 create mode 100644 libavfilter/vf_chromakey_cuda.cu

diff --git a/compat/cuda/cuda_runtime.h b/compat/cuda/cuda_runtime.h
index 30cd085e48..5837c1ad37 100644
--- a/compat/cuda/cuda_runtime.h
+++ b/compat/cuda/cuda_runtime.h
@@ -181,7 +181,9 @@ static inline __device__ double trunc(double a) { return 
__builtin_trunc(a); }
 static inline __device__ float fabsf(float a) { return __builtin_fabsf(a); }
 static inline __device__ float fabs(float a) { return __builtin_fabsf(a); }
 static inline __device__ double fabs(double a) { return __builtin_fabs(a); }
+static inline __device__ float sqrtf(float a) { return __builtin_sqrtf(a); }
 
+static inline __device__ float __saturatef(float a) { return 
__nvvm_saturate_f(a); }
 static inline __device__ float __sinf(float a) { return 
__nvvm_sin_approx_f(a); }
 static inline __device__ float __cosf(float a) { return 
__nvvm_cos_approx_f(a); }
 static inline __device__ float __expf(float a) { return __nvvm_ex2_approx_f(a 
* (float)__builtin_log2(__builtin_exp(1))); }
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 22b0a0ca15..f02571c710 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -210,6 +210,8 @@ OBJS-$(CONFIG_CAS_FILTER)+= vf_cas.o
 OBJS-$(CONFIG_CHROMABER_VULKAN_FILTER)   += vf_chromaber_vulkan.o vulkan.o 
vulkan_filter.o
 OBJS-$(CONFIG_CHROMAHOLD_FILTER) += vf_chromakey.o
 OBJS-$(CONFIG_CHROMAKEY_FILTER)  += vf_chromakey.o
+OBJS-$(CONFIG_CHROMAKEY_CUDA_FILTER) += vf_chromakey_cuda.o  
vf_chromakey_cuda.ptx.o
+
 OBJS-$(CONFIG_CHROMANR_FILTER)   += vf_chromanr.o
 OBJS-$(CONFIG_CHROMASHIFT_FILTER)+= vf_chromashift.o
 OBJS-$(CONFIG_CIESCOPE_FILTER)   += vf_ciescope.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index ec70feef11..da1a96b23c 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -195,6 +195,7 @@ extern const AVFilter ff_vf_cas;
 extern const AVFilter ff_vf_chromaber_vulkan;
 extern const AVFilter ff_vf_chromahold;
 extern const AVFilter ff_vf_chromakey;
+extern const AVFilter ff_vf_chromakey_cuda;
 extern const AVFilter ff_vf_chromanr;
 extern const AVFilter ff_vf_chromashift;
 extern const AVFilter ff_vf_ciescope;
diff --git a/libavfilter/vf_chromakey_cuda.c b/libavfilter/vf_chromakey_cuda.c
new file mode 100644
index 00..b3b037d43a
--- /dev/null
+++ b/libavfilter/vf_chromakey_cuda.c
@@ -0,0 +1,450 @@
+/*
+ * Copyright (c) 2022 Mohamed Khaled 
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include 
+#include 
+#include 
+
+#include "libavutil/avstring.h"
+#include "libavutil/common.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_cuda_internal.h"
+#include "libavutil/cuda_check.h"
+#include "libavutil/internal.h"
+#include "libavutil/opt.h"
+#include "libavutil/pixdesc.h"
+
+#include "avfilter.h"
+#include "formats.h"
+#include "internal.h"
+#include "video.h"
+#include "cuda/load_helper.h"
+
+#define FIXNUM(x) lrint((x) * (1 << 10))
+#define RGB_TO_U(rgb) (((-FIXNUM(0.16874) * rgb[0] - FIXNUM(0.33126) * rgb[1] 
+ FIXNUM(0.5) * rgb[2] + (1 << 9) - 1) >> 10) + 128)
+#define RGB_TO_V(rgb) (((FIXNUM(0.5) * rgb[0] - FIXNUM(0.41869) * rgb[1] - 
FIXNUM(0.08131) * rgb[2] + (1 << 9) - 1) >> 10) + 128)
+
+static const enum AVPixelFormat supported_formats[] = {
+AV_PIX_FMT_YUV420P,
+  

[FFmpeg-devel] [PATCH] fftools/ffmpeg: log correct filter timebase with debug_ts

2022-07-05 Thread Timo Rothenpieler
---
 fftools/ffmpeg.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index e7384f052a..6ec28f3019 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -766,6 +766,7 @@ static double adjust_frame_pts_to_encoder_tb(OutputFile 
*of, OutputStream *ost,
 {
 double float_pts = AV_NOPTS_VALUE; // this is identical to frame.pts but 
with higher precision
 AVCodecContext *enc = ost->enc_ctx;
+AVRational filter_tb = (AVRational){ -1, -1 };
 if (!frame || frame->pts == AV_NOPTS_VALUE ||
 !enc || !ost->filter || !ost->filter->graph->graph)
 goto early_exit;
@@ -774,9 +775,9 @@ static double adjust_frame_pts_to_encoder_tb(OutputFile 
*of, OutputStream *ost,
 AVFilterContext *filter = ost->filter->filter;
 
 int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : 
of->start_time;
-AVRational filter_tb = av_buffersink_get_time_base(filter);
 AVRational tb = enc->time_base;
 int extra_bits = av_clip(29 - av_log2(tb.den), 0, 16);
+filter_tb = av_buffersink_get_time_base(filter);
 
 tb.den <<= extra_bits;
 float_pts =
@@ -798,8 +799,8 @@ early_exit:
frame ? av_ts2str(frame->pts) : "NULL",
frame ? av_ts2timestr(frame->pts, &enc->time_base) : "NULL",
float_pts,
-   enc ? enc->time_base.num : -1,
-   enc ? enc->time_base.den : -1);
+   filter_tb.num,
+   filter_tb.den);
 }
 
 return float_pts;
-- 
2.34.1
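[Editorial note on the function being patched: adjust_frame_pts_to_encoder_tb() keeps sub-tick precision by widening the encoder timebase denominator by extra_bits before rescaling, then dividing the extra precision back out in floating point. The sketch below reproduces that trick standalone; rescale_q() is a simplified stand-in for av_rescale_q(), and the timebase values in the test are hypothetical.]

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

typedef struct Rational { int num, den; } Rational;

/* Simplified av_rescale_q(): a * (bq / cq), rounded to nearest.
 * No overflow protection -- illustration only. */
static int64_t rescale_q(int64_t a, Rational bq, Rational cq)
{
    int64_t num = (int64_t)bq.num * cq.den;
    int64_t den = (int64_t)cq.num * bq.den;
    return (a * num + den / 2) / den;
}

/* The float_pts trick: rescale into a timebase whose denominator is
 * shifted left by extra_bits, then shift the precision back out. */
static double to_encoder_tb(int64_t pts, Rational filter_tb, Rational enc_tb,
                            int extra_bits)
{
    Rational tb = enc_tb;
    tb.den <<= extra_bits;
    double float_pts = (double)rescale_q(pts, filter_tb, tb);
    return float_pts / (1 << extra_bits);
}
```

With extra_bits = 0 a frame at 1/30 s in a 1/90000 filter timebase rounds to a whole 1/25 encoder tick; with extra_bits = 16 the fractional position survives, which is exactly what the debug_ts log is meant to show.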



[FFmpeg-devel] [PATCH] avcodec: add Radiance HDR image format support

2022-07-05 Thread Paul B Mahol
Hello,

Patch attached.
From 62fce2bfd811eb4fb86b5907d62e67f0a2d033ff Mon Sep 17 00:00:00 2001
From: Paul B Mahol 
Date: Sun, 3 Jul 2022 23:50:05 +0200
Subject: [PATCH] avcodec: add Radiance HDR image format support

Signed-off-by: Paul B Mahol 
---
 doc/general_contents.texi |   2 +
 libavcodec/Makefile   |   2 +
 libavcodec/allcodecs.c|   2 +
 libavcodec/codec_desc.c   |   7 ++
 libavcodec/codec_id.h |   1 +
 libavcodec/hdrdec.c   | 212 ++
 libavcodec/hdrenc.c   | 188 +
 libavformat/Makefile  |   1 +
 libavformat/allformats.c  |   1 +
 libavformat/img2.c|   1 +
 libavformat/img2dec.c |   8 ++
 libavformat/img2enc.c |   2 +-
 12 files changed, 426 insertions(+), 1 deletion(-)
 create mode 100644 libavcodec/hdrdec.c
 create mode 100644 libavcodec/hdrenc.c

diff --git a/doc/general_contents.texi b/doc/general_contents.texi
index b1d3e3aa05..f25c784d3b 100644
--- a/doc/general_contents.texi
+++ b/doc/general_contents.texi
@@ -749,6 +749,8 @@ following image formats are supported:
 @tab OpenEXR
 @item FITS @tab X @tab X
 @tab Flexible Image Transport System
+@item HDR  @tab X @tab X
+@tab Radiance HDR RGBE Image format
 @item IMG  @tab   @tab X
 @tab GEM Raster image
 @item JPEG @tab X @tab X
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 457ec58377..df7e227a7f 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -403,6 +403,8 @@ OBJS-$(CONFIG_HAP_DECODER) += hapdec.o hap.o
 OBJS-$(CONFIG_HAP_ENCODER) += hapenc.o hap.o
 OBJS-$(CONFIG_HCA_DECODER) += hcadec.o
 OBJS-$(CONFIG_HCOM_DECODER)+= hcom.o
+OBJS-$(CONFIG_HDR_DECODER) += hdrdec.o
+OBJS-$(CONFIG_HDR_ENCODER) += hdrenc.o
 OBJS-$(CONFIG_HEVC_DECODER)+= hevcdec.o hevc_mvs.o \
   hevc_cabac.o hevc_refs.o hevcpred.o\
   hevcdsp.o hevc_filter.o hevc_data.o \
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index bdfc2f6f45..31d2c5979c 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -469,6 +469,8 @@ extern const FFCodec ff_gsm_decoder;
 extern const FFCodec ff_gsm_ms_decoder;
 extern const FFCodec ff_hca_decoder;
 extern const FFCodec ff_hcom_decoder;
+extern const FFCodec ff_hdr_encoder;
+extern const FFCodec ff_hdr_decoder;
 extern const FFCodec ff_iac_decoder;
 extern const FFCodec ff_ilbc_decoder;
 extern const FFCodec ff_imc_decoder;
diff --git a/libavcodec/codec_desc.c b/libavcodec/codec_desc.c
index 44ad2d1fe8..eeea15b1ef 100644
--- a/libavcodec/codec_desc.c
+++ b/libavcodec/codec_desc.c
@@ -1893,6 +1893,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
 .long_name = NULL_IF_CONFIG_SMALL("PHM (Portable HalfFloatMap) image"),
 .props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
 },
+{
+.id= AV_CODEC_ID_HDR,
+.type  = AVMEDIA_TYPE_VIDEO,
+.name  = "hdr",
+.long_name = NULL_IF_CONFIG_SMALL("HDR (Radiance RGBE format) image"),
+.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSY,
+},
 
 /* various PCM "codecs" */
 {
diff --git a/libavcodec/codec_id.h b/libavcodec/codec_id.h
index 81fb316cff..005efa6334 100644
--- a/libavcodec/codec_id.h
+++ b/libavcodec/codec_id.h
@@ -312,6 +312,7 @@ enum AVCodecID {
 AV_CODEC_ID_JPEGXL,
 AV_CODEC_ID_QOI,
 AV_CODEC_ID_PHM,
+AV_CODEC_ID_HDR,
 
 /* various PCM "codecs" */
 AV_CODEC_ID_FIRST_AUDIO = 0x1, ///< A dummy id pointing at the start of audio codecs
diff --git a/libavcodec/hdrdec.c b/libavcodec/hdrdec.c
new file mode 100644
index 00..2178a824bd
--- /dev/null
+++ b/libavcodec/hdrdec.c
@@ -0,0 +1,212 @@
+/*
+ * Radiance HDR image format
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include 
+
+#include "libavutil/imgutils.h"
+#include "avcodec.h"
+#include "internal.h"
+#include "bytestream.h"
+#include "codec_internal.h"
+#include "thread.h"
+
+#define MINELEN 8
+#define MAXELEN 0x7fff
+
+static int hdr_get_line(GetByteContext *gb, uint8_t *buffer, int size)
+{
+

[FFmpeg-devel] [PATCH v2][GSoC'22] Chromakey CUDA

2022-07-05 Thread mohamed Elhadidy
From: Mohamed Khaled Mohamed 
<56936494+mohamedelhadidy0...@users.noreply.github.com>

Chromakey CUDA

Added Chromakey CUDA filter
v2 some changes for coding conventions and add documentation

GSoC'22
Added CUDA chromakey filter
libavfilter/vf_chromakey_cuda.cu:the CUDA kernel for the filter
libavfilter/vf_chromakey_cuda.c: the C side that calls the kernel and gets user 
input
libavfilter/allfilters.c: added the filter to it
libavfilter/Makefile: added the filter to it
cuda/cuda_runtime.h: added two math CUDA functions that are used in the filter

Update filters.texi
---
 Changelog|   1 +
 compat/cuda/cuda_runtime.h   |   2 +
 doc/filters.texi |  27 ++
 libavfilter/Makefile |   2 +
 libavfilter/allfilters.c |   1 +
 libavfilter/vf_chromakey_cuda.c  | 450 +++
 libavfilter/vf_chromakey_cuda.cu | 164 +++
 7 files changed, 647 insertions(+)
 create mode 100644 libavfilter/vf_chromakey_cuda.c
 create mode 100644 libavfilter/vf_chromakey_cuda.cu

diff --git a/Changelog b/Changelog
index c39cc5087e..d5efe875d6 100644
--- a/Changelog
+++ b/Changelog
@@ -22,6 +22,7 @@ version 5.1:
 - ffprobe -o option
 - virtualbass audio filter
 - VDPAU AV1 hwaccel
+- chromakey_cuda filter
 
 
 version 5.0:
diff --git a/compat/cuda/cuda_runtime.h b/compat/cuda/cuda_runtime.h
index 30cd085e48..5837c1ad37 100644
--- a/compat/cuda/cuda_runtime.h
+++ b/compat/cuda/cuda_runtime.h
@@ -181,7 +181,9 @@ static inline __device__ double trunc(double a) { return 
__builtin_trunc(a); }
 static inline __device__ float fabsf(float a) { return __builtin_fabsf(a); }
 static inline __device__ float fabs(float a) { return __builtin_fabsf(a); }
 static inline __device__ double fabs(double a) { return __builtin_fabs(a); }
+static inline __device__ float sqrtf(float a) { return __builtin_sqrtf(a); }
 
+static inline __device__ float __saturatef(float a) { return 
__nvvm_saturate_f(a); }
 static inline __device__ float __sinf(float a) { return 
__nvvm_sin_approx_f(a); }
 static inline __device__ float __cosf(float a) { return 
__nvvm_cos_approx_f(a); }
 static inline __device__ float __expf(float a) { return __nvvm_ex2_approx_f(a 
* (float)__builtin_log2(__builtin_exp(1))); }
diff --git a/doc/filters.texi b/doc/filters.texi
index e525e87b3c..19290228ec 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -8651,6 +8651,33 @@ ffmpeg -f lavfi -i color=c=black:s=1280x720 -i video.mp4 
-shortest -filter_compl
 @end example
 @end itemize
 
+@section chromakey_cuda
+YUV colorspace color/chroma keying.
+This filter works like the normal chromakey filter but operates on CUDA frames.
+For more details and input parameters, see @ref{chromakey}.
+@subsection Commands
+This filter supports the same @ref{commands} as options.
+The commands accept the same syntax as the corresponding options.
+
+@subsection Examples
+
+@itemize
+@item
+Make all the green pixels in the input video transparent and use it as an 
overlay for another video.
+@example
+./ffmpeg -v verbose \
+-hwaccel cuda -hwaccel_output_format cuda -i input_green.mp4  \
+-hwaccel cuda -hwaccel_output_format cuda -i base_video.mp4 \
+-init_hw_device cuda \
+-filter_complex \
+" \
+[0:v]chromakey_cuda=0x25302D:0.1:0.12:1[overlay_video];
+[1:v]scale_cuda=format=yuv420p[base];
+[base][overlay_video]overlay_cuda" \
+-an -sn -c:v h264_nvenc -cq 20 output.mp4
+@end example
+@end itemize
+
 @section chromanr
 Reduce chrominance noise.
 
diff --git a/libavfilter/Makefile b/libavfilter/Makefile
index 22b0a0ca15..f02571c710 100644
--- a/libavfilter/Makefile
+++ b/libavfilter/Makefile
@@ -210,6 +210,8 @@ OBJS-$(CONFIG_CAS_FILTER)+= vf_cas.o
 OBJS-$(CONFIG_CHROMABER_VULKAN_FILTER)   += vf_chromaber_vulkan.o vulkan.o 
vulkan_filter.o
 OBJS-$(CONFIG_CHROMAHOLD_FILTER) += vf_chromakey.o
 OBJS-$(CONFIG_CHROMAKEY_FILTER)  += vf_chromakey.o
+OBJS-$(CONFIG_CHROMAKEY_CUDA_FILTER) += vf_chromakey_cuda.o  
vf_chromakey_cuda.ptx.o
+
 OBJS-$(CONFIG_CHROMANR_FILTER)   += vf_chromanr.o
 OBJS-$(CONFIG_CHROMASHIFT_FILTER)+= vf_chromashift.o
 OBJS-$(CONFIG_CIESCOPE_FILTER)   += vf_ciescope.o
diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
index ec70feef11..da1a96b23c 100644
--- a/libavfilter/allfilters.c
+++ b/libavfilter/allfilters.c
@@ -195,6 +195,7 @@ extern const AVFilter ff_vf_cas;
 extern const AVFilter ff_vf_chromaber_vulkan;
 extern const AVFilter ff_vf_chromahold;
 extern const AVFilter ff_vf_chromakey;
+extern const AVFilter ff_vf_chromakey_cuda;
 extern const AVFilter ff_vf_chromanr;
 extern const AVFilter ff_vf_chromashift;
 extern const AVFilter ff_vf_ciescope;
diff --git a/libavfilter/vf_chromakey_cuda.c b/libavfilter/vf_chromakey_cuda.c
new file mode 100644
index 00..b3b037d43a
--- /dev/null
+++ b/libavfilter/vf_chromakey_cuda.c
@@ -0,0 +1,450 @@
+/*
+ * Copyright (c) 2022 Mohamed Khaled 
+ *
+ * This file is 

Re: [FFmpeg-devel] [PATCH v2 2/2] ffmpeg: add option -isync

2022-07-05 Thread Anton Khirnov
Quoting Gyan Doshi (2022-07-04 10:20:22)
> 
> 
> On 2022-07-04 11:51 am, Anton Khirnov wrote:
> > Quoting Gyan Doshi (2022-07-02 11:51:53)
> >>
> >> On 2022-07-02 02:12 pm, Anton Khirnov wrote:
> >>> Quoting Gyan Doshi (2022-07-01 13:03:04)
>  On 2022-07-01 03:33 pm, Anton Khirnov wrote:
> > Quoting Gyan Doshi (2022-06-25 10:29:51)
> >> This is a per-file input option that adjusts an input's timestamps
> >> with reference to another input, so that emitted packet timestamps
> >> account for the difference between the start times of the two inputs.
> >>
> >> Typical use case is to sync two or more live inputs such as from 
> >> capture
> >> devices. Both the target and reference input source timestamps should 
> >> be
> >> based on the same clock source.
> > If both streams are using the same clock, then why is any extra
> > synchronization needed?
>  Because ffmpeg.c normalizes timestamps by default. We can keep
>  timestamps using -copyts, but these inputs are usually preprocessed
>  using single-input filters which won't have access to the reference
>  inputs,
> >>> No idea what you mean by "reference inputs" here.
> >> The reference input is the one the target is being synced against. e.g.
> >> in a karaoke session -  the music track from a DAW would be ref and the
> >> user's voice via mic is the target.
> >>
>  or the merge filters like e.g. amix don't sync by timestamp.
> >>> amix does seem to look at timestamps.
> >> amix does not *sync* by timestamp. If one input starts at 4 and the
> >> other at 7, the 2nd isn't aligned by timestamp.
> > So maybe it should?
> >
> > My concern generally with this patchset is that it seems like you're
> > changing things where it's easier to do rather than where it's correct.
> 
> There are many multi=input filters which may be used. amix is just one 
> example.
> 
> The basic 'deficiency' here is that filters operate upon frames and only 
> look at single frames for the most part, even though frames are part of 
> streams. These streams may have companion streams (which may be part of 
> programs) which are part of a single input. These inputs may have 
> companion inputs.  Anything in this tree may be relevant for a 
> particular operation as a reference, e.g. we have a bespoke filter 
> scale2ref so that we can look at another stream's frames. But we don't 
> have pad2ref, crop2ref ..etc.  So, the absolutely correct thing to do 
> would be to supply a global context to processing modules like 
> filtergraphs , maybe an array of dicts, containing attributes of all 
> inputs like starting time stamps, resolution, string metadata..etc. That 
> would obviate need for these bespoke fields and even filters.

I don't see how the second paragraph relates to the first one. scale,
pad, or crop are not multi-input filters, so why are you comparing them
to amix? I don't think there are so many multi-input filters in lavfi,
and the issue should be solvable using the same code for all of them.

Since both relevant streams are visible to the filter, no global context
of any kind should be needed.

> 
> But that's a much larger design undertaking and I'm just addressing one 
> specific practical need here. This patch is currently being used 
> successfully by commercial users in a private build. Many users have 
> posted to ffmpeg-users and popular forums over the years asking for 
> something that achieves this.
> 
> Actually, this functionality sounds like it sort of existed earlier in 
> the form of map sync (i.e. -map 1:a,0:a:1). Although the assignment 
> syntax still remains (and doesn't warn/error out),  it's a no-op now 
> since the application code was removed in 2012 by Michael, who said he 
> based it off an idea from one of your commits, presumably in Libav.

So why are you not restoring that functionality and adding a new option
instead?

-- 
Anton Khirnov


[FFmpeg-devel] [PATCH] avutil/hwcontext_d3d11va: add BGRA/RGBA10 formats support

2022-07-05 Thread Timo Rothenpieler
Desktop duplication outputs those formats.
---
 libavutil/hwcontext_d3d11va.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/libavutil/hwcontext_d3d11va.c b/libavutil/hwcontext_d3d11va.c
index 904d14bbc8..1f3d1b7755 100644
--- a/libavutil/hwcontext_d3d11va.c
+++ b/libavutil/hwcontext_d3d11va.c
@@ -86,6 +86,8 @@ static const struct {
 } supported_formats[] = {
 { DXGI_FORMAT_NV12, AV_PIX_FMT_NV12 },
 { DXGI_FORMAT_P010, AV_PIX_FMT_P010 },
+{ DXGI_FORMAT_B8G8R8A8_UNORM,AV_PIX_FMT_BGRA },
+{ DXGI_FORMAT_R10G10B10A2_UNORM, AV_PIX_FMT_X2BGR10 },
 // Special opaque formats. The pix_fmt is merely a place holder, as the
 // opaque format cannot be accessed directly.
 { DXGI_FORMAT_420_OPAQUE,   AV_PIX_FMT_YUV420P },
-- 
2.34.1



[FFmpeg-devel] [PATCH] avcodec/aacdec: fix parsing of dual mono files

2022-07-05 Thread James Almer
Dual mono files report a channel count of 2 with each individual channel in its
own SCE, instead of both in a single CPE as is the case with standard stereo.
This commit handles this non default channel configuration scenario.

Fixes ticket #1614

Signed-off-by: James Almer 
---
 libavcodec/aacdec_template.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/libavcodec/aacdec_template.c b/libavcodec/aacdec_template.c
index 0135cb35df..10fba3d3b2 100644
--- a/libavcodec/aacdec_template.c
+++ b/libavcodec/aacdec_template.c
@@ -701,14 +701,15 @@ static ChannelElement *get_che(AACContext *ac, int type, 
int elem_id)
 
 av_log(ac->avctx, AV_LOG_DEBUG, "stereo with SCE\n");
 
-if (set_default_channel_config(ac, ac->avctx, layout_map,
-   &layout_map_tags, 1) < 0)
-return NULL;
+layout_map_tags = 2;
+layout_map[0][0] = layout_map[1][0] = TYPE_SCE;
+layout_map[0][2] = layout_map[1][2] = AAC_CHANNEL_FRONT;
+layout_map[0][1] = 0;
+layout_map[1][1] = 1;
 if (output_configure(ac, layout_map, layout_map_tags,
  OC_TRIAL_FRAME, 1) < 0)
 return NULL;
 
-ac->oc[1].m4ac.chan_config = 1;
 if (ac->oc[1].m4ac.sbr)
 ac->oc[1].m4ac.ps = -1;
 }
@@ -786,8 +787,10 @@ static ChannelElement *get_che(AACContext *ac, int type, 
int elem_id)
 type == TYPE_CPE) {
 ac->tags_mapped++;
 return ac->tag_che_map[TYPE_CPE][elem_id] = ac->che[TYPE_CPE][0];
-} else if (ac->oc[1].m4ac.chan_config == 2) {
-return NULL;
+} else if (ac->tags_mapped == 1 && ac->oc[1].m4ac.chan_config == 2 &&
+type == TYPE_SCE) {
+ac->tags_mapped++;
+return ac->tag_che_map[TYPE_SCE][elem_id] = ac->che[TYPE_SCE][1];
 }
 case 1:
 if (!ac->tags_mapped && type == TYPE_SCE) {
-- 
2.37.0



Re: [FFmpeg-devel] [PATCH] avutil/hwcontext_d3d11va: add BGRA/RGBA10 formats support

2022-07-05 Thread Soft Works



> -Original Message-
> From: ffmpeg-devel  On Behalf Of
> Timo Rothenpieler
> Sent: Tuesday, July 5, 2022 6:15 PM
> To: ffmpeg-devel@ffmpeg.org
> Cc: Timo Rothenpieler 
> Subject: [FFmpeg-devel] [PATCH] avutil/hwcontext_d3d11va: add
> BGRA/RGBA10 formats support
> 
> Desktop duplication outputs those
> ---
>  libavutil/hwcontext_d3d11va.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/libavutil/hwcontext_d3d11va.c
> b/libavutil/hwcontext_d3d11va.c
> index 904d14bbc8..1f3d1b7755 100644
> --- a/libavutil/hwcontext_d3d11va.c
> +++ b/libavutil/hwcontext_d3d11va.c
> @@ -86,6 +86,8 @@ static const struct {
>  } supported_formats[] = {
>  { DXGI_FORMAT_NV12, AV_PIX_FMT_NV12 },
>  { DXGI_FORMAT_P010, AV_PIX_FMT_P010 },
> +{ DXGI_FORMAT_B8G8R8A8_UNORM,AV_PIX_FMT_BGRA },
> +{ DXGI_FORMAT_R10G10B10A2_UNORM, AV_PIX_FMT_X2BGR10 },
>  // Special opaque formats. The pix_fmt is merely a place holder,
> as the
>  // opaque format cannot be accessed directly.
>  { DXGI_FORMAT_420_OPAQUE,   AV_PIX_FMT_YUV420P },
> --

LGTM - at least I can say that for the first one, which I have had
for many years already. My current list at this place is this:

static const struct {
DXGI_FORMAT d3d_format;
enum AVPixelFormat pix_fmt;
} supported_formats[] = {
{ DXGI_FORMAT_NV12,AV_PIX_FMT_NV12 },
{ DXGI_FORMAT_P010,AV_PIX_FMT_P010 },
{ DXGI_FORMAT_Y210,AV_PIX_FMT_Y210 },
{ DXGI_FORMAT_P8,  AV_PIX_FMT_PAL8 },
{ DXGI_FORMAT_B8G8R8A8_UNORM,  AV_PIX_FMT_BGRA },
{ DXGI_FORMAT_P016,AV_PIX_FMT_P016 },
{ DXGI_FORMAT_YUY2,AV_PIX_FMT_YUYV422 },
// Special opaque formats. The pix_fmt is merely a place holder, as the
// opaque format cannot be accessed directly.
{ DXGI_FORMAT_420_OPAQUE,  AV_PIX_FMT_YUV420P },
};

Best regards,
softworkz


Re: [FFmpeg-devel] [PATCH] Make execute() and execute2() return FFMIN() of thread return codes

2022-07-05 Thread Anton Khirnov
Quoting Tomas Härdin (2022-07-04 12:46:04)
> Sat 2022-07-02 at 11:43 +0200, Anton Khirnov wrote:
> > Quoting Tomas Härdin (2022-06-30 14:42:42)
> > > Hi
> > > 
> > > Previous version of this patch failed fate-fic-avi with
> > > THREAD_TYPE=slice THREADS=2 due to thread_execute() always
> > > returning 0.
> > > Fixed in this version.
> > > 
> > > The fic sample appears to indeed be broken. Some packets are
> > > truncated,
> > > including one zero-length packet.
> > 
> > maybe mention this fact for posterity above the relevant test in
> > tests/fate/screen.mak
> 
> Sure
> 
> > 
> > > diff --git a/libavutil/slicethread.h b/libavutil/slicethread.h
> > > index f6f6f302c4..5c8f197932 100644
> > > --- a/libavutil/slicethread.h
> > > +++ b/libavutil/slicethread.h
> > > @@ -31,8 +31,8 @@ typedef struct AVSliceThread AVSliceThread;
> > >   * @return return number of threads or negative AVERROR on failure
> > >   */
> > >  int avpriv_slicethread_create(AVSliceThread **pctx, void *priv,
> > > -  void (*worker_func)(void *priv, int
> > > jobnr, int threadnr, int nb_jobs, int nb_threads),
> > > -  void (*main_func)(void *priv),
> > > +  int (*worker_func)(void *priv, int
> > > jobnr, int threadnr, int nb_jobs, int nb_threads),
> > > +  int (*main_func)(void *priv),
> > 
> > This is an ABI break.
> > 
> 
> You're right. I was under the impression that avpriv functions could be
> changed more freely but obviously not when they're shared between
> libraries.
> 
> This could be worked around with a new function called, say,
> avpriv_slicethread_create2() and a minor bump. Another approach is to
> av_fast_malloc() an array for rets when none is given. The number of
> allocations would thereby be quite limited.

Adding a new function is ok I suppose.

Though longer-term I'd like to see that API cleaned up and made properly
public.

> I'd like to see all users of execute() and execute2() start checking
> their return values, and making that easier to do is another step
> toward more correctness of the code

I'm all in favor of that.

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH 2/2] avformat/mov: Support parsing of still AVIF Alpha Channel

2022-07-05 Thread Anton Khirnov
Quoting Vignesh Venkatasubramanian (2022-07-02 23:15:35)
> > As for encoding, not fully sure how it should be integrated, if any
> > encoders actually at this moment do proper alpha coding, or do all API
> > clients have to separately encode with one context the primary image,
> > and the alpha with another?
> 
> I am not sure about other codecs, but in the case of AVIF/AV1, the
> encoder does not understand/support alpha channels. The only way to do
> it is to use two separate encoders.

Can this not be handled in our encoder wrapper?

We should strive as much as possible to shield our callers from
codec-specific implementation details.

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH 2/2] fftools/ffmpeg_filter: Fix audio_drift_threshold check

2022-07-05 Thread Anton Khirnov
Quoting Michael Niedermayer (2022-07-02 14:56:50)
> On Sat, Jul 02, 2022 at 10:38:26AM +0200, Anton Khirnov wrote:
> > Quoting Michael Niedermayer (2022-07-01 21:53:00)
> > > On Thu, Jun 30, 2022 at 10:55:46AM +0200, Anton Khirnov wrote:
> > > > Variant 2 is less bad, but the whole check seems hacky to me, since it
> > > > seems to make assumptions about swr defaults
> > > > 
> > > > Won't setting this unconditionally have the same effect?
> > > 
> > > it has the same effect but its not so nice to the user to recommand extra
> > > arguments which make no difference
> > 
> > Sorry, I don't follow. What is recommending any arguments to the user
> > here?
> 
> i meant this thing here:
> ./ffmpeg   -i matrixbench_mpeg2.mpg -async 1 -f null -  2>&1 | grep async
> 
> -async is forwarded to lavfi similarly to -af 
> aresample=async=1:min_hard_comp=0.10:first_pts=0.
> 
> vs.
> 
> -async is forwarded to lavfi similarly to -af aresample=async=1:first_pts=0.

I don't see a problem - why would the user care how it is forwarded?

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH] lavu: always provide symbols from hwcontext_vulkan.h

2022-07-05 Thread Michael Niedermayer
On Tue, Jul 05, 2022 at 02:11:01AM +0200, Niklas Haas wrote:
> From: Niklas Haas 
> 
> This header is unconditionally installed, even though the utility
> functions defined by it may be missing from the built library.
> 
> A precedent set by e.g. libavcodec/qsv.h (and others) is to always
> provide these functions by compiling stub functions in the absence of
> CONFIG_*. Make hwcontext_vulkan.h match this convention.
> 
> Fixes downstream issues, e.g.
> https://github.com/haasn/libplacebo/issues/120
> 
> Signed-off-by: Niklas Haas 
> ---
>  libavutil/Makefile   |  2 +-
>  libavutil/hwcontext_vulkan.c | 26 --
>  2 files changed, 25 insertions(+), 3 deletions(-)

breaks build with shared libs

LD  libavutil/libavutil.so.57
libavutil/hwcontext_vulkan.o: In function `av_vk_frame_alloc':
ffmpeg/linux64shared/src/libavutil/hwcontext_vulkan.c:4177: multiple definition 
of `av_vk_frame_alloc'
libavutil/hwcontext_stub.o:ffmpeg/linux64shared/src/libavutil/hwcontext_stub.c:37:
 first defined here
libavutil/hwcontext_vulkan.o: In function `av_vkfmt_from_pixfmt':
ffmpeg/linux64shared/src/libavutil/hwcontext_vulkan.c:4182: multiple definition 
of `av_vkfmt_from_pixfmt'
libavutil/hwcontext_stub.o:ffmpeg/linux64shared/src/libavutil/hwcontext_stub.c:32:
 first defined here
clang: error: linker command failed with exit code 1 (use -v to see invocation)
ffmpeg/ffbuild/library.mak:118: recipe for target 'libavutil/libavutil.so.57' 
failed
make: *** [libavutil/libavutil.so.57] Error 1


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Breaking DRM is a little like attempting to break through a door even
though the window is wide open and the only thing in the house is a bunch
of things you dont want and which you would get tomorrow for free anyway




Re: [FFmpeg-devel] [PATCH v2 2/2] ffmpeg: add option -isync

2022-07-05 Thread Gyan Doshi



On 2022-07-05 09:45 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-04 10:20:22)


On 2022-07-04 11:51 am, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-02 11:51:53)

On 2022-07-02 02:12 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-01 13:03:04)

On 2022-07-01 03:33 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-06-25 10:29:51)

This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.

Typical use case is to sync two or more live inputs such as from capture
devices. Both the target and reference input source timestamps should be
based on the same clock source.

If both streams are using the same clock, then why is any extra
synchronization needed?

Because ffmpeg.c normalizes timestamps by default. We can keep
timestamps using -copyts, but these inputs are usually preprocessed
using single-input filters which won't have access to the reference
inputs,

No idea what you mean by "reference inputs" here.

The reference input is the one the target is being synced against. e.g.
in a karaoke session -  the music track from a DAW would be ref and the
user's voice via mic is the target.


or the merge filters like e.g. amix don't sync by timestamp.

amix does seem to look at timestamps.

amix does not *sync* by timestamp. If one input starts at 4 and the
other at 7, the 2nd isn't aligned by timestamp.

So maybe it should?

My concern generally with this patchset is that it seems like you're
changing things where it's easier to do rather than where it's correct.

There are many multi=input filters which may be used. amix is just one
example.

The basic 'deficiency' here is that filters operate upon frames and only
look at single frames for the most part, even though frames are part of
streams. These streams may have companion streams (which may be part of
programs) which are part of a single input. These inputs may have
companion inputs.  Anything in this tree may be relevant for a
particular operation as a reference, e.g. we have a bespoke filter
scale2ref so that we can look at another stream's frames. But we don't
have pad2ref, crop2ref, etc.  So, the absolutely correct thing to do
would be to supply a global context to processing modules like
filtergraphs, maybe an array of dicts, containing attributes of all
inputs like starting timestamps, resolution, string metadata, etc. That
would obviate the need for these bespoke fields and even filters.

I don't see how the second paragraph relates to the first one. scale,
pad, or crop are not multi-input filters, so why are you comparing them


scale is a single-input filter but scale2ref is a multi-input filter 
which is needed solely because there is no means at present to convey 
info about other streams to a single-input filter.
Similarly, we would need a crop2ref, pad2ref, etc. to achieve the same 
attribute transfer.  If we had a global context, these counterpart 
filters wouldn't be necessary.




to amix? I don't think there are so many multi-input filters in lavfi,
and the issue should be solvable using the same code for all of them.


Because information about other streams isn't helpful only at the point of 
multi-input filter use. One of the streams may want to be resampled to a 
specific rate or sample format based on some user's logic instead of 
letting amix choose one. That's where a global context would help.



Actually, this functionality sounds like it sort of existed earlier in
the form of map sync (i.e. -map 1:a,0:a:1). Although the assignment
syntax still remains (and doesn't warn/error out),  it's a no-op now
since the application code was removed in 2012 by Michael, who said he
based it off an idea from one of your commits, presumably in Libav.

So why are you not restoring that functionality and adding a new option
instead?


I said 'sort of'. That adjustment was implemented in do_video/audio_out, 
so it won't help in filtering or streamcopying.
This current option adjusts just after demux, so it doesn't have those 
limitations.


Regards,
Gyan


Re: [FFmpeg-devel] [PATCH] avcodec: add Radiance HDR image format support

2022-07-05 Thread Anton Khirnov
Quoting Paul B Mahol (2022-07-05 17:45:20)
> Hello,
> 
> Patch attached.
> 
> From 62fce2bfd811eb4fb86b5907d62e67f0a2d033ff Mon Sep 17 00:00:00 2001
> From: Paul B Mahol 
> Date: Sun, 3 Jul 2022 23:50:05 +0200
> Subject: [PATCH] avcodec: add Radiance HDR image format support
> 
> Signed-off-by: Paul B Mahol 
> ---
>  doc/general_contents.texi |   2 +
>  libavcodec/Makefile   |   2 +
>  libavcodec/allcodecs.c|   2 +
>  libavcodec/codec_desc.c   |   7 ++
>  libavcodec/codec_id.h |   1 +
>  libavcodec/hdrdec.c   | 212 ++
>  libavcodec/hdrenc.c   | 188 +
>  libavformat/Makefile  |   1 +
>  libavformat/allformats.c  |   1 +
>  libavformat/img2.c|   1 +
>  libavformat/img2dec.c |   8 ++
>  libavformat/img2enc.c |   2 +-
>  12 files changed, 426 insertions(+), 1 deletion(-)
>  create mode 100644 libavcodec/hdrdec.c
>  create mode 100644 libavcodec/hdrenc.c

without this codec having tests the time cube cannot be completed

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH] lavu: always provide symbols from hwcontext_vulkan.h

2022-07-05 Thread Andreas Rheinhardt
Michael Niedermayer:
> On Tue, Jul 05, 2022 at 02:11:01AM +0200, Niklas Haas wrote:
>> From: Niklas Haas 
>>
>> This header is unconditionally installed, even though the utility
>> functions defined by it may be missing from the built library.
>>
>> A precedent set by e.g. libavcodec/qsv.h (and others) is to always
>> provide these functions by compiling stub functions in the absence of
>> CONFIG_*. Make hwcontext_vulkan.h match this convention.
>>
>> Fixes downstream issues, e.g.
>> https://github.com/haasn/libplacebo/issues/120
>>
>> Signed-off-by: Niklas Haas 
>> ---
>>  libavutil/Makefile   |  2 +-
>>  libavutil/hwcontext_vulkan.c | 26 --
>>  2 files changed, 25 insertions(+), 3 deletions(-)
> 
> breaks build with shared libs
> 
> LD  libavutil/libavutil.so.57
> libavutil/hwcontext_vulkan.o: In function `av_vk_frame_alloc':
> ffmpeg/linux64shared/src/libavutil/hwcontext_vulkan.c:4177: multiple 
> definition of `av_vk_frame_alloc'
> libavutil/hwcontext_stub.o:ffmpeg/linux64shared/src/libavutil/hwcontext_stub.c:37:
>  first defined here
> libavutil/hwcontext_vulkan.o: In function `av_vkfmt_from_pixfmt':
> ffmpeg/linux64shared/src/libavutil/hwcontext_vulkan.c:4182: multiple 
> definition of `av_vkfmt_from_pixfmt'
> libavutil/hwcontext_stub.o:ffmpeg/linux64shared/src/libavutil/hwcontext_stub.c:32:
>  first defined here
> clang: error: linker command failed with exit code 1 (use -v to see 
> invocation)
> ffmpeg/ffbuild/library.mak:118: recipe for target 'libavutil/libavutil.so.57' 
> failed
> make: *** [libavutil/libavutil.so.57] Error 1
> 
> 

This commit has been superseded by
f9dd8fcf9b87e757096de993dd32571c4a85a2cb (which fixes the issue in a
different way and together with this patch causes the issue you
encountered).

- Andreas


Re: [FFmpeg-devel] [PATCH v2 2/2] ffmpeg: add option -isync

2022-07-05 Thread Anton Khirnov
Quoting Gyan Doshi (2022-07-05 19:10:33)
> 
> 
> On 2022-07-05 09:45 pm, Anton Khirnov wrote:
> > Quoting Gyan Doshi (2022-07-04 10:20:22)
> >>
> >> On 2022-07-04 11:51 am, Anton Khirnov wrote:
> >>> Quoting Gyan Doshi (2022-07-02 11:51:53)
>  On 2022-07-02 02:12 pm, Anton Khirnov wrote:
> > Quoting Gyan Doshi (2022-07-01 13:03:04)
> >> On 2022-07-01 03:33 pm, Anton Khirnov wrote:
> >>> Quoting Gyan Doshi (2022-06-25 10:29:51)
>  This is a per-file input option that adjusts an input's timestamps
>  with reference to another input, so that emitted packet timestamps
>  account for the difference between the start times of the two inputs.
> 
>  Typical use case is to sync two or more live inputs such as from 
>  capture
>  devices. Both the target and reference input source timestamps 
>  should be
>  based on the same clock source.
> >>> If both streams are using the same clock, then why is any extra
> >>> synchronization needed?
> >> Because ffmpeg.c normalizes timestamps by default. We can keep
> >> timestamps using -copyts, but these inputs are usually preprocessed
> >> using single-input filters which won't have access to the reference
> >> inputs,
> > No idea what you mean by "reference inputs" here.
>  The reference input is the one the target is being synced against. e.g.
>  in a karaoke session -  the music track from a DAW would be ref and the
>  user's voice via mic is the target.
> 
> >> or the merge filters like e.g. amix don't sync by timestamp.
> > amix does seem to look at timestamps.
>  amix does not *sync* by timestamp. If one input starts at 4 and the
>  other at 7, the 2nd isn't aligned by timestamp.
> >>> So maybe it should?
> >>>
> >>> My concern generally with this patchset is that it seems like you're
> >>> changing things where it's easier to do rather than where it's correct.
> >> There are many multi=input filters which may be used. amix is just one
> >> example.
> >>
> >> The basic 'deficiency' here is that filters operate upon frames and only
> >> look at single frames for the most part, even though frames are part of
> >> streams. These streams may have companion streams (which may be part of
> >> programs) which are part of a single input. These inputs may have
> >> companion inputs.  Anything in this tree may be relevant for a
> >> particular operation as a reference, e.g. we have a bespoke filter
> >> scale2ref so that we can look at another stream's frames. But we don't
> >> have pad2ref, crop2ref ..etc.  So, the absolutely correct thing to do
> >> would be to supply a global context to processing modules like
> >> filtergraphs , maybe an array of dicts, containing attributes of all
> >> inputs like starting time stamps, resolution, string metadata..etc. That
> >> would obviate need for these bespoke fields and even filters.
> > I don't see how the second paragraph relates to the first one. scale,
> > pad, or crop are not multi-input filters, so why are you comparing them
> 
> scale is a single-input filter but scale2ref is a multi-input filter 
> which is needed solely because there is no means at present to convey 
> info about other streams to a single-input filter.
> Similarly, we would need a crop2ref, pad2ref, etc. to achieve the same 
> attribute transfer.  If we had a global context, these counterpart 
> filters wouldn't be necessary.

In my experience, global *anything* is almost always a sign of bad
design and only leads to pain and suffering. The proper solution in this
case would be making the filtergraph construction API more flexible.
Then the code that actually has all the necessary information (i.e.
ffmpeg.c or other library caller) would set the filter parameters
however you want. Then none of these whatever2ref hacks would be needed.

-- 
Anton Khirnov


[FFmpeg-devel] [PATCH 1/8] avutil/mem: Handle fast allocations near UINT_MAX properly

2022-07-05 Thread Andreas Rheinhardt
av_fast_realloc and av_fast_malloc(z) store the size of
the objects they allocate in an unsigned. Yet they overallocate
and currently they can allocate more than UINT_MAX bytes
in case a user has requested a size of about UINT_MAX * 16 / 17
or more if SIZE_MAX > UINT_MAX. In this case it is impossible
to store the true size of the buffer via the unsigned*;
future requests are likely to use the (re)allocation codepath
even if the buffer is actually large enough because of
the incorrect size.

Fix this by ensuring that the actually allocated size
always fits into an unsigned. (This entails erroring out
in case the user requested more than UINT_MAX.)

Signed-off-by: Andreas Rheinhardt 
---
 libavutil/mem.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/libavutil/mem.c b/libavutil/mem.c
index a0c9a42849..18aff5291f 100644
--- a/libavutil/mem.c
+++ b/libavutil/mem.c
@@ -510,6 +510,8 @@ void *av_fast_realloc(void *ptr, unsigned int *size, size_t 
min_size)
 return ptr;
 
 max_size = atomic_load_explicit(&max_alloc_size, memory_order_relaxed);
+/* *size is an unsigned, so the real maximum is <= UINT_MAX. */
+max_size = FFMIN(max_size, UINT_MAX);
 
 if (min_size > max_size) {
 *size = 0;
@@ -542,6 +544,8 @@ static inline void fast_malloc(void *ptr, unsigned int 
*size, size_t min_size, i
 }
 
 max_size = atomic_load_explicit(&max_alloc_size, memory_order_relaxed);
+/* *size is an unsigned, so the real maximum is <= UINT_MAX. */
+max_size = FFMIN(max_size, UINT_MAX);
 
 if (min_size > max_size) {
 av_freep(ptr);
-- 
2.34.1



[FFmpeg-devel] [PATCH 2/8] avformat/flvenc: Add deinit function

2022-07-05 Thread Andreas Rheinhardt
Fixes memleaks when the trailer is never written or when shift_data()
fails when writing the trailer.

Signed-off-by: Andreas Rheinhardt 
---
This is the same as
https://patchwork.ffmpeg.org/project/ffmpeg/patch/20191023125944.10292-4-andreas.rheinha...@gmail.com/

 libavformat/flvenc.c | 30 --
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/libavformat/flvenc.c b/libavformat/flvenc.c
index 4e65ba4066..b6135b25bf 100644
--- a/libavformat/flvenc.c
+++ b/libavformat/flvenc.c
@@ -733,7 +733,7 @@ static int flv_write_trailer(AVFormatContext *s)
 int64_t cur_pos = avio_tell(s->pb);
 
 if (build_keyframes_idx) {
-FLVFileposition *newflv_posinfo, *p;
+FLVFileposition *newflv_posinfo;
 
 avio_seek(pb, flv->videosize_offset, SEEK_SET);
 put_amf_double(pb, flv->videosize);
@@ -768,19 +768,6 @@ static int flv_write_trailer(AVFormatContext *s)
 put_amf_double(pb, newflv_posinfo->keyframe_timestamp);
 }
 
-newflv_posinfo = flv->head_filepositions;
-while (newflv_posinfo) {
-p = newflv_posinfo->next;
-if (p) {
-newflv_posinfo->next = p->next;
-av_free(p);
-p = NULL;
-} else {
-av_free(newflv_posinfo);
-newflv_posinfo = NULL;
-}
-}
-
 put_amf_string(pb, "");
 avio_w8(pb, AMF_END_OF_OBJECT);
 
@@ -1047,6 +1034,20 @@ static int flv_check_bitstream(AVFormatContext *s, 
AVStream *st,
 return ret;
 }
 
+static void flv_deinit(AVFormatContext *s)
+{
+FLVContext *flv = s->priv_data;
+FLVFileposition *filepos = flv->head_filepositions;
+
+while (filepos) {
+FLVFileposition *next = filepos->next;
+av_free(filepos);
+filepos = next;
+}
+flv->filepositions = flv->head_filepositions = NULL;
+flv->filepositions_count = 0;
+}
+
 static const AVOption options[] = {
 { "flvflags", "FLV muxer flags", offsetof(FLVContext, flags), 
AV_OPT_TYPE_FLAGS, {.i64 = 0}, INT_MIN, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM, 
"flvflags" },
 { "aac_seq_header_detect", "Put AAC sequence header based on stream data", 
0, AV_OPT_TYPE_CONST, {.i64 = FLV_AAC_SEQ_HEADER_DETECT}, INT_MIN, INT_MAX, 
AV_OPT_FLAG_ENCODING_PARAM, "flvflags" },
@@ -1076,6 +1077,7 @@ const AVOutputFormat ff_flv_muxer = {
 .write_header   = flv_write_header,
 .write_packet   = flv_write_packet,
 .write_trailer  = flv_write_trailer,
+.deinit = flv_deinit,
 .check_bitstream= flv_check_bitstream,
 .codec_tag  = (const AVCodecTag* const []) {
   flv_video_codec_ids, flv_audio_codec_ids, 0
-- 
2.34.1



[FFmpeg-devel] [PATCH 3/8] avutil/mem: Add av_fast_realloc_array()

2022-07-05 Thread Andreas Rheinhardt
From: Andreas Rheinhardt 

This is an array-equivalent of av_fast_realloc(). Its advantages
compared to using av_fast_realloc() for allocating arrays are as
follows:

a) It performs its own overflow checks for the multiplication that is
implicit in array allocations. (And it only needs to perform these
checks (as well as the multiplication itself) in case the array needs to
be reallocated.)
b) It allows limiting the number of elements to an upper bound given
by the caller. This allows restricting the number of allocated elements
to fit into an int and therefore makes this function usable with
counters of this type. It can also be used to avoid overflow checks in
the caller: E.g. setting it to UINT_MAX - 1 elements makes it safe to
increase the desired number of elements in steps of one. And it avoids
overallocations in situations where one already has an upper bound.
c) av_fast_realloc_array() will always allocate in multiples of array
elements; no memory is wasted with partial elements.
d) By returning an int, av_fast_realloc_array() can distinguish between
ordinary allocation failures (meriting AVERROR(ENOMEM)) and failures
because of allocation limits (by returning AVERROR(ERANGE)).
e) It is no longer possible for the user to accidentally lose the
pointer by using ptr = av_fast_realloc(ptr, ...).

Because of e) there is no need to set the number of allocated elements
to zero on failure.

av_fast_realloc() usually allocates size + size / 16 + 32 bytes if size
bytes are desired and if the already existing buffer isn't big enough.
av_fast_realloc_array() instead allocates nb + (nb + 14) / 16. Rounding
up is done in order not to reallocate in steps of one if the current
number is < 16; adding 14 instead of 15 has the effect of only
allocating one element if one element is desired. This is done with an
eye towards applications where arrays might commonly only contain one
element (as happens with the Matroska CueTrackPositions).

Which of the two functions allocates faster depends upon the size of
the elements. E.g. if the elements have a size of 32B and the desired
size is incremented in steps of one, allocations happen at
1, 3, 5, 7, 9, 11, 13, 15, 17, 20, 23, 26 ... for av_fast_realloc(),
1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 21, 24 ... for
av_fast_realloc_array(). For element sizes of 96B, the numbers are
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 21, 23, 25, 27, 30 ...
for av_fast_realloc() whereas the pattern for av_fast_realloc_array() is
unchanged.

Signed-off-by: Andreas Rheinhardt 
---
This patch (and some of the other patches of this patchset)
are mostly the same as the one in these threads:
https://ffmpeg.org/pipermail/ffmpeg-devel/2019-December/254836.html
https://ffmpeg.org/pipermail/ffmpeg-devel/2020-January/255182.html

 doc/APIchanges  |  3 +++
 libavutil/mem.c | 33 +
 libavutil/mem.h | 30 ++
 libavutil/version.h |  2 +-
 4 files changed, 67 insertions(+), 1 deletion(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index 20b944933a..f633ae6fee 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -14,6 +14,9 @@ libavutil: 2021-04-27
 
 API changes, most recent first:
 
+2022-07-05 - xx - lavu 57.28.100 - mem.h
+  Add av_fast_realloc_array() to simplify reallocating of arrays.
+
 2022-06-12 - xx - lavf 59.25.100 - avio.h
   Add avio_vprintf(), similar to avio_printf() but allow to use it
   from within a function taking a variable argument list as input.
diff --git a/libavutil/mem.c b/libavutil/mem.c
index 18aff5291f..6e3942ae63 100644
--- a/libavutil/mem.c
+++ b/libavutil/mem.c
@@ -532,6 +532,39 @@ void *av_fast_realloc(void *ptr, unsigned int *size, 
size_t min_size)
 return ptr;
 }
 
+int av_fast_realloc_array(void *ptr, size_t *nb_allocated,
+  size_t min_nb, size_t max_nb, size_t elsize)
+{
+void *array;
+size_t nb, max_alloc_size_bytes;
+
+if (min_nb <= *nb_allocated)
+return 0;
+
+max_alloc_size_bytes = atomic_load_explicit(&max_alloc_size, 
memory_order_relaxed);
+max_nb = FFMIN(max_nb, max_alloc_size_bytes / elsize);
+
+if (min_nb > max_nb)
+return AVERROR(ERANGE);
+
+nb = min_nb + (min_nb + 14) / 16;
+
+/* If min_nb is so big that the above calculation overflowed,
+ * just allocate as much as we are allowed to. */
+nb = nb < min_nb ? max_nb : FFMIN(nb, max_nb);
+
+memcpy(&array, ptr, sizeof(array));
+
+array = av_realloc(array, nb * elsize);
+if (!array)
+return AVERROR(ENOMEM);
+
+memcpy(ptr, &array, sizeof(array));
+*nb_allocated = nb;
+
+return 0;
+}
+
static inline void fast_malloc(void *ptr, unsigned int *size, size_t min_size, int zero_realloc)
 {
 size_t max_size;
diff --git a/libavutil/mem.h b/libavutil/mem.h
index d91174196c..f040de08fc 100644
--- a/libavutil/mem.h
+++ b/libavutil/mem.h
@@ -375,11 +375,41 @@ int av_reallocp_array(void *ptr, size_t nmemb, size_t 

[FFmpeg-devel] [PATCH 4/8] avformat/flvenc: Use array instead of linked list for index

2022-07-05 Thread Andreas Rheinhardt
Using a linked list had a lot of overhead (the pointer to the next
entry increased the size of the index entry struct from 16 to 24 bytes,
not to mention the overhead of having separate allocations), so it is
better to (re)allocate a contiguous array for the index.
av_fast_realloc_array() is used for this purpose, in order not to
reallocate the array for each entry. Its feature of allowing the number
of elements of the array to be restricted is put to good use here:
Each entry will lead to 18 bytes being written and the array is
contained in an element whose size field has a length of three bytes,
so that more than (2^24 - 1) / 18 entries make no sense.

Signed-off-by: Andreas Rheinhardt 
---
 libavformat/flvenc.c | 60 +---
 1 file changed, 23 insertions(+), 37 deletions(-)

diff --git a/libavformat/flvenc.c b/libavformat/flvenc.c
index b6135b25bf..426135aa15 100644
--- a/libavformat/flvenc.c
+++ b/libavformat/flvenc.c
@@ -34,6 +34,8 @@
 #include "libavutil/opt.h"
 #include "libavcodec/put_bits.h"
 
+/* Each FLVFileposition entry is written as two AMF double values. */
+#define FILEPOSITION_ENTRY_SIZE (2 * 9)
 
 static const AVCodecTag flv_video_codec_ids[] = {
 { AV_CODEC_ID_FLV1, FLV_CODECID_H263 },
@@ -73,7 +75,6 @@ typedef enum {
 typedef struct FLVFileposition {
 int64_t keyframe_position;
 double keyframe_timestamp;
-struct FLVFileposition *next;
 } FLVFileposition;
 
 typedef struct FLVContext {
@@ -107,9 +108,9 @@ typedef struct FLVContext {
 int acurframeindex;
 int64_t keyframes_info_offset;
 
-int64_t filepositions_count;
 FLVFileposition *filepositions;
-FLVFileposition *head_filepositions;
+unsigned filepositions_count;
+size_t filepositions_allocated;
 
 AVCodecParameters *audio_par;
 AVCodecParameters *video_par;
@@ -548,27 +549,20 @@ static void flv_write_codec_header(AVFormatContext* s, AVCodecParameters* par, i
 
static int flv_append_keyframe_info(AVFormatContext *s, FLVContext *flv, double ts, int64_t pos)
 {
-FLVFileposition *position = av_malloc(sizeof(FLVFileposition));
-
-if (!position) {
-av_log(s, AV_LOG_WARNING, "no mem for add keyframe index!\n");
-return AVERROR(ENOMEM);
-}
-
-position->keyframe_timestamp = ts;
-position->keyframe_position = pos;
-
-if (!flv->filepositions_count) {
-flv->filepositions = position;
-flv->head_filepositions = flv->filepositions;
-position->next = NULL;
-} else {
-flv->filepositions->next = position;
-position->next = NULL;
-flv->filepositions = flv->filepositions->next;
+/* The filepositions array is part of a metadata element whose size field
+ * is three bytes long; so bound the number of filepositions in order not
+ * to allocate more than could ever be written. */
+int ret = av_fast_realloc_array(&flv->filepositions,
+&flv->filepositions_allocated,
+flv->filepositions_count + 1,
+(((1 << 24) - 1) - 10) / FILEPOSITION_ENTRY_SIZE,
+sizeof(*flv->filepositions));
+if (ret < 0) {
+av_log(s, AV_LOG_WARNING, "Adding entry to keyframe index failed.\n");
+return ret;
 }
 
-flv->filepositions_count++;
+flv->filepositions[flv->filepositions_count++] = (FLVFileposition){ pos, ts };
 
 return 0;
 }
@@ -733,7 +727,7 @@ static int flv_write_trailer(AVFormatContext *s)
 int64_t cur_pos = avio_tell(s->pb);
 
 if (build_keyframes_idx) {
-FLVFileposition *newflv_posinfo;
+const FLVFileposition *flv_posinfo = flv->filepositions;
 
 avio_seek(pb, flv->videosize_offset, SEEK_SET);
 put_amf_double(pb, flv->videosize);
@@ -758,15 +752,13 @@ static int flv_write_trailer(AVFormatContext *s)
 avio_seek(pb, flv->keyframes_info_offset, SEEK_SET);
 put_amf_string(pb, "filepositions");
 put_amf_dword_array(pb, flv->filepositions_count);
-for (newflv_posinfo = flv->head_filepositions; newflv_posinfo; newflv_posinfo = newflv_posinfo->next) {
-put_amf_double(pb, newflv_posinfo->keyframe_position + flv->keyframe_index_size);
-}
+for (unsigned i = 0; i < flv->filepositions_count; i++)
+put_amf_double(pb, flv_posinfo[i].keyframe_position + flv->keyframe_index_size);
 
 put_amf_string(pb, "times");
 put_amf_dword_array(pb, flv->filepositions_count);
-for (newflv_posinfo = flv->head_filepositions; newflv_posinfo; newflv_posinfo = newflv_posinfo->next) {
-put_amf_double(pb, newflv_posinfo->keyframe_timestamp);
-}
+for (unsigned i = 0; i < flv->filepositions_count; i++)
+put_amf_double(pb, flv_posinfo[i].keyframe_timestamp);
 
 put_amf_string(pb, "");
 avio_w8(pb, AMF_END_OF_OBJECT);
@@ -1037,15 +1029,9 @@ static

[FFmpeg-devel] [PATCH 5/8] avformat/matroskaenc: Use av_fast_realloc_array for index entries

2022-07-05 Thread Andreas Rheinhardt
Currently, the Matroska muxer reallocates its array of index entries
each time another entry is added. This is bad performance-wise,
especially on Windows where reallocations are slow. This is solved
by switching to av_fast_realloc_array(), which ensures that actual
reallocations will happen only seldom.

For an (admittedly extreme) example which consists of looping a video
consisting of a single keyframe of size 4KB 54 times this improved
the time for writing a frame from 23524201 decicycles (516466 runs,
7822 skips) to 225240 decicycles (522122 runs, 2166 skips) on Windows.

(Writing CRC-32 elements was disabled for these tests.)

Signed-off-by: Andreas Rheinhardt 
---
 libavformat/matroskaenc.c | 23 ---
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/libavformat/matroskaenc.c b/libavformat/matroskaenc.c
index 1256bdfe36..7c7de612de 100644
--- a/libavformat/matroskaenc.c
+++ b/libavformat/matroskaenc.c
@@ -168,7 +168,8 @@ typedef struct mkv_cuepoint {
 
 typedef struct mkv_cues {
 mkv_cuepoint   *entries;
-int num_entries;
+size_t  num_entries;
+size_t  allocated_entries;
 } mkv_cues;
 
 struct MatroskaMuxContext;
@@ -257,6 +258,10 @@ typedef struct MatroskaMuxContext {
 /** 4 * (1-byte EBML ID, 1-byte EBML size, 8-byte uint max) */
 #define MAX_CUETRACKPOS_SIZE 40
 
+/** Minimal size of CueTrack, CueClusterPosition and CueRelativePosition,
+ *  and 1 + 1 bytes for the overhead of CueTrackPositions itself. */
+#define MIN_CUETRACKPOS_SIZE (1 + 1 + 3 * (1 + 1 + 1))
+
 /** 2 + 1 Simpletag header, 2 + 1 + 8 Name "DURATION", 23B for TagString */
 #define DURATION_SIMPLETAG_SIZE (2 + 1 + (2 + 1 + 8) + 23)
 
@@ -914,16 +919,20 @@ static int mkv_add_cuepoint(MatroskaMuxContext *mkv, int stream, int64_t ts,
int64_t cluster_pos, int64_t relative_pos, int64_t duration)
 {
 mkv_cues *cues = &mkv->cues;
-mkv_cuepoint *entries = cues->entries;
-unsigned idx = cues->num_entries;
+mkv_cuepoint *entries;
+size_t idx = cues->num_entries;
+int ret;
 
 if (ts < 0)
 return 0;
 
-entries = av_realloc_array(entries, cues->num_entries + 1, sizeof(mkv_cuepoint));
-if (!entries)
-return AVERROR(ENOMEM);
-cues->entries = entries;
+ret = av_fast_realloc_array(&cues->entries, &cues->allocated_entries,
+cues->num_entries + 1,
+MAX_SUPPORTED_EBML_LENGTH / MIN_CUETRACKPOS_SIZE,
+sizeof(*cues->entries));
+if (ret < 0)
+return ret;
+entries = cues->entries;
 
 /* Make sure the cues entries are sorted by pts. */
 while (idx > 0 && entries[idx - 1].pts > ts)
-- 
2.34.1



[FFmpeg-devel] [PATCH 6/8] avcodec/movtextenc: Use av_fast_realloc_array

2022-07-05 Thread Andreas Rheinhardt
It has the advantage of not overallocating beyond the maximum
number of entries that can potentially be used (namely UINT16_MAX).

Signed-off-by: Andreas Rheinhardt 
---
 libavcodec/movtextenc.c | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/libavcodec/movtextenc.c b/libavcodec/movtextenc.c
index 728338f2cc..fb19b110b4 100644
--- a/libavcodec/movtextenc.c
+++ b/libavcodec/movtextenc.c
@@ -76,7 +76,7 @@ typedef struct {
 ASSStyle *ass_dialog_style;
 StyleBox *style_attributes;
 unsigned  count;
-unsigned  style_attributes_bytes_allocated;
+size_t  style_attributes_allocated;
 StyleBox  style_attributes_temp;
 AVBPrint buffer;
 HighlightBox hlit;
@@ -342,6 +342,8 @@ static av_cold int mov_text_encode_init(AVCodecContext *avctx)
 // Start a new style box if needed
 static int mov_text_style_start(MovTextContext *s)
 {
+int ret;
+
 // there's an existing style entry
 if (s->style_attributes_temp.style_start == s->text_pos)
 // Still at same text pos, use same entry
@@ -353,10 +355,9 @@ static int mov_text_style_start(MovTextContext *s)
 StyleBox *tmp;
 
 // last style != defaults, end the style entry and start a new one
-if (s->count + 1 > FFMIN(SIZE_MAX / sizeof(*s->style_attributes), UINT16_MAX) ||
-!(tmp = av_fast_realloc(s->style_attributes,
-&s->style_attributes_bytes_allocated,
-(s->count + 1) * sizeof(*s->style_attributes)))) {
+ret = av_fast_realloc_array(&s->style_attributes, &s->style_attributes_allocated,
+s->count + 1, UINT16_MAX, sizeof(*s->style_attributes));
+if (ret < 0) {
 mov_text_cleanup(s);
 av_bprint_clear(&s->buffer);
 s->box_flags &= ~STYL_BOX;
-- 
2.34.1



[FFmpeg-devel] [PATCH 7/8] avutil/fifo: Simplify growing FIFO

2022-07-05 Thread Andreas Rheinhardt
In case the data in the FIFO currently wraps around,
move the data from the end of the old buffer to the end
of the new buffer instead of moving the data from the start
of the old buffer partially to the end of the new buffer
and partially to the start of the new buffer.
This simplifies the code.

Signed-off-by: Andreas Rheinhardt 
---
 libavutil/fifo.c | 15 +--
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/libavutil/fifo.c b/libavutil/fifo.c
index 51a5af6f39..53359a2112 100644
--- a/libavutil/fifo.c
+++ b/libavutil/fifo.c
@@ -108,17 +108,12 @@ int av_fifo_grow2(AVFifo *f, size_t inc)
 return AVERROR(ENOMEM);
 f->buffer = tmp;
 
-// move the data from the beginning of the ring buffer
-// to the newly allocated space
+// move the data from the end of the ring buffer
+// to the end of the newly allocated space
 if (f->offset_w <= f->offset_r && !f->is_empty) {
-const size_t copy = FFMIN(inc, f->offset_w);
-memcpy(tmp + f->nb_elems * f->elem_size, tmp, copy * f->elem_size);
-if (copy < f->offset_w) {
-memmove(tmp, tmp + copy * f->elem_size,
-(f->offset_w - copy) * f->elem_size);
-f->offset_w -= copy;
-} else
-f->offset_w = copy == inc ? 0 : f->nb_elems + copy;
+memmove(tmp + (f->offset_r + inc) * f->elem_size, tmp + f->offset_r * f->elem_size,
+(f->nb_elems - f->offset_r) * f->elem_size);
+f->offset_r += inc;
 }
 
 f->nb_elems += inc;
-- 
2.34.1



[FFmpeg-devel] [PATCH 8/8] avutil/fifo: Grow FIFO faster when growing automatically

2022-07-05 Thread Andreas Rheinhardt
Up until now, when the FIFO is grown automatically, it
would be resized by double the amount needed (if possible,
i.e. if compatible with the auto-grow limit).
This implies that if e.g. the user always writes a single
element to the FIFO, the FIFO will be reallocated once
for every two writes (presuming no reads happen in between).
This is potentially quadratic (depending upon the realloc
implementation).

This commit changes this by using av_fast_realloc_array
to realloc the buffer. Its ability to not overallocate
beyond a given size makes it possible to honour the
user-specified auto-grow limit.

Signed-off-by: Andreas Rheinhardt 
---
 libavutil/fifo.c | 35 +++
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/libavutil/fifo.c b/libavutil/fifo.c
index 53359a2112..04f4d057ca 100644
--- a/libavutil/fifo.c
+++ b/libavutil/fifo.c
@@ -96,6 +96,19 @@ size_t av_fifo_can_write(const AVFifo *f)
 return f->nb_elems - av_fifo_can_read(f);
 }
 
+static void fifo_readjust_after_growing(AVFifo *f, size_t old_size)
+{
+const size_t inc = f->nb_elems - old_size;
+// move the data from the end of the ring buffer
+// to the end of the newly allocated space
+if (f->offset_w <= f->offset_r && !f->is_empty) {
+memmove(f->buffer + (f->offset_r + inc) * f->elem_size,
+f->buffer + f->offset_r * f->elem_size,
+(old_size - f->offset_r) * f->elem_size);
+f->offset_r += inc;
+}
+}
+
 int av_fifo_grow2(AVFifo *f, size_t inc)
 {
 uint8_t *tmp;
@@ -107,16 +120,8 @@ int av_fifo_grow2(AVFifo *f, size_t inc)
 if (!tmp)
 return AVERROR(ENOMEM);
 f->buffer = tmp;
-
-// move the data from the end of the ring buffer
-// to the end of the newly allocated space
-if (f->offset_w <= f->offset_r && !f->is_empty) {
-memmove(tmp + (f->offset_r + inc) * f->elem_size, tmp + f->offset_r * f->elem_size,
-(f->nb_elems - f->offset_r) * f->elem_size);
-f->offset_r += inc;
-}
-
 f->nb_elems += inc;
+fifo_readjust_after_growing(f, f->nb_elems - inc);
 
 return 0;
 }
@@ -133,9 +138,15 @@ static int fifo_check_space(AVFifo *f, size_t to_write)
 can_grow = f->auto_grow_limit > f->nb_elems ?
f->auto_grow_limit - f->nb_elems : 0;
 if ((f->flags & AV_FIFO_FLAG_AUTO_GROW) && need_grow <= can_grow) {
-// allocate a bit more than necessary, if we can
-const size_t inc = (need_grow < can_grow / 2 ) ? need_grow * 2 : can_grow;
-return av_fifo_grow2(f, inc);
+// Use av_fast_realloc_array() to allocate in a fast way
+// while respecting the auto_grow_limit
+const size_t old_size = f->nb_elems;
+int ret = av_fast_realloc_array(&f->buffer, &f->nb_elems, f->nb_elems + need_grow,
+f->auto_grow_limit, f->elem_size);
+if (ret < 0)
+return ret;
+fifo_readjust_after_growing(f, old_size);
+return 0;
 }
 
 return AVERROR(ENOSPC);
-- 
2.34.1



Re: [FFmpeg-devel] [PATCH 14/18] avcodec/hevcdec: Don't allocate redundant HEVCContexts

2022-07-05 Thread Michael Niedermayer
On Sat, Jul 02, 2022 at 08:32:06AM +0200, Andreas Rheinhardt wrote:
> Michael Niedermayer:
> > On Fri, Jul 01, 2022 at 12:29:45AM +0200, Andreas Rheinhardt wrote:
> >> The HEVC decoder has both HEVCContext and HEVCLocalContext
> >> structures. The latter is supposed to be the structure
> >> containing the per-slicethread state.
> >>
> >> Yet up until now that is not how it is handled in practice:
> >> Each HEVCLocalContext has a unique HEVCContext allocated for it
> >> and each of these coincides except in exactly one field: The
> >> corresponding HEVCLocalContext. This makes it possible to pass
> >> the HEVCContext everywhere where logically a HEVCLocalContext
> >> should be used. And up until recently, this is how it has been done.
> >>
> >> Yet the preceding patches changed this, making it possible
> >> to avoid allocating redundant HEVCContexts.
> >>
> >> Signed-off-by: Andreas Rheinhardt 
> >> ---
> >>  libavcodec/hevcdec.c | 40 
> >>  libavcodec/hevcdec.h |  2 --
> >>  2 files changed, 16 insertions(+), 26 deletions(-)
> >>
> >> diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c
> >> index 9d1241f293..048fcc76b4 100644
> >> --- a/libavcodec/hevcdec.c
> >> +++ b/libavcodec/hevcdec.c
> >> @@ -2548,13 +2548,12 @@ static int hls_decode_entry_wpp(AVCodecContext *avctxt, void *hevc_lclist,
> >>  {
> >>  HEVCLocalContext *lc = ((HEVCLocalContext**)hevc_lclist)[self_id];
> >>  const HEVCContext *const s = lc->parent;
> >> -HEVCContext *s1  = avctxt->priv_data;
> >> -int ctb_size= 1<< s1->ps.sps->log2_ctb_size;
> >> +int ctb_size= 1 << s->ps.sps->log2_ctb_size;
> >>  int more_data   = 1;
> >>  int ctb_row = job;
> >> -int ctb_addr_rs = s1->sh.slice_ctb_addr_rs + ctb_row * ((s1->ps.sps->width + ctb_size - 1) >> s1->ps.sps->log2_ctb_size);
> >> -int ctb_addr_ts = s1->ps.pps->ctb_addr_rs_to_ts[ctb_addr_rs];
> >> -int thread = ctb_row % s1->threads_number;
> >> +int ctb_addr_rs = s->sh.slice_ctb_addr_rs + ctb_row * ((s->ps.sps->width + ctb_size - 1) >> s->ps.sps->log2_ctb_size);
> >> +int ctb_addr_ts = s->ps.pps->ctb_addr_rs_to_ts[ctb_addr_rs];
> >> +int thread = ctb_row % s->threads_number;
> >>  int ret;
> >>  
> >>  if(ctb_row) {
> >> @@ -2572,7 +2571,7 @@ static int hls_decode_entry_wpp(AVCodecContext *avctxt, void *hevc_lclist,
> >>  
> >> ff_thread_await_progress2(s->avctx, ctb_row, thread, SHIFT_CTB_WPP);
> >>  
> >> -if (atomic_load(&s1->wpp_err)) {
> >> +if (atomic_load(&s->wpp_err)) {
> >> ff_thread_report_progress2(s->avctx, ctb_row , thread, SHIFT_CTB_WPP);
> > 
> > the consts in "const HEVCContext *const " make clang version 6.0.0-1ubuntu2 
> > unhappy
> > (this was building shared libs)
> > 
> > 
> > CC  libavcodec/hevcdec.o
> > src/libavcodec/hevcdec.c:2574:13: error: address argument to atomic 
> > operation must be a pointer to non-const _Atomic type ('const atomic_int *' 
> > (aka 'const _Atomic(int) *') invalid)
> > if (atomic_load(&s->wpp_err)) {
> > ^   ~~~
> > /usr/lib/llvm-6.0/lib/clang/6.0.0/include/stdatomic.h:134:29: note: 
> > expanded from macro 'atomic_load'
> > #define atomic_load(object) __c11_atomic_load(object, __ATOMIC_SEQ_CST)
> > ^ ~~
> > 1 error generated.
> > src/ffbuild/common.mak:81: recipe for target 'libavcodec/hevcdec.o' failed
> > make: *** [libavcodec/hevcdec.o] Error 1
> > 
> > thx
> > 
> 
> Thanks for testing this. atomic_load is indeed declared without const in
> 7.17.7.2:
> 
> C atomic_load(volatile A *object);
> 
> Upon reflection this makes sense, because if atomics are implemented via
> mutexes, even a read may involve a preceding write. So I'll cast const
> away here, too, and add a comment. (It works when casting const away,
> doesn't it?)

This doesn't feel "right". These pointers should not be coming from a const
if they are written to.

The compiler accepts it with an explicit cast though. With an implicit cast
it produces a warning.

thx


[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

He who knows, does not speak. He who speaks, does not know. -- Lao Tsu




Re: [FFmpeg-devel] [PATCH 2/2] fftools/ffmpeg_filter: Fix audio_drift_threshold check

2022-07-05 Thread Michael Niedermayer
On Tue, Jul 05, 2022 at 06:58:28PM +0200, Anton Khirnov wrote:
> Quoting Michael Niedermayer (2022-07-02 14:56:50)
> > On Sat, Jul 02, 2022 at 10:38:26AM +0200, Anton Khirnov wrote:
> > > Quoting Michael Niedermayer (2022-07-01 21:53:00)
> > > > On Thu, Jun 30, 2022 at 10:55:46AM +0200, Anton Khirnov wrote:
> > > > > Variant 2 is less bad, but the whole check seems hacky to me, since it
> > > > > seems to make assumptions about swr defaults
> > > > > 
> > > > > Won't setting this unconditionally have the same effect?
> > > > 
> > > > it has the same effect but its not so nice to the user to recommand 
> > > > extra
> > > > arguments which make no difference
> > > 
> > > Sorry, I don't follow. What is recommending any arguments to the user
> > > here?
> > 
> > i meant this thing here:
> > ./ffmpeg   -i matrixbench_mpeg2.mpg -async 1 -f null -  2>&1 | grep async
> > 
> > -async is forwarded to lavfi similarly to -af 
> > aresample=async=1:min_hard_comp=0.10:first_pts=0.
> > 
> > vs.
> > 
> > -async is forwarded to lavfi similarly to -af aresample=async=1:first_pts=0.
> 
> I don't see a problem - why would the user care how it is forwarded?

The user may want to perform the equivalent operation inside a filter
chain/graph, with a tool different from ffmpeg, or even via the API.
It's really a very minor thing if these defaults are displayed too; it just
feels a bit cleaner to skip them.

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If you drop bombs on a foreign country and kill a hundred thousand
innocent people, expect your government to call the consequence
"unprovoked inhuman terrorist attacks" and use it to justify dropping
more bombs and killing more people. The technology changed, the idea is old.




Re: [FFmpeg-devel] [PATCH 2/8] avformat/flvenc: Add deinit function

2022-07-05 Thread Steven Liu
Andreas Rheinhardt  于2022年7月6日周三 04:27写道:
>
> Fixes memleaks when the trailer is never written or when shift_data()
> fails when writing the trailer.
>
> Signed-off-by: Andreas Rheinhardt 
> ---
> This is the same as
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20191023125944.10292-4-andreas.rheinha...@gmail.com/
>
>  libavformat/flvenc.c | 30 --
>  1 file changed, 16 insertions(+), 14 deletions(-)
>
> diff --git a/libavformat/flvenc.c b/libavformat/flvenc.c
> index 4e65ba4066..b6135b25bf 100644
> --- a/libavformat/flvenc.c
> +++ b/libavformat/flvenc.c
> @@ -733,7 +733,7 @@ static int flv_write_trailer(AVFormatContext *s)
>  int64_t cur_pos = avio_tell(s->pb);
>
>  if (build_keyframes_idx) {
> -FLVFileposition *newflv_posinfo, *p;
> +FLVFileposition *newflv_posinfo;
>
>  avio_seek(pb, flv->videosize_offset, SEEK_SET);
>  put_amf_double(pb, flv->videosize);
> @@ -768,19 +768,6 @@ static int flv_write_trailer(AVFormatContext *s)
>  put_amf_double(pb, newflv_posinfo->keyframe_timestamp);
>  }
>
> -newflv_posinfo = flv->head_filepositions;
> -while (newflv_posinfo) {
> -p = newflv_posinfo->next;
> -if (p) {
> -newflv_posinfo->next = p->next;
> -av_free(p);
> -p = NULL;
> -} else {
> -av_free(newflv_posinfo);
> -newflv_posinfo = NULL;
> -}
> -}
> -
>  put_amf_string(pb, "");
>  avio_w8(pb, AMF_END_OF_OBJECT);
>
> @@ -1047,6 +1034,20 @@ static int flv_check_bitstream(AVFormatContext *s, AVStream *st,
>  return ret;
>  }
>
> +static void flv_deinit(AVFormatContext *s)
> +{
> +FLVContext *flv = s->priv_data;
> +FLVFileposition *filepos = flv->head_filepositions;
> +
> +while (filepos) {
> +FLVFileposition *next = filepos->next;
> +av_free(filepos);
> +filepos = next;
> +}
> +flv->filepositions = flv->head_filepositions = NULL;
> +flv->filepositions_count = 0;
> +}
> +
>  static const AVOption options[] = {
>  { "flvflags", "FLV muxer flags", offsetof(FLVContext, flags), 
> AV_OPT_TYPE_FLAGS, {.i64 = 0}, INT_MIN, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM, 
> "flvflags" },
>  { "aac_seq_header_detect", "Put AAC sequence header based on stream 
> data", 0, AV_OPT_TYPE_CONST, {.i64 = FLV_AAC_SEQ_HEADER_DETECT}, INT_MIN, 
> INT_MAX, AV_OPT_FLAG_ENCODING_PARAM, "flvflags" },
> @@ -1076,6 +1077,7 @@ const AVOutputFormat ff_flv_muxer = {
>  .write_header   = flv_write_header,
>  .write_packet   = flv_write_packet,
>  .write_trailer  = flv_write_trailer,
> +.deinit = flv_deinit,
>  .check_bitstream= flv_check_bitstream,
>  .codec_tag  = (const AVCodecTag* const []) {
>flv_video_codec_ids, flv_audio_codec_ids, 0
> --
> 2.34.1
>

 The flvenc part of the patchset looks OK to me.


Thanks
Steven


Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread zhilizhao(赵志立)


> On Jul 5, 2022, at 10:33 PM, Gyan Doshi  wrote:
> 
> 
> 
> On 2022-07-05 07:05 pm, "zhilizhao(赵志立)" wrote:
>> 
>>> On Jul 5, 2022, at 8:07 PM, Gyan Doshi  wrote:
>>> 
>>> 
>>> 
>>> On 2022-07-05 01:20 pm, Zhao Zhili wrote:
 From: Zhao Zhili 
 
 Streams with all zero sample_delta in 'stts' have all zero dts.
They have a higher chance of being chosen by mov_find_next_sample(), which
leads to seeking again and again.
 
 For example, GoPro created a 'GoPro SOS' stream:
   Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
 Metadata:
   creation_time   : 2022-06-21T08:49:19.00Z
   handler_name: GoPro SOS
 
 With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
 blocks until all samples in 'GoPro SOS' stream are consumed first.
 
 Signed-off-by: Zhao Zhili 
 ---
  libavformat/mov.c | 14 ++
  1 file changed, 14 insertions(+)
 
 diff --git a/libavformat/mov.c b/libavformat/mov.c
 index 88669faa70..2a4eb79f27 100644
 --- a/libavformat/mov.c
 +++ b/libavformat/mov.c
@@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
  st->nb_frames= total_sample_count;
  if (duration)
  st->duration= FFMIN(st->duration, duration);
 +
+// All samples have zero duration. They have a higher chance of being chosen by
+// mov_find_next_sample(), which leads to seeking again and again.
+//
+// It's AVERROR_INVALIDDATA actually, but such files exist in the wild.
 +// So only mark data stream as discarded for safety.
 +if (!duration && sc->stts_count &&
 +st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
 +av_log(c->fc, AV_LOG_WARNING,
 +   "All samples in data stream index:id [%d:%d] have zero 
 duration, "
 +   "discard the stream\n",
 +   st->index, st->id);
 +st->discard = AVDISCARD_ALL;
 +}
  sc->track_end = duration;
  return 0;
  }
>>> So this will allow audio and video streams to be demuxed, but not data?  
>>> That distinction seems arbitrary.
>> Disabling audio/video streams may create regressions. That's unlikely for a
>> random, broken data stream.
>> 
>>> Print a warning and assign a duration to each sample. Either 1 or if not 
>>> zero/Inf, st->duration/st->nb_frames.
>> Setting sample_duration to 1 doesn't work; the DTS still falls far behind the other streams.
>> 
>> Setting sample_duration to st->duration/st->nb_frames works for me, but I
>> prefer the current strategy for the following reasons:
>> 
>> 1. AVDISCARD_ALL is closer to AVERROR_INVALIDDATA: it gives up instead of
>> trying a correction and hoping it works, which it may not, e.g. if
>> st->duration is broken, or the interleaving is bad even though we fixed
>> sample_duration.
> 
> It's not about hoping that it works.  It's about not preventing the user from 
> acquiring the stream payload.
> 
> Can you test if setting -discard:d none -i INPUT allows reading the stream 
> with your patch?

Yes it does allow reading the stream. ’stts’ box is parsed during
avformat_find_stream_info(), AVStream->discard flag can be modified
after that. The patch has no effect if user changed AVStream->discard
flag.

> 
> Regards,
> Gyan
> 



Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread Gyan Doshi



On 2022-07-06 08:23 am, "zhilizhao(赵志立)" wrote:



On Jul 5, 2022, at 10:33 PM, Gyan Doshi  wrote:



On 2022-07-05 07:05 pm, "zhilizhao(赵志立)" wrote:

On Jul 5, 2022, at 8:07 PM, Gyan Doshi  wrote:



On 2022-07-05 01:20 pm, Zhao Zhili wrote:

From: Zhao Zhili 

Streams with all zero sample_delta in 'stts' have all zero dts.
They have a higher chance of being chosen by mov_find_next_sample(), which
leads to seeking again and again.

For example, GoPro created a 'GoPro SOS' stream:
   Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
 Metadata:
   creation_time   : 2022-06-21T08:49:19.00Z
   handler_name: GoPro SOS

With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
blocks until all samples in 'GoPro SOS' stream are consumed first.

Signed-off-by: Zhao Zhili 
---
  libavformat/mov.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index 88669faa70..2a4eb79f27 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
  st->nb_frames= total_sample_count;
  if (duration)
  st->duration= FFMIN(st->duration, duration);
+
+// All samples have zero duration. They have a higher chance of being chosen by
+// mov_find_next_sample(), which leads to seeking again and again.
+//
+// It's AVERROR_INVALIDDATA actually, but such files exist in the wild.
+// So only mark data stream as discarded for safety.
+if (!duration && sc->stts_count &&
+st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
+av_log(c->fc, AV_LOG_WARNING,
+   "All samples in data stream index:id [%d:%d] have zero duration, 
"
+   "discard the stream\n",
+   st->index, st->id);
+st->discard = AVDISCARD_ALL;
+}
  sc->track_end = duration;
  return 0;
  }

So this will allow audio and video streams to be demuxed, but not data?  That 
distinction seems arbitrary.

Disabling audio/video streams may create regressions. That's unlikely for a
random, broken data stream.


Print a warning and assign a duration to each sample. Either 1 or if not zero/Inf, 
st->duration/st->nb_frames.

Setting sample_duration to 1 doesn't work; the DTS still falls far behind the other streams.

Setting sample_duration to st->duration/st->nb_frames works for me, but I
prefer the current strategy for the following reasons:

1. AVDISCARD_ALL is closer to AVERROR_INVALIDDATA: it gives up instead of
trying a correction and hoping it works, which it may not, e.g. if
st->duration is broken, or the interleaving is bad even though we fixed
sample_duration.

It's not about hoping that it works.  It's about not preventing the user from 
acquiring the stream payload.

Can you test if setting -discard:d none -i INPUT allows reading the stream with 
your patch?

Yes it does allow reading the stream. ’stts’ box is parsed during
avformat_find_stream_info(), AVStream->discard flag can be modified
after that. The patch has no effect if user changed AVStream->discard
flag.


What's the duration of the demuxed stream?

Regards,
Gyan


Re: [FFmpeg-devel] [PATCH v2 2/2] ffmpeg: add option -isync

2022-07-05 Thread Gyan Doshi



On 2022-07-05 10:54 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-05 19:10:33)


On 2022-07-05 09:45 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-04 10:20:22)

On 2022-07-04 11:51 am, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-02 11:51:53)

On 2022-07-02 02:12 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-07-01 13:03:04)

On 2022-07-01 03:33 pm, Anton Khirnov wrote:

Quoting Gyan Doshi (2022-06-25 10:29:51)

This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.

Typical use case is to sync two or more live inputs such as from capture
devices. Both the target and reference input source timestamps should be
based on the same clock source.
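The start-time adjustment the option describes can be sketched as follows (a toy model with hypothetical helper names, not the actual ffmpeg.c implementation): once both inputs are opened and their first timestamps are known, every target timestamp is offset so the two inputs share the reference input's timeline.

```python
# Sketch of the -isync idea: compute a per-input offset from the
# difference between the reference and target start timestamps,
# then apply it to every packet timestamp of the target input.
def isync_offset(target_start_ts, ref_start_ts):
    # Amount to add to each target timestamp so both inputs
    # line up on the reference input's timeline.
    return ref_start_ts - target_start_ts

# DAW music track starts at ts 1000, mic capture at ts 1350
# (both clocks derived from the same source).
off = isync_offset(1350, 1000)        # -> -350
mic_pts = [1350, 1370, 1390]
synced = [p + off for p in mic_pts]   # -> [1000, 1020, 1040]
```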

If both streams are using the same clock, then why is any extra
synchronization needed?

Because ffmpeg.c normalizes timestamps by default. We can keep
timestamps using -copyts, but these inputs are usually preprocessed
using single-input filters which won't have access to the reference
inputs,

No idea what you mean by "reference inputs" here.

The reference input is the one the target is being synced against. e.g.
in a karaoke session -  the music track from a DAW would be ref and the
user's voice via mic is the target.


or the merge filters like e.g. amix don't sync by timestamp.

amix does seem to look at timestamps.

amix does not *sync* by timestamp. If one input starts at 4 and the
other at 7, the 2nd isn't aligned by timestamp.
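The distinction can be illustrated with a toy model (hypothetical, not amix's actual code): pairing frames by arrival index mixes samples from different instants, whereas timestamp alignment only mixes the overlapping region (and would pad the late input with silence before it).

```python
# Two audio inputs on the same clock but with different start times.
a = [(t, "A") for t in range(4, 10)]   # starts at t=4
b = [(t, "B") for t in range(7, 13)]   # starts at t=7

# Index-based pairing (no timestamp sync): frame t=4 of input A is
# combined with frame t=7 of input B, and so on.
index_mix = list(zip(a, b))

# Timestamp-aligned mixing: only the overlap [7, 10) is mixed;
# before t=7, input B would need silence padding.
aligned = [(ta, ea, eb) for (ta, ea) in a for (tb, eb) in b if ta == tb]
```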

So maybe it should?

My concern generally with this patchset is that it seems like you're
changing things where it's easier to do rather than where it's correct.

There are many multi=input filters which may be used. amix is just one
example.

The basic 'deficiency' here is that filters operate upon frames and, for the
most part, only look at single frames, even though frames are part of
streams. These streams may have companion streams (which may be part of
programs) which are part of a single input. These inputs may have
companion inputs.  Anything in this tree may be relevant for a
particular operation as a reference, e.g. we have a bespoke filter
scale2ref so that we can look at another stream's frames. But we don't
have pad2ref, crop2ref, etc.  So the absolutely correct thing to do
would be to supply a global context to processing modules like
filtergraphs, maybe an array of dicts, containing attributes of all
inputs like starting timestamps, resolution, string metadata, etc. That
would obviate the need for these bespoke fields and even filters.

I don't see how the second paragraph relates to the first one. scale,
pad, or crop are not multi-input filters, so why are you comparing them

scale is a single-input filter but scale2ref is a multi-input filter
which is needed solely because there is no means at present to convey
info about other streams to a single-input filter.
Similarly, we would need a crop2ref, pad2ref, etc. to achieve the same
attribute transfer.  If we had a global context, these counterpart
filters wouldn't be necessary.

In my experience, global *anything* is almost always a sign of bad
design and only leads to pain and suffering. The proper solution in this
case would be making the filtergraph construction API more flexible.
Then the code that actually has all the necessary information (i.e.
ffmpeg.c or other library caller) would set the filter parameters
however you want. Then none of these whatever2ref hacks would be needed.


Some of the context data will be used by filters during runtime. So a 
flexible API could help during init but not afterwards. The context has 
to be accessible for the lifetime of the filters.


About this patch: the user can already add a custom ts offset to an 
input, but it has to be a pre-specified fixed constant. This patch allows 
the user to set one relative to another input, which can only be done in 
ffmpeg.c after all inputs have been opened.


Regards,
Gyan


[FFmpeg-devel] [avcodec/amfenc: 10 bit support 1/3] Update the min version to 1.4.23.0 for AMF SDK.

2022-07-05 Thread OvchinnikovDmitrii
---
 configure | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/configure b/configure
index fea512e8ef..29d60efad6 100755
--- a/configure
+++ b/configure
@@ -6988,7 +6988,7 @@ fi
 
 enabled amf &&
 check_cpp_condition amf "AMF/core/Version.h" \
-"(AMF_VERSION_MAJOR << 48 | AMF_VERSION_MINOR << 32 | AMF_VERSION_RELEASE << 16 | AMF_VERSION_BUILD_NUM) >= 0x000100040009"
+"(AMF_VERSION_MAJOR << 48 | AMF_VERSION_MINOR << 32 | AMF_VERSION_RELEASE << 16 | AMF_VERSION_BUILD_NUM) >= 0x000100040017"
 
 # Funny iconv installations are not unusual, so check it after all flags have been set
 if enabled libc_iconv; then
-- 
2.30.0.windows.2
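As a sanity check on the packed comparison (a sketch, assuming the standard packing used by the configure expression, with build number 0): the four version components occupy 16 bits each, and 0x17 is 23, so the new threshold corresponds to AMF SDK 1.4.23.

```python
# The configure expression packs the four AMF version components into
# one integer: MAJOR << 48 | MINOR << 32 | RELEASE << 16 | BUILD.
def pack_amf_version(major, minor, release, build=0):
    return major << 48 | minor << 32 | release << 16 | build

new_min = pack_amf_version(1, 4, 23)   # SDK 1.4.23 (0x17 == 23)
old_min = pack_amf_version(1, 4, 9)    # previous minimum, SDK 1.4.9
assert new_min > old_min
```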



[FFmpeg-devel] [avcodec/amfenc: 10 bit support 2/3] avcodec/amfenc: Fixes the color information in the output.

2022-07-05 Thread OvchinnikovDmitrii
From: Michael Fabian 'Xaymar' Dirks 


added 10 bit support for amf hevc.

before:

command - ffmpeg.exe -hide_banner -y -hwaccel d3d11va -hwaccel_output_format d3d11 -i test_10bit_file.mkv -an -c:v h264_amf res.dx11_hw_h264.mkv
output -  Format of input frames context (p010le) is not supported by AMF.
command - ffmpeg.exe -hide_banner -y -hwaccel d3d11va -hwaccel_output_format d3d11 -i test_10bit_file -an -c:v hevc_amf res.dx11_hw_hevc.mkv
output -  Format of input frames context (p010le) is not supported by AMF.

after:

command - ffmpeg.exe -hide_banner -y -hwaccel d3d11va -hwaccel_output_format d3d11 -i test_10bit_file -an -c:v h264_amf res.dx11_hw_h264.mkv
output -  8bit file
command - ffmpeg.exe -hide_banner -y -hwaccel d3d11va -hwaccel_output_format d3d11 -i test_10bit_file -an -c:v hevc_amf res.dx11_hw_hevc.mkv
output -  10bit file

---
 libavcodec/amfenc.c  |  1 +
 libavcodec/amfenc.h  |  1 +
 libavcodec/amfenc_h264.c | 46 +++-
 libavcodec/amfenc_hevc.c | 57 ++--
 4 files changed, 102 insertions(+), 3 deletions(-)

diff --git a/libavcodec/amfenc.c b/libavcodec/amfenc.c
index a033e1220e..5c38a29df1 100644
--- a/libavcodec/amfenc.c
+++ b/libavcodec/amfenc.c
@@ -72,6 +72,7 @@ static const FormatMap format_map[] =
 {
 { AV_PIX_FMT_NONE,   AMF_SURFACE_UNKNOWN },
 { AV_PIX_FMT_NV12,   AMF_SURFACE_NV12 },
+{ AV_PIX_FMT_P010,   AMF_SURFACE_P010 },
 { AV_PIX_FMT_BGR0,   AMF_SURFACE_BGRA },
 { AV_PIX_FMT_RGB0,   AMF_SURFACE_RGBA },
 { AV_PIX_FMT_GRAY8,  AMF_SURFACE_GRAY8 },
diff --git a/libavcodec/amfenc.h b/libavcodec/amfenc.h
index 1ab98d2f78..c5ee9d7e35 100644
--- a/libavcodec/amfenc.h
+++ b/libavcodec/amfenc.h
@@ -21,6 +21,7 @@
 
 #include 
 
+#include 
 #include 
 #include 
 
diff --git a/libavcodec/amfenc_h264.c b/libavcodec/amfenc_h264.c
index efb04589f6..098619f8e1 100644
--- a/libavcodec/amfenc_h264.c
+++ b/libavcodec/amfenc_h264.c
@@ -138,6 +138,9 @@ static av_cold int amf_encode_init_h264(AVCodecContext *avctx)
 AMFRate  framerate;
 AMFSize  framesize = AMFConstructSize(avctx->width, avctx->height);
 int  deblocking_filter = (avctx->flags & AV_CODEC_FLAG_LOOP_FILTER) ? 1 : 0;
+amf_int64color_depth;
+amf_int64color_profile;
+enum AVPixelFormat pix_fmt;
 
 if (avctx->framerate.num > 0 && avctx->framerate.den > 0) {
 framerate = AMFConstructRate(avctx->framerate.num, avctx->framerate.den);
@@ -195,10 +198,51 @@ static av_cold int amf_encode_init_h264(AVCodecContext *avctx)
 AMF_ASSIGN_PROPERTY_RATIO(res, ctx->encoder, AMF_VIDEO_ENCODER_ASPECT_RATIO, ratio);
 }
 
-/// Color Range (Partial/TV/MPEG or Full/PC/JPEG)
+// Color Metadata
+/// Color Range (Support for older Drivers)
 if (avctx->color_range == AVCOL_RANGE_JPEG) {
 AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_FULL_RANGE_COLOR, 1);
+} else {
+AMF_ASSIGN_PROPERTY_BOOL(res, ctx->encoder, AMF_VIDEO_ENCODER_FULL_RANGE_COLOR, 0);
+}
+/// Color Space & Depth
+pix_fmt = avctx->hw_frames_ctx ? ((AVHWFramesContext*)avctx->hw_frames_ctx->data)->sw_format
+: avctx->pix_fmt;
+color_depth = AMF_COLOR_BIT_DEPTH_8;
+if (pix_fmt == AV_PIX_FMT_P010) {
+color_depth = AMF_COLOR_BIT_DEPTH_10;
+}
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_UNKNOWN;
+switch (avctx->colorspace) {
+case AVCOL_SPC_SMPTE170M:
+if (avctx->color_range == AVCOL_RANGE_JPEG) {
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_601;
+} else {
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_601;
+}
+break;
+case AVCOL_SPC_BT709:
+if (avctx->color_range == AVCOL_RANGE_JPEG) {
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_709;
+} else {
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_709;
+}
+break;
+case AVCOL_SPC_BT2020_NCL:
+case AVCOL_SPC_BT2020_CL:
+if (avctx->color_range == AVCOL_RANGE_JPEG) {
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_FULL_2020;
+} else {
+color_profile = AMF_VIDEO_CONVERTER_COLOR_PROFILE_2020;
+}
+break;
 }
+AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_COLOR_BIT_DEPTH, color_depth);
+AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_OUTPUT_COLOR_PROFILE, color_profile);
+/// Color Transfer Characteristics (AMF matches ISO/IEC)
+AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_VIDEO_ENCODER_OUTPUT_TRANSFER_CHARACTERISTIC, (amf_int64)avctx->color_trc);
+/// Color Primaries (AMF matches ISO/IEC)
+AMF_ASSIGN_PROPERTY_INT64(res, ctx->encoder, AMF_V

[FFmpeg-devel] [avcodec/amfenc: 10 bit support 3/3] avcodec/amfenc: HDR metadata.

2022-07-05 Thread OvchinnikovDmitrii
From: nyanmisaka 

---
 libavcodec/amfenc.c | 83 +
 1 file changed, 83 insertions(+)

diff --git a/libavcodec/amfenc.c b/libavcodec/amfenc.c
index 5c38a29df1..d3edbb04b6 100644
--- a/libavcodec/amfenc.c
+++ b/libavcodec/amfenc.c
@@ -36,6 +36,57 @@
 #include "amfenc.h"
 #include "encode.h"
 #include "internal.h"
+#include "libavutil/mastering_display_metadata.h"
+
+static int amf_save_hdr_metadata(AVCodecContext *avctx, const AVFrame *frame, 
AMFHDRMetadata *hdrmeta)
+{
+AVFrameSideData*sd_display;
+AVFrameSideData*sd_light;
+AVMasteringDisplayMetadata *display_meta;
+AVContentLightMetadata *light_meta;
+
+sd_display = av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
+if (sd_display) {
+display_meta = (AVMasteringDisplayMetadata *)sd_display->data;
+if (display_meta->has_luminance) {
+const unsigned int luma_den = 10000;
+hdrmeta->maxMasteringLuminance =
+(amf_uint32)(luma_den * av_q2d(display_meta->max_luminance));
+hdrmeta->minMasteringLuminance =
+FFMIN((amf_uint32)(luma_den * av_q2d(display_meta->min_luminance)), hdrmeta->maxMasteringLuminance);
+}
+if (display_meta->has_primaries) {
+const unsigned int chroma_den = 50000;
+hdrmeta->redPrimary[0] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->display_primaries[0][0])), chroma_den);
+hdrmeta->redPrimary[1] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->display_primaries[0][1])), chroma_den);
+hdrmeta->greenPrimary[0] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->display_primaries[1][0])), chroma_den);
+hdrmeta->greenPrimary[1] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->display_primaries[1][1])), chroma_den);
+hdrmeta->bluePrimary[0] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->display_primaries[2][0])), chroma_den);
+hdrmeta->bluePrimary[1] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->display_primaries[2][1])), chroma_den);
+hdrmeta->whitePoint[0] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->white_point[0])), chroma_den);
+hdrmeta->whitePoint[1] =
+FFMIN((amf_uint16)(chroma_den * av_q2d(display_meta->white_point[1])), chroma_den);
+}
+
+sd_light = av_frame_get_side_data(frame, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
+if (sd_light) {
+light_meta = (AVContentLightMetadata *)sd_light->data;
+if (light_meta) {
+hdrmeta->maxContentLightLevel = (amf_uint16)light_meta->MaxCLL;
+hdrmeta->maxFrameAverageLightLevel = (amf_uint16)light_meta->MaxFALL;
+}
+}
+return 0;
+}
+return 1;
+}
 
 #if CONFIG_D3D11VA
 #include 
@@ -672,6 +723,26 @@ int ff_amf_receive_packet(AVCodecContext *avctx, AVPacket *avpkt)
 frame_ref_storage_buffer->pVtbl->Release(frame_ref_storage_buffer);
 }
 
+// HDR10 metadata
+if (frame->color_trc == AVCOL_TRC_SMPTE2084) {
+AMFBuffer * hdrmeta_buffer = NULL;
+res = ctx->context->pVtbl->AllocBuffer(ctx->context, AMF_MEMORY_HOST, sizeof(AMFHDRMetadata), &hdrmeta_buffer);
+if (res == AMF_OK) {
+AMFHDRMetadata * hdrmeta = (AMFHDRMetadata*)hdrmeta_buffer->pVtbl->GetNative(hdrmeta_buffer);
+if (amf_save_hdr_metadata(avctx, frame, hdrmeta) == 0) {
+switch (avctx->codec->id) {
+case AV_CODEC_ID_H264:
+AMF_ASSIGN_PROPERTY_INTERFACE(res, ctx->encoder, AMF_VIDEO_ENCODER_INPUT_HDR_METADATA, hdrmeta_buffer); break;
+case AV_CODEC_ID_HEVC:
+AMF_ASSIGN_PROPERTY_INTERFACE(res, ctx->encoder, AMF_VIDEO_ENCODER_HEVC_INPUT_HDR_METADATA, hdrmeta_buffer); break;
+}
+res = amf_set_property_buffer(surface, L"av_frame_hdrmeta", hdrmeta_buffer);
+AMF_RETURN_IF_FALSE(avctx, res == AMF_OK, AVERROR_UNKNOWN, "SetProperty failed for \"av_frame_hdrmeta\" with error %d\n", res);
+}
+hdrmeta_buffer->pVtbl->Release(hdrmeta_buffer);
+}
+}
+
 surface->pVtbl->SetPts(surface, frame->pts);
 AMF_ASSIGN_PROPERTY_INT64(res, surface, PTS_PROP, frame->pts);
 
@@ -730,6 +801,18 @@ int ff_amf_receive_packet(AVCodecContext *avctx, AVPacket *avpkt)
 AMF_RETURN_IF_FALSE(ctx, ret >= 0, ret, "amf_copy_buffer() failed with error %d\n", ret);
 
 if (ctx->delayed_surface != NULL) { // try to resubmit frame
+if (ctx->delayed_surface->pVtbl->HasProperty(ctx->delayed_surface, L"av
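The fixed-point conversion the patch performs can be sketched as follows (a toy model, not the patch's C code): HDR10 static metadata conventionally stores chromaticity coordinates in units of 0.00002 (denominator 50000) and mastering luminance in units of 0.0001 cd/m² (denominator 10000), per SMPTE ST 2086 / CTA-861, and the patch clamps the scaled primaries with FFMIN().

```python
# Sketch of converting rational mastering-display side data into the
# integer fields of an HDR10 metadata struct.
from fractions import Fraction

CHROMA_DEN = 50000   # chromaticity unit: 0.00002
LUMA_DEN = 10000     # luminance unit: 0.0001 cd/m^2

def pack_primary(coord: Fraction) -> int:
    # Scale and clamp to the representable range, like FFMIN(..., chroma_den).
    return min(int(CHROMA_DEN * coord), CHROMA_DEN)

# BT.2020 red primary x = 0.708 -> 35400 in fixed point.
red_x = pack_primary(Fraction(708, 1000))
# 1000-nit mastering peak -> 10000000 in fixed point.
max_lum = int(LUMA_DEN * Fraction(1000))
```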

Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread zhilizhao(赵志立)


> On Jul 6, 2022, at 12:09 PM, Gyan Doshi  wrote:
> 
> 
> 
> On 2022-07-06 08:23 am, "zhilizhao(赵志立)" wrote:
>> 
>>> On Jul 5, 2022, at 10:33 PM, Gyan Doshi  wrote:
>>> 
>>> 
>>> 
>>> On 2022-07-05 07:05 pm, "zhilizhao(赵志立)" wrote:
> On Jul 5, 2022, at 8:07 PM, Gyan Doshi  wrote:
> 
> 
> 
> On 2022-07-05 01:20 pm, Zhao Zhili wrote:
>> From: Zhao Zhili 
>> 
>> Streams with all zero sample_delta in 'stts' have all zero dts.
>> They have higher chance be chose by mov_find_next_sample(), which
>> leads to seek again and again.
>> 
>> For example, GoPro created a 'GoPro SOS' stream:
>>   Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s 
>> (default)
>> Metadata:
>>   creation_time   : 2022-06-21T08:49:19.00Z
>>   handler_name: GoPro SOS
>> 
>> With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
>> blocks until all samples in 'GoPro SOS' stream are consumed first.
>> 
>> Signed-off-by: Zhao Zhili 
>> ---
>>  libavformat/mov.c | 14 ++
>>  1 file changed, 14 insertions(+)
>> 
>> diff --git a/libavformat/mov.c b/libavformat/mov.c
>> index 88669faa70..2a4eb79f27 100644
>> --- a/libavformat/mov.c
>> +++ b/libavformat/mov.c
>> @@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, 
>> AVIOContext *pb, MOVAtom atom)
>>  st->nb_frames= total_sample_count;
>>  if (duration)
>>  st->duration= FFMIN(st->duration, duration);
>> +
>> +// All samples have zero duration. They have higher chance be chose 
>> by
>> +// mov_find_next_sample, which leads to seek again and again.
>> +//
>> +// It's AVERROR_INVALIDDATA actually, but such files exist in the 
>> wild.
>> +// So only mark data stream as discarded for safety.
>> +if (!duration && sc->stts_count &&
>> +st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
>> +av_log(c->fc, AV_LOG_WARNING,
>> +   "All samples in data stream index:id [%d:%d] have zero 
>> duration, "
>> +   "discard the stream\n",
>> +   st->index, st->id);
>> +st->discard = AVDISCARD_ALL;
>> +}
>>  sc->track_end = duration;
>>  return 0;
>>  }
> So this will allow audio and video streams to be demuxed, but not data?  
> That distinction seems arbitrary.
 Disable audio/video streams may create regression. It’s unlikely for random
 and broken data stream.
 
> Print a warning and assign a duration to each sample. Either 1 or if not 
> zero/Inf, st->duration/st->nb_frames.
 Set sample_duration to 1 doesn’t work. Dts still far behind other streams.
 
 Set sample_duration st->duration/st->nb_frames works for me, but I prefer
 current strategy for the following reasons:
 
 1. AVDISCARD_ALL is more close to AVERROR_INVALIDDATA by giving up instead
 of trying correction and hope it works, which may not, e.g., st->duration
 is broken, or bad interleave even though we fixed sample_duration.
>>> It's not about hoping that it works.  It's about not preventing the user 
>>> from acquiring the stream payload.
>>> 
>>> Can you test if setting -discard:d none -i INPUT allows reading the stream 
>>> with your patch?
>> Yes it does allow reading the stream. ’stts’ box is parsed during
>> avformat_find_stream_info(), AVStream->discard flag can be modified
>> after that. The patch has no effect if user changed AVStream->discard
>> flag.
> 
> What's the duration of the demuxed stream?

The demuxed data track has correct duration since there is a check
```
if (duration)
st->duration= FFMIN(st->duration, duration);
```
st->duration comes from ‘mdhd’ and is not overwritten in this case.

Every packet has zero as timestamp, as expected:

./ffmpeg -debug_ts  -discard:d none -i ~/tmp/gopro.mp4 -map 0:4 -c copy 
-copy_unknown -f data /tmp/test

demuxer -> ist_index:4 type:data next_dts:0 next_dts_time:0 next_pts:0 
next_pts_time:0 pkt_pts:0 pkt_pts_time:0 pkt_dts:0 pkt_dts_time:0 duration:0 
duration_time:0 off:0 off_time:0
demuxer+ffmpeg -> ist_index:4 type:data pkt_pts:0 pkt_pts_time:0 pkt_dts:0 
pkt_dts_time:0 duration:0 duration_time:0 off:0 off_time:0
muxer <- type:data pkt_pts:0 pkt_pts_time:0 pkt_dts:0 pkt_dts_time:0 duration:0 
duration_time:0 size:16

> 
> Regards,
> Gyan

Re: [FFmpeg-devel] [PATCH] avformat/mov: discard data streams with all zero sample_delta

2022-07-05 Thread Gyan Doshi



On 2022-07-06 10:54 am, "zhilizhao(赵志立)" wrote:



On Jul 6, 2022, at 12:09 PM, Gyan Doshi  wrote:



On 2022-07-06 08:23 am, "zhilizhao(赵志立)" wrote:

On Jul 5, 2022, at 10:33 PM, Gyan Doshi  wrote:



On 2022-07-05 07:05 pm, "zhilizhao(赵志立)" wrote:

On Jul 5, 2022, at 8:07 PM, Gyan Doshi  wrote:



On 2022-07-05 01:20 pm, Zhao Zhili wrote:

From: Zhao Zhili 

Streams with all zero sample_delta in 'stts' have all zero dts.
They have higher chance be chose by mov_find_next_sample(), which
leads to seek again and again.

For example, GoPro created a 'GoPro SOS' stream:
   Stream #0:4[0x5](eng): Data: none (fdsc / 0x63736466), 13 kb/s (default)
 Metadata:
   creation_time   : 2022-06-21T08:49:19.00Z
   handler_name: GoPro SOS

With 'ffprobe -show_frames http://example.com/gopro.mp4', ffprobe
blocks until all samples in 'GoPro SOS' stream are consumed first.

Signed-off-by: Zhao Zhili 
---
  libavformat/mov.c | 14 ++
  1 file changed, 14 insertions(+)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index 88669faa70..2a4eb79f27 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -3062,6 +3062,20 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, 
MOVAtom atom)
  st->nb_frames= total_sample_count;
  if (duration)
  st->duration= FFMIN(st->duration, duration);
+
+// All samples have zero duration. They have higher chance be chose by
+// mov_find_next_sample, which leads to seek again and again.
+//
+// It's AVERROR_INVALIDDATA actually, but such files exist in the wild.
+// So only mark data stream as discarded for safety.
+if (!duration && sc->stts_count &&
+st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
+av_log(c->fc, AV_LOG_WARNING,
+   "All samples in data stream index:id [%d:%d] have zero duration, 
"
+   "discard the stream\n",
+   st->index, st->id);
+st->discard = AVDISCARD_ALL;
+}
  sc->track_end = duration;
  return 0;
  }

So this will allow audio and video streams to be demuxed, but not data?  That 
distinction seems arbitrary.

Disable audio/video streams may create regression. It’s unlikely for random
and broken data stream.


Print a warning and assign a duration to each sample. Either 1 or if not zero/Inf, 
st->duration/st->nb_frames.

Set sample_duration to 1 doesn’t work. Dts still far behind other streams.

Set sample_duration st->duration/st->nb_frames works for me, but I prefer
current strategy for the following reasons:

1. AVDISCARD_ALL is more close to AVERROR_INVALIDDATA by giving up instead
of trying correction and hope it works, which may not, e.g., st->duration
is broken, or bad interleave even though we fixed sample_duration.

It's not about hoping that it works.  It's about not preventing the user from 
acquiring the stream payload.

Can you test if setting -discard:d none -i INPUT allows reading the stream with 
your patch?

Yes it does allow reading the stream. ’stts’ box is parsed during
avformat_find_stream_info(), AVStream->discard flag can be modified
after that. The patch has no effect if user changed AVStream->discard
flag.

What's the duration of the demuxed stream?

The demuxed data track has correct duration since there is a check
```
 if (duration)
 st->duration= FFMIN(st->duration, duration);
```
st->duration comes from ‘mdhd’ and not overwrite in this case.

Every packet has zero as timestamp, as expected:

./ffmpeg -debug_ts  -discard:d none -i ~/tmp/gopro.mp4 -map 0:4 -c copy 
-copy_unknown -f data /tmp/test

demuxer -> ist_index:4 type:data next_dts:0 next_dts_time:0 next_pts:0 
next_pts_time:0 pkt_pts:0 pkt_pts_time:0 pkt_dts:0 pkt_dts_time:0 duration:0 
duration_time:0 off:0 off_time:0
demuxer+ffmpeg -> ist_index:4 type:data pkt_pts:0 pkt_pts_time:0 pkt_dts:0 
pkt_dts_time:0 duration:0 duration_time:0 off:0 off_time:0
muxer <- type:data pkt_pts:0 pkt_pts_time:0 pkt_dts:0 pkt_dts_time:0 duration:0 
duration_time:0 size:16


Ok, change the log from

"discard the stream"

to

"stream set to be discarded. Override using -discard or AVStream->discard"

Regards,
Gyan


Re: [FFmpeg-devel] [PATCH 2/2] fftools/ffmpeg_filter: Fix audio_drift_threshold check

2022-07-05 Thread Anton Khirnov
Quoting Michael Niedermayer (2022-07-06 00:36:25)
> On Tue, Jul 05, 2022 at 06:58:28PM +0200, Anton Khirnov wrote:
> > Quoting Michael Niedermayer (2022-07-02 14:56:50)
> > > On Sat, Jul 02, 2022 at 10:38:26AM +0200, Anton Khirnov wrote:
> > > > Quoting Michael Niedermayer (2022-07-01 21:53:00)
> > > > > On Thu, Jun 30, 2022 at 10:55:46AM +0200, Anton Khirnov wrote:
> > > > > > Variant 2 is less bad, but the whole check seems hacky to me, since 
> > > > > > it
> > > > > > seems to make assumptions about swr defaults
> > > > > > 
> > > > > > Won't setting this unconditionally have the same effect?
> > > > > 
> > > > > it has the same effect but its not so nice to the user to recommand 
> > > > > extra
> > > > > arguments which make no difference
> > > > 
> > > > Sorry, I don't follow. What is recommending any arguments to the user
> > > > here?
> > > 
> > > i meant this thing here:
> > > ./ffmpeg   -i matrixbench_mpeg2.mpg -async 1 -f null -  2>&1 | grep async
> > > 
> > > -async is forwarded to lavfi similarly to -af 
> > > aresample=async=1:min_hard_comp=0.10:first_pts=0.
> > > 
> > > vs.
> > > 
> > > -async is forwarded to lavfi similarly to -af 
> > > aresample=async=1:first_pts=0.
> > 
> > I don't see a problem - why would the user care how it is forwarded?
> 
> The user may want to perform the equivalent operation inside a filter
> chain/graph, or with a tool different from ffmpeg, or even via the API.
> It's really a very minor thing if these defaults are displayed too; it
> just feels a bit cleaner to skip them.

Then the correct thing to do is retrieve the swr default value via the
AVOption API. Though IMO it's extra complexity for marginal gain.

I also wonder if this option should exist at all, given that it does
nothing but set a swr option. It is also global, so you can't have
different values for different streams.

-- 
Anton Khirnov