[FFmpeg-devel] [PATCH] Fix incorrect enum type used for a local variable

2024-10-24 Thread Eugene Zemtsov via ffmpeg-devel
From: Eugene Zemtsov 

It's AVPacketSideDataType, not AVFrameSideDataType.

Bug: https://issues.chromium.org/issues/374797732
Change-Id: If75702c6d639ca63827cc3370477de00544d3c0f
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/third_party/ffmpeg/+/5950926
Reviewed-by: Ted (Chromium) Meyer 
---
 libavcodec/decode.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavcodec/decode.c b/libavcodec/decode.c
index c331bb8596..fbb6be2971 100644
--- a/libavcodec/decode.c
+++ b/libavcodec/decode.c
@@ -1466,7 +1466,7 @@ static int side_data_map(AVFrame *dst,
 
 {
 for (int i = 0; map[i].packet < AV_PKT_DATA_NB; i++) {
-const enum AVFrameSideDataType type_pkt   = map[i].packet;
+const enum AVPacketSideDataType type_pkt   = map[i].packet;
 const enum AVFrameSideDataType type_frame = map[i].frame;
 const AVPacketSideData *sd_pkt;
 AVFrameSideData *sd_frame;
-- 
2.47.0.163.g1226f6d8fa-goog
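
As a sketch of why the wrong type slipped through: C enum types convert to each other implicitly, so initializing a variable of one enum type with a value of another compiles without error, and only opt-in diagnostics such as clang's -Wassign-enum flag it. The enum names below are mocks for illustration, not the real FFmpeg definitions.

```c
/* Mock enums standing in for AVPacketSideDataType / AVFrameSideDataType. */
enum PktSideDataType   { PKT_DATA_PALETTE, PKT_DATA_NB };
enum FrameSideDataType { FRAME_DATA_PANSCAN, FRAME_DATA_NB };

/* Before the fix: wrong enum type for the variable. This compiles silently
 * because the enumerator is just an int, and behaves identically at runtime. */
static int first_pkt_type_wrong(void)
{
    const enum FrameSideDataType t = PKT_DATA_PALETTE; /* silent conversion */
    return (int)t;
}

/* After the fix: the declared type matches the values actually stored. */
static int first_pkt_type_right(void)
{
    const enum PktSideDataType t = PKT_DATA_PALETTE;
    return (int)t;
}
```

The fix is therefore purely cosmetic at the machine level, but it keeps static analyzers and readers from being misled about which value space the variable holds.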

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] jpeg2000dec: reset in_tile_headers flag between frames

2024-10-24 Thread WATANABE Osamu
It looks good to me. Thank you for catching this.
FYI - The patch has been confirmed to pass all the test cases defined in 
ISO/IEC 15444-4.

> On Oct 25, 2024, at 9:28, Pierre-Anthony Lemieux  wrote:
> 
> Fixes https://trac.ffmpeg.org/ticket/11266
> 
> On Thu, Oct 24, 2024 at 5:23 PM  wrote:
>> 
>> From: Pierre-Anthony Lemieux 
>> 
>> ---
>> libavcodec/jpeg2000dec.c | 19 ++-
>> libavcodec/jpeg2000dec.h |  1 -
>> 2 files changed, 10 insertions(+), 10 deletions(-)
>> 
>> diff --git a/libavcodec/jpeg2000dec.c b/libavcodec/jpeg2000dec.c
>> index 2e09b279dc..5b05ff2455 100644
>> --- a/libavcodec/jpeg2000dec.c
>> +++ b/libavcodec/jpeg2000dec.c
>> @@ -2402,6 +2402,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> Jpeg2000QuantStyle *qntsty  = s->qntsty;
>> Jpeg2000POC *poc= &s->poc;
>> uint8_t *properties = s->properties;
>> +uint8_t in_tile_headers = 0;
>> 
>> for (;;) {
>> int len, ret = 0;
>> @@ -2484,7 +2485,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> ret = get_cap(s, codsty);
>> break;
>> case JPEG2000_COC:
>> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> av_log(s->avctx, AV_LOG_ERROR,
>> "COC marker found in a tile header but the codestream 
>> belongs to the HOMOGENEOUS set\n");
>> return AVERROR_INVALIDDATA;
>> @@ -2492,7 +2493,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> ret = get_coc(s, codsty, properties);
>> break;
>> case JPEG2000_COD:
>> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> av_log(s->avctx, AV_LOG_ERROR,
>> "COD marker found in a tile header but the codestream 
>> belongs to the HOMOGENEOUS set\n");
>> return AVERROR_INVALIDDATA;
>> @@ -2500,7 +2501,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> ret = get_cod(s, codsty, properties);
>> break;
>> case JPEG2000_RGN:
>> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> av_log(s->avctx, AV_LOG_ERROR,
>> "RGN marker found in a tile header but the codestream 
>> belongs to the HOMOGENEOUS set\n");
>> return AVERROR_INVALIDDATA;
>> @@ -2512,7 +2513,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> }
>> break;
>> case JPEG2000_QCC:
>> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> av_log(s->avctx, AV_LOG_ERROR,
>> "QCC marker found in a tile header but the codestream 
>> belongs to the HOMOGENEOUS set\n");
>> return AVERROR_INVALIDDATA;
>> @@ -2520,7 +2521,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> ret = get_qcc(s, len, qntsty, properties);
>> break;
>> case JPEG2000_QCD:
>> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> av_log(s->avctx, AV_LOG_ERROR,
>> "QCD marker found in a tile header but the codestream 
>> belongs to the HOMOGENEOUS set\n");
>> return AVERROR_INVALIDDATA;
>> @@ -2528,7 +2529,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> ret = get_qcd(s, len, qntsty, properties);
>> break;
>> case JPEG2000_POC:
>> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>> av_log(s->avctx, AV_LOG_ERROR,
>> "POC marker found in a tile header but the codestream 
>> belongs to the HOMOGENEOUS set\n");
>> return AVERROR_INVALIDDATA;
>> @@ -2536,8 +2537,8 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> ret = get_poc(s, len, poc);
>> break;
>> case JPEG2000_SOT:
>> -if (!s->in_tile_headers) {
>> -s->in_tile_headers = 1;
>> +if (!in_tile_headers) {
>> +in_tile_headers = 1;
>> if (s->has_ppm) {
>> bytestream2_init(&s->packed_headers_stream, 
>> s->packed_headers, s->packed_headers_size);
>> }
>> @@ -2569,7 +2570,7 @@ static int 
>> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>> break;

[FFmpeg-devel] [PATCH v4 1/3] tests/fate-run: add support for specifying the final encode muxer in `transcode`

2024-10-24 Thread Jan Ekström
From: Jan Ekström 

This allows for direct dumping of the packets' contents (useful for
text-based formats), while getting the timestamps/sizes etc. from
ffprobe.

If used via TRANSCODE, the muxer that is actually utilized should be
added within the last argument as an additional dependency, as that is
not done automatically.

Signed-off-by: Jan Ekström 
---
 tests/fate-run.sh | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tests/fate-run.sh b/tests/fate-run.sh
index f8d67de25a..9a47830464 100755
--- a/tests/fate-run.sh
+++ b/tests/fate-run.sh
@@ -257,7 +257,9 @@ transcode(){
 additional_input=$7
 final_decode=$8
 enc_opt_in=$9
+final_encode_muxer="${10}"
 test -z "$additional_input" || additional_input="$DEC_OPTS 
$additional_input"
+test -z "$final_encode_muxer" && final_encode_muxer="framecrc"
 encfile="${outdir}/${test}.${enc_fmt}"
 test $keep -ge 1 || cleanfiles="$cleanfiles $encfile"
 tsrcfile=$(target_path $srcfile)
@@ -267,7 +269,7 @@ transcode(){
 do_md5sum $encfile
 echo $(wc -c $encfile)
 ffmpeg $DEC_OPTS $final_decode -i $tencfile $ENC_OPTS $FLAGS $final_encode 
\
--f framecrc - || return
+-f $final_encode_muxer - || return
 test -z "$ffprobe_opts" || \
 run ffprobe${PROGSUF}${EXECSUF} -bitexact $ffprobe_opts $tencfile || 
return
 }
-- 
2.47.0



Re: [FFmpeg-devel] [PATCH] jpeg2000dec: reset in_tile_headers flag between frames

2024-10-24 Thread Pierre-Anthony Lemieux
Fixes https://trac.ffmpeg.org/ticket/11266

On Thu, Oct 24, 2024 at 5:23 PM  wrote:
>
> From: Pierre-Anthony Lemieux 
>
> ---
>  libavcodec/jpeg2000dec.c | 19 ++-
>  libavcodec/jpeg2000dec.h |  1 -
>  2 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/libavcodec/jpeg2000dec.c b/libavcodec/jpeg2000dec.c
> index 2e09b279dc..5b05ff2455 100644
> --- a/libavcodec/jpeg2000dec.c
> +++ b/libavcodec/jpeg2000dec.c
> @@ -2402,6 +2402,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  Jpeg2000QuantStyle *qntsty  = s->qntsty;
>  Jpeg2000POC *poc= &s->poc;
>  uint8_t *properties = s->properties;
> +uint8_t in_tile_headers = 0;
>
>  for (;;) {
>  int len, ret = 0;
> @@ -2484,7 +2485,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  ret = get_cap(s, codsty);
>  break;
>  case JPEG2000_COC:
> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>  av_log(s->avctx, AV_LOG_ERROR,
>  "COC marker found in a tile header but the codestream 
> belongs to the HOMOGENEOUS set\n");
>  return AVERROR_INVALIDDATA;
> @@ -2492,7 +2493,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  ret = get_coc(s, codsty, properties);
>  break;
>  case JPEG2000_COD:
> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>  av_log(s->avctx, AV_LOG_ERROR,
>  "COD marker found in a tile header but the codestream 
> belongs to the HOMOGENEOUS set\n");
>  return AVERROR_INVALIDDATA;
> @@ -2500,7 +2501,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  ret = get_cod(s, codsty, properties);
>  break;
>  case JPEG2000_RGN:
> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>  av_log(s->avctx, AV_LOG_ERROR,
>  "RGN marker found in a tile header but the codestream 
> belongs to the HOMOGENEOUS set\n");
>  return AVERROR_INVALIDDATA;
> @@ -2512,7 +2513,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  }
>  break;
>  case JPEG2000_QCC:
> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>  av_log(s->avctx, AV_LOG_ERROR,
>  "QCC marker found in a tile header but the codestream 
> belongs to the HOMOGENEOUS set\n");
>  return AVERROR_INVALIDDATA;
> @@ -2520,7 +2521,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  ret = get_qcc(s, len, qntsty, properties);
>  break;
>  case JPEG2000_QCD:
> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>  av_log(s->avctx, AV_LOG_ERROR,
>  "QCD marker found in a tile header but the codestream 
> belongs to the HOMOGENEOUS set\n");
>  return AVERROR_INVALIDDATA;
> @@ -2528,7 +2529,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  ret = get_qcd(s, len, qntsty, properties);
>  break;
>  case JPEG2000_POC:
> -if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
> +if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
>  av_log(s->avctx, AV_LOG_ERROR,
>  "POC marker found in a tile header but the codestream 
> belongs to the HOMOGENEOUS set\n");
>  return AVERROR_INVALIDDATA;
> @@ -2536,8 +2537,8 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  ret = get_poc(s, len, poc);
>  break;
>  case JPEG2000_SOT:
> -if (!s->in_tile_headers) {
> -s->in_tile_headers = 1;
> +if (!in_tile_headers) {
> +in_tile_headers = 1;
>  if (s->has_ppm) {
>  bytestream2_init(&s->packed_headers_stream, 
> s->packed_headers, s->packed_headers_size);
>  }
> @@ -2569,7 +2570,7 @@ static int 
> jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
>  break;
>  case JPEG2000_PPM:
>  // Packed headers, main header
> -if (s->in_tile_headers) {
> +if (in_tile_headers) {
>  av_log(s->avctx, AV_LOG_ERROR, "PPM Marker can only be in 
> Main header\n");
> 
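
The pattern this patch applies — moving per-frame state out of the long-lived decoder context into a function local, so it resets automatically on every call — can be sketched as follows. The struct and function names are mocks, not the actual Jpeg2000DecoderContext.

```c
/* Mock decoder context; in the real code this lives across frames. */
struct DecCtx {
    int in_tile_headers;
};

/* Old pattern: the flag is stored in the context and never reset, so the
 * second frame wrongly starts with the flag still set from frame one. */
static int parse_frame_stateful(struct DecCtx *s)
{
    int stale = s->in_tile_headers; /* value seen at the start of this frame */
    s->in_tile_headers = 1;         /* set once an SOT marker is found */
    return stale;
}

/* Fixed pattern (what the patch does): the flag is a local variable of the
 * header-parsing function, so every frame starts from 0 automatically. */
static int parse_frame_local(void)
{
    int in_tile_headers = 0;
    int stale = in_tile_headers;
    in_tile_headers = 1;
    (void)in_tile_headers;
    return stale;
}
```

With the stateful version, calling the parser twice on the same context leaks the flag into the second frame; the local version cannot, which is exactly what ticket 11266 needed.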

[FFmpeg-devel] [PATCH 2/5] avcodec/aom_film_grain: allocate film grain metadata dynamically

2024-10-24 Thread James Almer
This removes the ABI-breaking use of sizeof(AVFilmGrainParams), and achieves
the same size reduction to decoder structs as
08b1bffa49715a9615acc025dfbea252d8409e1f.

Signed-off-by: James Almer 
---
 libavcodec/aom_film_grain.c | 47 +
 libavcodec/aom_film_grain.h |  6 -
 libavcodec/h2645_sei.c  | 12 +-
 libavcodec/hevc/hevcdec.c   |  1 -
 4 files changed, 53 insertions(+), 13 deletions(-)

diff --git a/libavcodec/aom_film_grain.c b/libavcodec/aom_film_grain.c
index e302567ba5..de4437fd16 100644
--- a/libavcodec/aom_film_grain.c
+++ b/libavcodec/aom_film_grain.c
@@ -26,7 +26,9 @@
  */
 
 #include "libavutil/avassert.h"
+#include "libavutil/buffer.h"
 #include "libavutil/imgutils.h"
+#include "libavutil/mem.h"
 
 #include "aom_film_grain.h"
 #include "get_bits.h"
@@ -124,7 +126,7 @@ int ff_aom_parse_film_grain_sets(AVFilmGrainAFGS1Params *s,
 {
 GetBitContext gbc, *gb = &gbc;
 AVFilmGrainAOMParams *aom;
-AVFilmGrainParams *fgp, *ref = NULL;
+AVFilmGrainParams *fgp = NULL, *ref = NULL;
 int ret, num_sets, n, i, uv, num_y_coeffs, update_grain, luma_only;
 
 ret = init_get_bits8(gb, payload, payload_size);
@@ -135,28 +137,38 @@ int ff_aom_parse_film_grain_sets(AVFilmGrainAFGS1Params 
*s,
 if (!s->enable)
 return 0;
 
+for (int i = 0; i < FF_ARRAY_ELEMS(s->sets); i++)
+av_buffer_unref(&s->sets[i]);
+
 skip_bits(gb, 4); // reserved
 num_sets = get_bits(gb, 3) + 1;
 for (n = 0; n < num_sets; n++) {
 int payload_4byte, payload_size, set_idx, apply_units_log2, vsc_flag;
 int predict_scaling, predict_y_scaling, predict_uv_scaling[2];
 int payload_bits, start_position;
+size_t fgp_size;
 
 start_position = get_bits_count(gb);
 payload_4byte = get_bits1(gb);
 payload_size = get_bits(gb, payload_4byte ? 2 : 8);
 set_idx = get_bits(gb, 3);
-fgp = &s->sets[set_idx];
+fgp = av_film_grain_params_alloc(&fgp_size);
+if (!fgp)
+goto error;
 aom = &fgp->codec.aom;
 
 fgp->type = get_bits1(gb) ? AV_FILM_GRAIN_PARAMS_AV1 : 
AV_FILM_GRAIN_PARAMS_NONE;
-if (!fgp->type)
+if (!fgp->type) {
+av_freep(&fgp);
 continue;
+}
 
 fgp->seed = get_bits(gb, 16);
 update_grain = get_bits1(gb);
-if (!update_grain)
+if (!update_grain) {
+av_freep(&fgp);
 continue;
+}
 
 apply_units_log2  = get_bits(gb, 4);
 fgp->width  = get_bits(gb, 12) << apply_units_log2;
@@ -330,32 +342,47 @@ int ff_aom_parse_film_grain_sets(AVFilmGrainAFGS1Params 
*s,
 if (payload_bits > payload_size * 8)
 goto error;
 skip_bits(gb, payload_size * 8 - payload_bits);
+
+av_buffer_unref(&s->sets[set_idx]);
+s->sets[set_idx] = av_buffer_create((uint8_t *)fgp, fgp_size, NULL, 
NULL, 0);
+if (!s->sets[set_idx])
+goto error;
 }
 return 0;
 
 error:
-memset(s, 0, sizeof(*s));
+av_free(fgp);
+ff_aom_film_grain_uninit_params(s);
 return AVERROR_INVALIDDATA;
 }
 
 int ff_aom_attach_film_grain_sets(const AVFilmGrainAFGS1Params *s, AVFrame 
*frame)
 {
-AVFilmGrainParams *fgp;
 if (!s->enable)
 return 0;
 
 for (int i = 0; i < FF_ARRAY_ELEMS(s->sets); i++) {
-if (s->sets[i].type != AV_FILM_GRAIN_PARAMS_AV1)
+AVBufferRef *buf;
+
+if (!s->sets[i])
 continue;
-fgp = av_film_grain_params_create_side_data(frame);
-if (!fgp)
+
+buf = av_buffer_ref(s->sets[i]);
+if (!buf || !av_frame_new_side_data_from_buf(frame,
+ 
AV_FRAME_DATA_FILM_GRAIN_PARAMS, buf))
 return AVERROR(ENOMEM);
-memcpy(fgp, &s->sets[i], sizeof(*fgp));
 }
 
 return 0;
 }
 
+void ff_aom_film_grain_uninit_params(AVFilmGrainAFGS1Params *s)
+{
+for (int i = 0; i < FF_ARRAY_ELEMS(s->sets); i++)
+av_buffer_unref(&s->sets[i]);
+s->enable = 0;
+}
+
 // Taken from the AV1 spec. Range is [-2048, 2047], mean is 0 and stddev is 512
 static const int16_t gaussian_sequence[2048] = {
 56,568,   -180,  172,   124,   -84,   172,   -64,   -900,  24,   820,
diff --git a/libavcodec/aom_film_grain.h b/libavcodec/aom_film_grain.h
index 1f8c78f657..94cd9d9f67 100644
--- a/libavcodec/aom_film_grain.h
+++ b/libavcodec/aom_film_grain.h
@@ -28,11 +28,12 @@
 #ifndef AVCODEC_AOM_FILM_GRAIN_H
 #define AVCODEC_AOM_FILM_GRAIN_H
 
+#include "libavutil/buffer.h"
 #include "libavutil/film_grain_params.h"
 
 typedef struct AVFilmGrainAFGS1Params {
 int enable;
-AVFilmGrainParams sets[8];
+AVBufferRef *sets[8];
 } AVFilmGrainAFGS1Params;
 
 // Synthesizes film grain on top of `in` and stores the result to `out`. `out`
@@ -48,4 +49,7 @@ int ff_aom_parse_film_grain_sets(AVFilmGrainAFGS1Params *s,
 // Attach all valid film grain pa
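
The ownership model the patch switches to can be sketched with a toy refcounted buffer: allocate once, share by taking references, and free only when the last reference is dropped. The names and signatures below are illustrative only, not the libavutil AVBufferRef API.

```c
#include <stdlib.h>

/* Toy refcounted buffer mimicking the AVBufferRef pattern: the parsed film
 * grain set is allocated once, and attaching it to a frame takes a new ref
 * instead of copying the (large) params struct. */
typedef struct Buf {
    int refcount;
    int payload; /* stands in for the film grain params */
} Buf;

static Buf *buf_create(int payload)
{
    Buf *b = malloc(sizeof(*b));
    if (!b)
        return NULL;
    b->refcount = 1;
    b->payload  = payload;
    return b;
}

static Buf *buf_ref(Buf *b)
{
    b->refcount++;
    return b;
}

/* Returns 1 when this unref freed the buffer, 0 otherwise. */
static int buf_unref(Buf **pb)
{
    Buf *b = *pb;
    *pb = NULL;
    if (--b->refcount == 0) {
        free(b);
        return 1;
    }
    return 0;
}
```

This is why the patch can drop the `memcpy` in `ff_aom_attach_film_grain_sets`: the frame side data and `s->sets[i]` both reference the same underlying allocation.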

Re: [FFmpeg-devel] [FFmpeg-cvslog] fftools/ffplay: constrain supported YUV color spaces

2024-10-24 Thread Michael Niedermayer
On Thu, Oct 24, 2024 at 10:55:40AM +0200, Niklas Haas wrote:
> On Thu, 24 Oct 2024 02:44:34 +0200 Michael Niedermayer 
>  wrote:
> > On Fri, Feb 09, 2024 at 08:14:46PM +, Niklas Haas wrote:
> > > ffmpeg | branch: master | Niklas Haas  | Mon Feb  5 
> > > 19:28:04 2024 +0100| [c619d20906d039060efbeaa822daf8e949f3ef24] | 
> > > committer: Niklas Haas
> > >
> > > fftools/ffplay: constrain supported YUV color spaces
> > >
> > > SDL supports only these three matrices. Actually, it only supports these
> > > three combinations: BT.601+JPEG, BT.601+MPEG, BT.709+MPEG, but we have
> > > no way to restrict the specific *combination* of YUV range and YUV
> > > colorspace with the current filter design.
> > >
> > > See-Also: https://trac.ffmpeg.org/ticket/10839
> > >
> > > Instead of an incorrect conversion result, trying to play a YCgCo file
> > > with ffplay will simply error out with a "No conversion possible" error.
> > >
> > > > http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=c619d20906d039060efbeaa822daf8e949f3ef24
> > > ---
> > >
> > >  fftools/ffplay.c | 9 +
> > >  1 file changed, 9 insertions(+)
> >
> > this causes a regression with
> >
> > ./ffplay test25.nut
> > use the mouse to scale it up a bit to better see
> > and then repeatedly press arrow left, each time on the right side a different
> > mix of colors appears
> >
> > test25.nut is a mpeg4 yuv420p video with 25x25 size, in fact codec and 
> > content seem not to matter
> >
> > ==1071990== Conditional jump or move depends on uninitialised value(s)
> > ==1071990==at 0x1324673: av_clip_uint8_c (common.h:208)
> > ==1071990==by 0x13260C3: yuv2plane1_8_c (output.c:426)
> > ==1071990==by 0x12FCC0D: lum_planar_vscale (vscale.c:53)
> > ==1071990==by 0x12F0E0A: swscale (swscale.c:498)
> > ==1071990==by 0x12F320A: scale_internal (swscale.c:1046)
> > ==1071990==by 0x12F239C: scale_cascaded (swscale.c:875)
> > ==1071990==by 0x12F27D4: scale_internal (swscale.c:937)
> > ==1071990==by 0x12F3BBD: ff_sws_slice_worker (swscale.c:1240)
> > ==1071990==by 0x13DC790: run_jobs (slicethread.c:65)
> > ==1071990==by 0x13DC866: thread_worker (slicethread.c:89)
> > ==1071990==by 0x694B608: start_thread (pthread_create.c:477)
> > ==1071990==by 0x6A85352: clone (clone.S:95)
> > ==1071990==
> > ==1071990== Thread 83:
> > ==1071990== Conditional jump or move depends on uninitialised value(s)
> > ==1071990==at 0x1324673: av_clip_uint8_c (common.h:208)
> > ==1071990==by 0x132602D: yuv2planeX_8_c (output.c:416)
> > ==1071990==by 0x12FD038: chr_planar_vscale (vscale.c:100)
> > ==1071990==by 0x12F0E0A: swscale (swscale.c:498)
> > ==1071990==by 0x12F320A: scale_internal (swscale.c:1046)
> > ==1071990==by 0x12F239C: scale_cascaded (swscale.c:875)
> > ==1071990==by 0x12F27D4: scale_internal (swscale.c:937)
> > ==1071990==by 0x12F3BBD: ff_sws_slice_worker (swscale.c:1240)
> > ==1071990==by 0x13DC790: run_jobs (slicethread.c:65)
> > ==1071990==by 0x13DC866: thread_worker (slicethread.c:89)
> > ==1071990==by 0x694B608: start_thread (pthread_create.c:477)
> > ==1071990==by 0x6A85352: clone (clone.S:95)
> > ==1071990==
> > ==1071990== Conditional jump or move depends on uninitialised value(s)
> > ==1071990==at 0x1324673: av_clip_uint8_c (common.h:208)
> > ==1071990==by 0x132602D: yuv2planeX_8_c (output.c:416)
> > ==1071990==by 0x12FD09E: chr_planar_vscale (vscale.c:101)
> > ==1071990==by 0x12F0E0A: swscale (swscale.c:498)
> > ==1071990==by 0x12F320A: scale_internal (swscale.c:1046)
> > ==1071990==by 0x12F239C: scale_cascaded (swscale.c:875)
> > ==1071990==by 0x12F27D4: scale_internal (swscale.c:937)
> > ==1071990==by 0x12F3BBD: ff_sws_slice_worker (swscale.c:1240)
> > ==1071990==by 0x13DC790: run_jobs (slicethread.c:65)
> > ==1071990==by 0x13DC866: thread_worker (slicethread.c:89)
> > ==1071990==by 0x694B608: start_thread (pthread_create.c:477)
> > ==1071990==by 0x6A85352: clone (clone.S:95)
> >
> > [...]
> > --
> > Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
> >
> > "I am not trying to be anyone's saviour, I'm trying to think about the
> >  future and not be sad" - Elon Musk
> 
> Possibly an issue inside swscale, but I can't immediately see what. Can you
> please open a ticket for it and assign it to me?

https://trac.ffmpeg.org/ticket/11265

thx

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Avoid a single point of failure, be that a person or equipment.




Re: [FFmpeg-devel] [FFmpeg-cvslog] fftools/ffplay: constrain supported YUV color spaces

2024-10-24 Thread Niklas Haas
On Thu, 24 Oct 2024 02:44:34 +0200 Michael Niedermayer  
wrote:
> On Fri, Feb 09, 2024 at 08:14:46PM +, Niklas Haas wrote:
> > ffmpeg | branch: master | Niklas Haas  | Mon Feb  5 
> > 19:28:04 2024 +0100| [c619d20906d039060efbeaa822daf8e949f3ef24] | 
> > committer: Niklas Haas
> >
> > fftools/ffplay: constrain supported YUV color spaces
> >
> > SDL supports only these three matrices. Actually, it only supports these
> > three combinations: BT.601+JPEG, BT.601+MPEG, BT.709+MPEG, but we have
> > no way to restrict the specific *combination* of YUV range and YUV
> > colorspace with the current filter design.
> >
> > See-Also: https://trac.ffmpeg.org/ticket/10839
> >
> > Instead of an incorrect conversion result, trying to play a YCgCo file
> > with ffplay will simply error out with a "No conversion possible" error.
> >
> > > http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=c619d20906d039060efbeaa822daf8e949f3ef24
> > ---
> >
> >  fftools/ffplay.c | 9 +
> >  1 file changed, 9 insertions(+)
>
> this causes a regression with
>
> ./ffplay test25.nut
> use the mouse to scale it up a bit to better see
> and then repeatedly press arrow left, each time on the right side a different
> mix of colors appears
>
> test25.nut is a mpeg4 yuv420p video with 25x25 size, in fact codec and content 
> seem not to matter
>
> ==1071990== Conditional jump or move depends on uninitialised value(s)
> ==1071990==at 0x1324673: av_clip_uint8_c (common.h:208)
> ==1071990==by 0x13260C3: yuv2plane1_8_c (output.c:426)
> ==1071990==by 0x12FCC0D: lum_planar_vscale (vscale.c:53)
> ==1071990==by 0x12F0E0A: swscale (swscale.c:498)
> ==1071990==by 0x12F320A: scale_internal (swscale.c:1046)
> ==1071990==by 0x12F239C: scale_cascaded (swscale.c:875)
> ==1071990==by 0x12F27D4: scale_internal (swscale.c:937)
> ==1071990==by 0x12F3BBD: ff_sws_slice_worker (swscale.c:1240)
> ==1071990==by 0x13DC790: run_jobs (slicethread.c:65)
> ==1071990==by 0x13DC866: thread_worker (slicethread.c:89)
> ==1071990==by 0x694B608: start_thread (pthread_create.c:477)
> ==1071990==by 0x6A85352: clone (clone.S:95)
> ==1071990==
> ==1071990== Thread 83:
> ==1071990== Conditional jump or move depends on uninitialised value(s)
> ==1071990==at 0x1324673: av_clip_uint8_c (common.h:208)
> ==1071990==by 0x132602D: yuv2planeX_8_c (output.c:416)
> ==1071990==by 0x12FD038: chr_planar_vscale (vscale.c:100)
> ==1071990==by 0x12F0E0A: swscale (swscale.c:498)
> ==1071990==by 0x12F320A: scale_internal (swscale.c:1046)
> ==1071990==by 0x12F239C: scale_cascaded (swscale.c:875)
> ==1071990==by 0x12F27D4: scale_internal (swscale.c:937)
> ==1071990==by 0x12F3BBD: ff_sws_slice_worker (swscale.c:1240)
> ==1071990==by 0x13DC790: run_jobs (slicethread.c:65)
> ==1071990==by 0x13DC866: thread_worker (slicethread.c:89)
> ==1071990==by 0x694B608: start_thread (pthread_create.c:477)
> ==1071990==by 0x6A85352: clone (clone.S:95)
> ==1071990==
> ==1071990== Conditional jump or move depends on uninitialised value(s)
> ==1071990==at 0x1324673: av_clip_uint8_c (common.h:208)
> ==1071990==by 0x132602D: yuv2planeX_8_c (output.c:416)
> ==1071990==by 0x12FD09E: chr_planar_vscale (vscale.c:101)
> ==1071990==by 0x12F0E0A: swscale (swscale.c:498)
> ==1071990==by 0x12F320A: scale_internal (swscale.c:1046)
> ==1071990==by 0x12F239C: scale_cascaded (swscale.c:875)
> ==1071990==by 0x12F27D4: scale_internal (swscale.c:937)
> ==1071990==by 0x12F3BBD: ff_sws_slice_worker (swscale.c:1240)
> ==1071990==by 0x13DC790: run_jobs (slicethread.c:65)
> ==1071990==by 0x13DC866: thread_worker (slicethread.c:89)
> ==1071990==by 0x694B608: start_thread (pthread_create.c:477)
> ==1071990==by 0x6A85352: clone (clone.S:95)
>
> [...]
> --
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> "I am not trying to be anyone's saviour, I'm trying to think about the
>  future and not be sad" - Elon Musk

Possibly an issue inside swscale, but I can't immediately see what. Can you
please open a ticket for it and assign it to me?

Thanks


Re: [FFmpeg-devel] [PATCH v3 14/18] swscale/graph: add new high-level scaler dispatch mechanism

2024-10-24 Thread Anton Khirnov
Quoting Niklas Haas (2024-10-24 12:02:41)
> On Thu, 24 Oct 2024 11:30:12 +0200 Anton Khirnov  wrote:
> > Does this (or can it) support copy-free passthrough of individual
> > planes, for cases like YUV420P<->NV12?
> 
> Not currently, no. We could switch to AVBufferRefs for the plane pointers to
> add this functionality down the line, but it's not a high priority because
> doing this will require the much harder problem of rewriting the underlying
> scaler dispatch logic to begin with.
> 
> Doing this would not be terribly difficult either way, but the problem is that
> swscale currently does not exactly have a good concept of what's happening
> to each plane - it's all a jumble of ad-hoc cases.
> 
> One of my plans for SwsGraph is to first make a list of operations to perform
> on each plane, and then eliminate reduntant passes to figure out what special
> cases and/or noop passes can be optimized. But this has to wait a bit, as I'm
> first working on the immediate goal of adding support for more complex
> colorspaces (by chaining together multiple scaling passes).

Right, as long as it's reasonably implementable down the line I have no
objections. My concern was mainly about locking ourselves into a
high-level API that does not allow this, and this constraint then
propagating into the lower-level implementations.

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH v3 14/18] swscale/graph: add new high-level scaler dispatch mechanism

2024-10-24 Thread Niklas Haas
On Thu, 24 Oct 2024 13:08:36 +0200 Anton Khirnov  wrote:
> Quoting Niklas Haas (2024-10-24 12:02:41)
> > On Thu, 24 Oct 2024 11:30:12 +0200 Anton Khirnov  wrote:
> > > Does this (or can it) support copy-free passthrough of individual
> > > planes, for cases like YUV420P<->NV12?
> >
> > Not currently, no. We could switch to AVBufferRefs for the plane pointers to
> > add this functionality down the line, but it's not a high priority because
> > doing this will require the much harder problem of rewriting the underlying
> > scaler dispatch logic to begin with.
> >
> > Doing this would not be terribly difficult either way, but the problem is 
> > that
> > swscale currently does not exactly have a good concept of what's happening
> > to each plane - it's all a jumble of ad-hoc cases.
> >
> > One of my plans for SwsGraph is to first make a list of operations to 
> > perform
> > on each plane, and then eliminate redundant passes to figure out what 
> > special
> > cases and/or noop passes can be optimized. But this has to wait a bit, as 
> > I'm
> > first working on the immediate goal of adding support for more complex
> > colorspaces (by chaining together multiple scaling passes).
>
> Right, as long as it's reasonably implementable down the line I have no
> objections. My concern was mainly about locking ourselves into a
> high-level API that does not allow this, and this constraint then
> propagating into the lower-level implementations.

The public API explicitly documents this behavior. The idea is for users to
call `sws_scale_frame` on a dst frame that has no data pointers allocated.

>
> --
> Anton Khirnov


[FFmpeg-devel] [PATCH v4 3/3] avformat/movenc: add support for fragmented TTML muxing

2024-10-24 Thread Jan Ekström
From: Jan Ekström 

Attempts to base the fragmentation timing on other streams
as most receivers expect media fragments to be more or less
aligned.

Currently does not support fragmentation on the subtitle track only,
as the subtitle packet queue timings would have to be checked in
addition to the current fragmentation timing logic.

Signed-off-by: Jan Ekström 
---
 libavformat/movenc.c|   9 -
 libavformat/movenc_ttml.c   | 157 ++-
 tests/fate/mov.mak  |  21 +
 tests/ref/fate/mov-mp4-fragmented-ttml-dfxp | 430 
 tests/ref/fate/mov-mp4-fragmented-ttml-stpp | 430 
 5 files changed, 1034 insertions(+), 13 deletions(-)
 create mode 100644 tests/ref/fate/mov-mp4-fragmented-ttml-dfxp
 create mode 100644 tests/ref/fate/mov-mp4-fragmented-ttml-stpp

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index 73cc6f5845..3464ea03a4 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -8020,15 +8020,6 @@ static int mov_init(AVFormatContext *s)
 track->squash_fragment_samples_to_one =
 ff_is_ttml_stream_paragraph_based(track->par);
 
-if (mov->flags & FF_MOV_FLAG_FRAGMENT &&
-track->squash_fragment_samples_to_one) {
-av_log(s, AV_LOG_ERROR,
-   "Fragmentation is not currently supported for "
-   "TTML in MP4/ISMV (track synchronization between "
-   "subtitles and other media is not yet 
implemented)!\n");
-return AVERROR_PATCHWELCOME;
-}
-
 if (track->mode != MODE_ISM &&
 track->par->codec_tag == MOV_ISMV_TTML_TAG &&
 s->strict_std_compliance > FF_COMPLIANCE_UNOFFICIAL) {
diff --git a/libavformat/movenc_ttml.c b/libavformat/movenc_ttml.c
index 413eccfc0f..3e1ef053cf 100644
--- a/libavformat/movenc_ttml.c
+++ b/libavformat/movenc_ttml.c
@@ -55,6 +55,50 @@ static int mov_init_ttml_writer(MOVTrack *track, 
AVFormatContext **out_ctx)
 return 0;
 }
 
+static void mov_calculate_start_and_end_of_other_tracks(
+AVFormatContext *s, MOVTrack *track, int64_t *start_pts, int64_t *end_pts)
+{
+MOVMuxContext *mov = s->priv_data;
+
+// Initialize at the end of the previous document/fragment, which is NOPTS
+// until the first fragment is created.
+int64_t max_track_end_dts = *start_pts = track->end_pts;
+
+for (unsigned int i = 0; i < s->nb_streams; i++) {
+MOVTrack *other_track = &mov->tracks[i];
+
+// Skip our own track, any other track that needs squashing,
+// or any track which still has its start_dts at NOPTS or
+// any track that did not yet get any packets.
+if (track == other_track ||
+other_track->squash_fragment_samples_to_one ||
+other_track->start_dts == AV_NOPTS_VALUE ||
+!other_track->entry) {
+continue;
+}
+
+{
+int64_t picked_start = 
av_rescale_q_rnd(other_track->cluster[0].dts + other_track->cluster[0].cts,
+other_track->st->time_base,
+track->st->time_base,
+AV_ROUND_NEAR_INF | 
AV_ROUND_PASS_MINMAX);
+int64_t picked_end   = av_rescale_q_rnd(other_track->end_pts,
+other_track->st->time_base,
+track->st->time_base,
+AV_ROUND_NEAR_INF | 
AV_ROUND_PASS_MINMAX);
+
+if (*start_pts == AV_NOPTS_VALUE)
+*start_pts = picked_start;
+else if (picked_start >= track->end_pts)
+*start_pts = FFMIN(*start_pts, picked_start);
+
+max_track_end_dts = FFMAX(max_track_end_dts, picked_end);
+}
+}
+
+*end_pts = max_track_end_dts;
+}
+
 static int mov_write_ttml_document_from_queue(AVFormatContext *s,
   AVFormatContext *ttml_ctx,
   MOVTrack *track,
@@ -66,13 +110,85 @@ static int mov_write_ttml_document_from_queue(AVFormatContext *s,
 int64_t start_ts = track->start_dts == AV_NOPTS_VALUE ?
0 : (track->start_dts + track->track_duration);
 int64_t end_ts   = start_ts;
+unsigned int time_limited = 0;
+PacketList back_to_queue_list = { 0 };
+
+if (*out_start_ts != AV_NOPTS_VALUE) {
+// we have non-nopts values here, thus we have been given a time range
+time_limited = 1;
+start_ts = *out_start_ts;
+end_ts   = *out_start_ts + *out_duration;
+}
 
 if ((ret = avformat_write_header(ttml_ctx, NULL)) < 0) {
 return ret;
 }
 
 while (!avpriv_packet_l

[FFmpeg-devel] [PATCH v4 2/3] avcodec/avpacket: add functionality to prepend to AVPacketLists

2024-10-24 Thread Jan Ekström
From: Jan Ekström 

Signed-off-by: Jan Ekström 
---
 libavcodec/packet.c  | 20 +++-
 libavcodec/packet_internal.h |  2 ++
 2 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/libavcodec/packet.c b/libavcodec/packet.c
index 381001fd65..a6302340bb 100644
--- a/libavcodec/packet.c
+++ b/libavcodec/packet.c
@@ -546,6 +546,7 @@ int avpriv_packet_list_put(PacketList *packet_buffer,
int flags)
 {
 PacketListEntry *pktl = av_malloc(sizeof(*pktl));
+unsigned int update_end_point = 1;
 int ret;
 
 if (!pktl)
@@ -569,13 +570,22 @@ int avpriv_packet_list_put(PacketList *packet_buffer,
 
 pktl->next = NULL;
 
-if (packet_buffer->head)
-packet_buffer->tail->next = pktl;
-else
+if (packet_buffer->head) {
+if (flags & FF_PACKETLIST_FLAG_PREPEND) {
+pktl->next = packet_buffer->head;
+packet_buffer->head = pktl;
+update_end_point = 0;
+} else {
+packet_buffer->tail->next = pktl;
+}
+} else
 packet_buffer->head = pktl;
 
-/* Add the packet in the buffered packet list. */
-packet_buffer->tail = pktl;
+if (update_end_point) {
+/* Add the packet in the buffered packet list. */
+packet_buffer->tail = pktl;
+}
+
 return 0;
 }
 
diff --git a/libavcodec/packet_internal.h b/libavcodec/packet_internal.h
index 52fa6d9be9..9c0f4fead5 100644
--- a/libavcodec/packet_internal.h
+++ b/libavcodec/packet_internal.h
@@ -34,6 +34,8 @@ typedef struct PacketList {
 PacketListEntry *head, *tail;
 } PacketList;
 
+#define FF_PACKETLIST_FLAG_PREPEND (1 << 0) /**< Prepend created AVPacketList instead of appending */
+
 /**
  * Append an AVPacket to the list.
  *
-- 
2.47.0

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH 3/3] avcodec/ffv1: Implement new slice tiling

2024-10-24 Thread Michael Niedermayer
On Tue, Oct 01, 2024 at 10:31:26PM +0200, Michael Niedermayer wrote:
> This fixes corner cases (requires version 4 or a spec update)
> 
> Fixes: Ticket5548
> 
> Sponsored-by: Sovereign Tech Fund
> Signed-off-by: Michael Niedermayer 
> ---
>  libavcodec/ffv1.c| 21 +
>  libavcodec/ffv1.h|  1 +
>  libavcodec/ffv1dec.c |  8 
>  libavcodec/ffv1enc.c |  2 +-
>  4 files changed, 23 insertions(+), 9 deletions(-)

will apply with some more comments/documentation in the code and a bugfix

[...]
-- 
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Those who are best at talking, realize last or never when they are wrong.




[FFmpeg-devel] Add support for LJ92 compressed MLV files, attempt 02

2024-10-24 Thread South East
ffmpeg already has support for MLV raw video files (libavformat/mlvdec.c).
Since this was added, MLV has been extended to support LJ92 compressed
image data.  These patches build on lossless DNG support in 7.2-dev to
enable handling LJ92 MLV via existing libavcodec/mjpegdec.c code.

Sample LJ92 MLV file:
https://0x0.st/XlFm.mlv

FATE tests pass locally.  It was not obvious from the Developer guide how
to get this working.  The claim is "Running 'make fate' accomplishes
[running local FATE tests]", but this of course does not work until
you rsync the samples, build FATE etc, which is not mentioned:
https://www.ffmpeg.org/developer.html#Regression-tests-1

I suggest adding something like "but WILL NOT work until it is configured",
before "please see fate.html for details".  This would make it clearer
that you NEED to follow the link, rather than it existing as reference
material should you want it in the future.  Simply running "make fate"
fails, but in a way that looked like a normal test failure to me;
I assumed upstream was broken since the failure was repeatable and
occurred without my changes.

There is a long comment in patch 0001 that I don't expect to survive review,
but provides context for the code change.  As an ffmpeg noob I can't
make a good judgement on what level of commenting is appropriate.

For reference, attempt 1 can be seen in the archives, here:
https://ffmpeg.org//pipermail/ffmpeg-devel/2024-October/335245.html
From f61171a37c81bc5b7f7ba6075b38e96df2cdd371 Mon Sep 17 00:00:00 2001
From: stephen-e <33672591+reticulatedpi...@users.noreply.github.com>
Date: Thu, 24 Oct 2024 18:16:18 +0100
Subject: [PATCH 1/2] avcodec/mjpegdec: set bayer earlier if possible

Use av_pix_fmt_desc_get() to set the Bayer flag
earlier during mjpeg decoding than before.

dng_decode_jpeg() does this directly in tiff.c,
this change allows signalling via the pixel format
from a demuxer (to support LJ92 compressed raw video
in MLV containers, which is the next commit).
---
 libavcodec/mjpegdec.c | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/libavcodec/mjpegdec.c b/libavcodec/mjpegdec.c
index 86ec58713c..442a9878e1 100644
--- a/libavcodec/mjpegdec.c
+++ b/libavcodec/mjpegdec.c
@@ -508,6 +508,20 @@ int ff_mjpeg_decode_sof(MJpegDecodeContext *s)
 }
 }
 
+// Not all files can give us this pix_fmt info at this point.
+// For those that can, we can set Bayer flag early.
+// Without this, at least LJ92 compressed MLV files fail to parse further down,
+// because they're Bayer but the flag isn't set at time of check.
+// Possibly I'm supposed to extend the above wall of flag checks and bit ops,
+// which has no comments on intent or purpose?  Instead, I call
+// av_pix_fmt_desc_get(), but notably, this is called again further down.
+// Moving it up here makes FATE fail because the switch below this point changes
+// s->avctx->pix_fmt used in the call.  Duplicating it here works, although this
+// feels ugly.
+s->pix_desc = av_pix_fmt_desc_get(s->avctx->pix_fmt);
+if (s->pix_desc && (s->pix_desc->flags & AV_PIX_FMT_FLAG_BAYER))
+s->bayer = 1;
+
 if (s->bayer) {
 if (pix_fmt_id != 0x && pix_fmt_id != 0x1100)
 goto unk_pixfmt;
-- 
2.45.2

From db0f076e8f724deee604af7a1a85c1d130f1f87d Mon Sep 17 00:00:00 2001
From: stephen-e <33672591+reticulatedpi...@users.noreply.github.com>
Date: Mon, 21 Oct 2024 16:35:49 +0100
Subject: [PATCH 2/2] avformat/mlvdec: add LJ92 support

MLV files can contain LJ92 compressed raw video data.
MJPEG codec can be used to handle these.
---
 libavformat/mlvdec.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/libavformat/mlvdec.c b/libavformat/mlvdec.c
index 1a6d38f37c..2eb21a3aab 100644
--- a/libavformat/mlvdec.c
+++ b/libavformat/mlvdec.c
@@ -44,8 +44,9 @@
 
 #define MLV_AUDIO_CLASS_WAV  1
 
-#define MLV_CLASS_FLAG_DELTA 0x40
 #define MLV_CLASS_FLAG_LZMA  0x80
+#define MLV_CLASS_FLAG_DELTA 0x40
+#define MLV_CLASS_FLAG_LJ92  0x20
 
 typedef struct {
 AVIOContext *pb[101];
@@ -298,9 +299,12 @@ static int read_header(AVFormatContext *avctx)
 if ((mlv->class[0] & (MLV_CLASS_FLAG_DELTA|MLV_CLASS_FLAG_LZMA)))
 avpriv_request_sample(avctx, "compression");
 vst->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
-switch (mlv->class[0] & ~(MLV_CLASS_FLAG_DELTA|MLV_CLASS_FLAG_LZMA)) {
+switch (mlv->class[0] & ~(MLV_CLASS_FLAG_DELTA|MLV_CLASS_FLAG_LZMA|MLV_CLASS_FLAG_LJ92)) {
 case MLV_VIDEO_CLASS_RAW:
-vst->codecpar->codec_id = AV_CODEC_ID_RAWVIDEO;
+if (mlv->class[0] & MLV_CLASS_FLAG_LJ92)
+vst->codecpar->codec_id = AV_CODEC_ID_MJPEG;
+else
+vst->codecpar->codec_id = AV_CODEC_ID_RAWVIDEO;
 break;
 case MLV_VIDEO_CLASS_YUV:
 vst->codec

[FFmpeg-devel] [PATCH v4 00/13] major refactor and new scaling API

2024-10-24 Thread Niklas Haas
Changes since v3:
- Make SwsInternal a superset of SwsContext, instead of a separate struct
- Fix minor bug in the calculation of SWS_STRICT

Overall I prefer this version; it simplifies things and allows us to split
apart the cosmetic and non-cosmetic commits very cleanly.



[FFmpeg-devel] [PATCH v4 11/13] tests/swscale: rewrite on top of new API

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

This rewrite cleans up the code to use AVFrames and the new swscale API. The
log format has also been simplified and expanded to account for the new
options. (Not yet implemented)

The self testing code path has also been expanded to test the new swscale
implementation against the old one, to serve as an unchanging reference. This
does not accomplish much yet, but serves as a framework for future work.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libswscale/tests/swscale.c | 665 -
 1 file changed, 284 insertions(+), 381 deletions(-)

diff --git a/libswscale/tests/swscale.c b/libswscale/tests/swscale.c
index af8069f728..c11a46024e 100644
--- a/libswscale/tests/swscale.c
+++ b/libswscale/tests/swscale.c
@@ -1,4 +1,5 @@
 /*
+ * Copyright (C) 2024  Niklas Haas
  * Copyright (C) 2003-2011 Michael Niedermayer 
  *
  * This file is part of FFmpeg.
@@ -26,424 +27,307 @@
 
 #undef HAVE_AV_CONFIG_H
 #include "libavutil/cpu.h"
-#include "libavutil/imgutils.h"
-#include "libavutil/mem.h"
-#include "libavutil/avutil.h"
-#include "libavutil/crc.h"
-#include "libavutil/opt.h"
 #include "libavutil/pixdesc.h"
 #include "libavutil/lfg.h"
 #include "libavutil/sfc64.h"
+#include "libavutil/frame.h"
+#include "libavutil/pixfmt.h"
+#include "libavutil/avassert.h"
+#include "libavutil/macros.h"
 
 #include "libswscale/swscale.h"
 
-/* HACK Duplicated from swscale_internal.h.
- * Should be removed when a cleaner pixel format system exists. */
-#define isGray(x)  \
-((x) == AV_PIX_FMT_GRAY8   || \
- (x) == AV_PIX_FMT_YA8 || \
- (x) == AV_PIX_FMT_GRAY16BE|| \
- (x) == AV_PIX_FMT_GRAY16LE|| \
- (x) == AV_PIX_FMT_YA16BE  || \
- (x) == AV_PIX_FMT_YA16LE)
-#define hasChroma(x)   \
-(!(isGray(x)|| \
-   (x) == AV_PIX_FMT_MONOBLACK || \
-   (x) == AV_PIX_FMT_MONOWHITE))
-
-static av_always_inline int isALPHA(enum AVPixelFormat pix_fmt)
-{
-const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
-return desc->flags & AV_PIX_FMT_FLAG_ALPHA;
-}
+enum {
+WIDTH  = 96,
+HEIGHT = 96,
+};
 
-static double prob = 1;
-FFSFC64 prng_state;
+struct options {
+enum AVPixelFormat src_fmt;
+enum AVPixelFormat dst_fmt;
+double prob;
+};
 
-static uint64_t getSSD(const uint8_t *src1, const uint8_t *src2,
-   int stride1, int stride2, int w, int h)
-{
-int x, y;
-uint64_t ssd = 0;
+struct mode {
+SwsFlags flags;
+SwsDither dither;
+};
 
-for (y = 0; y < h; y++) {
-for (x = 0; x < w; x++) {
-int d = src1[x + y * stride1] - src2[x + y * stride2];
-ssd += d * d;
-}
-}
-return ssd;
-}
+const int dst_w[] = { WIDTH,  WIDTH  - WIDTH  / 3, WIDTH  + WIDTH  / 3 };
+const int dst_h[] = { HEIGHT, HEIGHT - HEIGHT / 3, HEIGHT + HEIGHT / 3 };
+
+const struct mode modes[] = {
+{ SWS_FAST_BILINEAR },
+{ SWS_BILINEAR },
+{ SWS_BICUBIC },
+{ SWS_X | SWS_BITEXACT },
+{ SWS_POINT },
+{ SWS_AREA | SWS_ACCURATE_RND },
+{ SWS_BICUBIC | SWS_FULL_CHR_H_INT | SWS_FULL_CHR_H_INP },
+{0}, // test defaults
+};
 
-static uint64_t getSSD0(int ref, const uint8_t *src1, int stride1,
-int w, int h)
-{
-int x, y;
-uint64_t ssd = 0;
+static FFSFC64 prng_state;
+static SwsContext *sws[3]; /* reused between tests for efficiency */
 
-for (y = 0; y < h; y++) {
-for (x = 0; x < w; x++) {
-int d = src1[x + y * stride1] - ref;
-ssd += d * d;
-}
-}
-return ssd;
+static int fmt_comps(enum AVPixelFormat fmt)
+{
+const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
+int comps = desc->nb_components >= 3 ? 0b111 : 0b1;
+if (desc->flags & AV_PIX_FMT_FLAG_ALPHA)
+comps |= 0b1000;
+return comps;
 }
 
-struct Results {
-uint64_t ssdY;
-uint64_t ssdU;
-uint64_t ssdV;
-uint64_t ssdA;
-uint32_t crc;
-};
-
-// test by ref -> src -> dst -> out & compare out against ref
-// ref & out are YV12
-static int doTest(const uint8_t * const ref[4], int refStride[4], int w, int h,
-  enum AVPixelFormat srcFormat, enum AVPixelFormat dstFormat,
-  int srcW, int srcH, int dstW, int dstH, int flags,
-  struct Results *r)
+static void get_mse(int mse[4], const AVFrame *a, const AVFrame *b, int comps)
 {
-const AVPixFmtDescriptor *desc_yuva420p = av_pix_fmt_desc_get(AV_PIX_FMT_YUVA420P);
-const AVPixFmtDescriptor *desc_src  = av_pix_fmt_desc_get(srcFormat);
-const AVPixFmtDescriptor *desc_dst  = av_pix_fmt_desc_get(dstFormat);
-static enum AVPixelFormat cur_srcFormat;
-static int cur_srcW, cur_srcH;
-static const uint8_t *src[4];
-static int srcStride[4];
-uint8_t *dst[4] = { 0 };
-uint8_t *out[4] = { 0 };
-int dstStride[4] = {0};

[FFmpeg-devel] [PATCH v4 05/13] swscale: expose SwsContext publicly

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

Following in the footsteps of the work in the previous commit, it's now
relatively straightforward to expose the options struct publicly as
SwsContext. This is a step towards making this more user friendly, as
well as following API conventions established elsewhere.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libswscale/swscale.h  |  93 +++--
 libswscale/swscale_internal.h |  45 +
 libswscale/utils.c| 123 +++---
 3 files changed, 144 insertions(+), 117 deletions(-)

diff --git a/libswscale/swscale.h b/libswscale/swscale.h
index 50c705ae06..4baef532b6 100644
--- a/libswscale/swscale.h
+++ b/libswscale/swscale.h
@@ -42,8 +42,6 @@
 #include "version.h"
 #endif
 
-typedef struct SwsContext SwsContext;
-
 /**
  * @defgroup libsws libswscale
  * Color conversion and scaling library.
@@ -65,17 +63,98 @@ const char *swscale_configuration(void);
 const char *swscale_license(void);
 
 /**
- * Get the AVClass for swsContext. It can be used in combination with
+ * Get the AVClass for SwsContext. It can be used in combination with
  * AV_OPT_SEARCH_FAKE_OBJ for examining options.
  *
  * @see av_opt_find().
  */
 const AVClass *sws_get_class(void);
 
-/**
- * Allocate an empty SwsContext. This must be filled and passed to
- * sws_init_context(). For filling see AVOptions, options.c and
- * sws_setColorspaceDetails().
+/**
+ * Flags and quality settings *
+ **/
+
+typedef enum SwsDither {
+SWS_DITHER_NONE = 0, /* disable dithering */
+SWS_DITHER_AUTO, /* auto-select from preset */
+SWS_DITHER_BAYER,/* ordered dither matrix */
+SWS_DITHER_ED,   /* error diffusion */
+SWS_DITHER_A_DITHER, /* arithmetic addition */
+SWS_DITHER_X_DITHER, /* arithmetic xor */
+SWS_DITHER_NB,   /* not part of the ABI */
+} SwsDither;
+
+typedef enum SwsAlphaBlend {
+SWS_ALPHA_BLEND_NONE = 0,
+SWS_ALPHA_BLEND_UNIFORM,
+SWS_ALPHA_BLEND_CHECKERBOARD,
+SWS_ALPHA_BLEND_NB,  /* not part of the ABI */
+} SwsAlphaBlend;
+
+/***
+ * Context creation and management *
+ ***/
+
+/**
+ * Main external API structure. New fields can be added to the end with
+ * minor version bumps. Removal, reordering and changes to existing fields
+ * require a major version bump. sizeof(SwsContext) is not part of the ABI.
+ */
+typedef struct SwsContext {
+const AVClass *av_class;
+
+/**
+ * Private data of the user, can be used to carry app specific stuff.
+ */
+void *opaque;
+
+/**
+ * Bitmask of SWS_*.
+ */
+unsigned flags;
+
+/**
+ * Extra parameters for fine-tuning certain scalers.
+ */
+double scaler_params[2];
+
+/**
+ * How many threads to use for processing, or 0 for automatic selection.
+ */
+int threads;
+
+/**
+ * Dither mode.
+ */
+SwsDither dither;
+
+/**
+ * Alpha blending mode. See `SwsAlphaBlend` for details.
+ */
+SwsAlphaBlend alpha_blend;
+
+/**
+ * Use gamma correct scaling.
+ */
+int gamma_flag;
+
+/**
+ * Frame property overrides.
+ */
+int src_w, src_h;  ///< Width and height of the source frame
+int dst_w, dst_h;  ///< Width and height of the destination frame
+int src_format;///< Source pixel format
+int dst_format;///< Destination pixel format
+int src_range; ///< Source is full range
+int dst_range; ///< Destination is full range
+int src_v_chr_pos; ///< Source vertical chroma position in luma grid / 256
+int src_h_chr_pos; ///< Source horizontal chroma position
+int dst_v_chr_pos; ///< Destination vertical chroma position
+int dst_h_chr_pos; ///< Destination horizontal chroma position
+} SwsContext;
+
+/**
+ * Allocate an empty SwsContext and set its fields to default values.
  */
 SwsContext *sws_alloc_context(void);
 
diff --git a/libswscale/swscale_internal.h b/libswscale/swscale_internal.h
index d459b79af3..12fa406e2c 100644
--- a/libswscale/swscale_internal.h
+++ b/libswscale/swscale_internal.h
@@ -73,23 +73,6 @@ static inline SwsInternal *sws_internal(const SwsContext *sws)
 return (SwsInternal *) sws;
 }
 
-typedef enum SwsDither {
-SWS_DITHER_NONE = 0,
-SWS_DITHER_AUTO,
-SWS_DITHER_BAYER,
-SWS_DITHER_ED,
-SWS_DITHER_A_DITHER,
-SWS_DITHER_X_DITHER,
-SWS_DITHER_NB,
-} SwsDither;
-
-typedef enum SwsAlphaBlend {
-SWS_ALPHA_BLEND_NONE  = 0,
-SWS_ALPHA_BLEND_UNIFORM,
-SWS_ALPHA_BLEND_CHECKERBOARD,
-SWS_ALPHA_BLEND_NB,
-} SwsAlphaBlend;
-
 typedef struct Range {
 unsigned int start;
 unsigned int len;
@@ -329,32 +312,10 @@ struct SwsFilterDescriptor;
 
 /* This struct should be aligned on at least a 32-byte boundary. */
 struct SwsInternal {
-/* Currently active user-facing options. */
-struct {
-   

[FFmpeg-devel] [PATCH v4 13/13] avfilter/vf_scale: switch to new swscale API

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

Most logic from this filter has been co-opted into swscale itself,
allowing the resulting filter to be substantially simpler as it no
longer has to worry about context initialization, interlacing, etc.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libavfilter/vf_scale.c | 354 +
 1 file changed, 72 insertions(+), 282 deletions(-)

diff --git a/libavfilter/vf_scale.c b/libavfilter/vf_scale.c
index a89ebe8c47..4afad8d958 100644
--- a/libavfilter/vf_scale.c
+++ b/libavfilter/vf_scale.c
@@ -31,6 +31,7 @@
 #include "filters.h"
 #include "formats.h"
 #include "framesync.h"
+#include "libavutil/pixfmt.h"
 #include "scale_eval.h"
 #include "video.h"
 #include "libavutil/eval.h"
@@ -131,10 +132,7 @@ enum EvalMode {
 
 typedef struct ScaleContext {
 const AVClass *class;
-struct SwsContext *sws; ///< software scaler context
-struct SwsContext *isws[2]; ///< software scaler context for interlaced material
-// context used for forwarding options to sws
-struct SwsContext *sws_opts;
+SwsContext *sws;
 FFFrameSync fs;
 
 /**
@@ -149,8 +147,6 @@ typedef struct ScaleContext {
 
 int hsub, vsub; ///< chroma subsampling
 int slice_y;///< top of current output slice
-int input_is_pal;   ///< set to 1 if the input format is paletted
-int output_is_pal;  ///< set to 1 if the output format is paletted
 int interlaced;
 int uses_ref;
 
@@ -170,10 +166,6 @@ typedef struct ScaleContext {
 
 int in_chroma_loc;
 int out_chroma_loc;
-int out_h_chr_pos;
-int out_v_chr_pos;
-int in_h_chr_pos;
-int in_v_chr_pos;
 
 int force_original_aspect_ratio;
 int force_divisible_by;
@@ -334,40 +326,24 @@ revert:
 static av_cold int preinit(AVFilterContext *ctx)
 {
 ScaleContext *scale = ctx->priv;
-int ret;
 
-scale->sws_opts = sws_alloc_context();
-if (!scale->sws_opts)
+scale->sws = sws_alloc_context();
+if (!scale->sws)
 return AVERROR(ENOMEM);
 
 // set threads=0, so we can later check whether the user modified it
-ret = av_opt_set_int(scale->sws_opts, "threads", 0, 0);
-if (ret < 0)
-return ret;
+scale->sws->threads = 0;
 
 ff_framesync_preinit(&scale->fs);
 
 return 0;
 }
 
-static const int sws_colorspaces[] = {
-AVCOL_SPC_UNSPECIFIED,
-AVCOL_SPC_RGB,
-AVCOL_SPC_BT709,
-AVCOL_SPC_BT470BG,
-AVCOL_SPC_SMPTE170M,
-AVCOL_SPC_FCC,
-AVCOL_SPC_SMPTE240M,
-AVCOL_SPC_BT2020_NCL,
--1
-};
-
 static int do_scale(FFFrameSync *fs);
 
 static av_cold int init(AVFilterContext *ctx)
 {
 ScaleContext *scale = ctx->priv;
-int64_t threads;
 int ret;
 
 if (ctx->filter == &ff_vf_scale2ref)
@@ -407,14 +383,13 @@ static av_cold int init(AVFilterContext *ctx)
 if (ret < 0)
 return ret;
 
-if (scale->in_color_matrix != -1 &&
-!ff_fmt_is_in(scale->in_color_matrix, sws_colorspaces)) {
+if (scale->in_color_matrix != -1 && !sws_test_colorspace(scale->in_color_matrix, 0)) {
 av_log(ctx, AV_LOG_ERROR, "Unsupported input color matrix '%s'\n",
av_color_space_name(scale->in_color_matrix));
 return AVERROR(EINVAL);
 }
 
-if (!ff_fmt_is_in(scale->out_color_matrix, sws_colorspaces)) {
+if (scale->out_color_matrix != -1 && !sws_test_colorspace(scale->out_color_matrix, 1)) {
 av_log(ctx, AV_LOG_ERROR, "Unsupported output color matrix '%s'\n",
av_color_space_name(scale->out_color_matrix));
 return AVERROR(EINVAL);
@@ -424,25 +399,18 @@ static av_cold int init(AVFilterContext *ctx)
scale->w_expr, scale->h_expr, (char *)av_x_if_null(scale->flags_str, ""), scale->interlaced);
 
 if (scale->flags_str && *scale->flags_str) {
-ret = av_opt_set(scale->sws_opts, "sws_flags", scale->flags_str, 0);
+ret = av_opt_set(scale->sws, "sws_flags", scale->flags_str, 0);
 if (ret < 0)
 return ret;
 }
 
 for (int i = 0; i < FF_ARRAY_ELEMS(scale->param); i++)
-if (scale->param[i] != DBL_MAX) {
-ret = av_opt_set_double(scale->sws_opts, i ? "param1" : "param0",
-scale->param[i], 0);
-if (ret < 0)
-return ret;
-}
+if (scale->param[i] != DBL_MAX)
+scale->sws->scaler_params[i] = scale->param[i];
 
 // use generic thread-count if the user did not set it explicitly
-ret = av_opt_get_int(scale->sws_opts, "threads", 0, &threads);
-if (ret < 0)
-return ret;
-if (!threads)
-av_opt_set_int(scale->sws_opts, "threads", ff_filter_get_nb_threads(ctx), 0);
+if (!scale->sws->threads)
+scale->sws->threads = ff_filter_get_nb_threads(ctx);
 
 if (ctx->filter != &ff_vf_scale2ref && scale->uses_ref) {
 AVFilterPad pad = {
@@ -464,11 +432,7 @@ static av_cold void uni

[FFmpeg-devel] [PATCH v4 03/13] swscale/internal: use static_assert for enforcing offsets

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

Instead of sprinkling av_assert0 into random init functions.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libswscale/swscale_internal.h | 11 +++
 libswscale/utils.c|  2 --
 libswscale/x86/swscale.c  |  4 
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/libswscale/swscale_internal.h b/libswscale/swscale_internal.h
index b8820ea0ba..6b85ecadae 100644
--- a/libswscale/swscale_internal.h
+++ b/libswscale/swscale_internal.h
@@ -22,6 +22,7 @@
 #define SWSCALE_SWSCALE_INTERNAL_H
 
 #include 
+#include 
 
 #include "config.h"
 #include "swscale.h"
@@ -705,6 +706,16 @@ struct SwsInternal {
 };
 //FIXME check init (where 0)
 
+static_assert(offsetof(SwsInternal, redDither) + DITHER32_INT == offsetof(SwsInternal, dither32),
+  "dither32 must be at the same offset as redDither + DITHER32_INT");
+
+#if ARCH_X86_64
+/* x86 yuv2gbrp uses the SwsInternal for yuv coefficients
+   if struct offsets change the asm needs to be updated too */
+static_assert(offsetof(SwsInternal, yuv2rgb_y_offset) == 40292,
+  "yuv2rgb_y_offset must be updated in x86 asm");
+#endif
+
 SwsFunc ff_yuv2rgb_get_func_ptr(SwsInternal *c);
 int ff_yuv2rgb_c_init_tables(SwsInternal *c, const int inv_table[4],
  int fullRange, int brightness,
diff --git a/libswscale/utils.c b/libswscale/utils.c
index 31c136ad15..87591bdabd 100644
--- a/libswscale/utils.c
+++ b/libswscale/utils.c
@@ -1234,8 +1234,6 @@ SwsContext *sws_alloc_context(void)
 {
 SwsInternal *c = av_mallocz(sizeof(SwsInternal));
 
-av_assert0(offsetof(SwsInternal, redDither) + DITHER32_INT == offsetof(SwsInternal, dither32));
-
 if (c) {
 c->av_class = &ff_sws_context_class;
 av_opt_set_defaults(c);
diff --git a/libswscale/x86/swscale.c b/libswscale/x86/swscale.c
index 16182124c0..48f0aea3f2 100644
--- a/libswscale/x86/swscale.c
+++ b/libswscale/x86/swscale.c
@@ -792,10 +792,6 @@ switch(c->dstBpc){ \
 
 if(c->flags & SWS_FULL_CHR_H_INT) {
 
-/* yuv2gbrp uses the SwsInternal for yuv coefficients
-   if struct offsets change the asm needs to be updated too */
-av_assert0(offsetof(SwsInternal, yuv2rgb_y_offset) == 40292);
-
 #define YUV2ANYX_FUNC_CASE(fmt, name, opt)  \
 case fmt:   \
 c->yuv2anyX = ff_yuv2##name##_full_X_##opt; \
-- 
2.46.1



[FFmpeg-devel] [PATCH v4 08/13] swscale/internal: expose sws_init_single_context() internally

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

Used by the graph API swscale wrapper, for now.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libswscale/swscale_internal.h | 3 +++
 libswscale/utils.c| 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/libswscale/swscale_internal.h b/libswscale/swscale_internal.h
index 12fa406e2c..a24b49660a 100644
--- a/libswscale/swscale_internal.h
+++ b/libswscale/swscale_internal.h
@@ -957,6 +957,9 @@ extern const int32_t ff_yuv2rgb_coeffs[11][4];
 
 extern const AVClass ff_sws_context_class;
 
+int sws_init_single_context(SwsContext *sws, SwsFilter *srcFilter,
+SwsFilter *dstFilter);
+
 /**
  * Set c->convert_unscaled to an unscaled converter if one exists for the
  * specific source and destination formats, bit depths, flags, etc.
diff --git a/libswscale/utils.c b/libswscale/utils.c
index fb62e2e8e7..6fda7e6e51 100644
--- a/libswscale/utils.c
+++ b/libswscale/utils.c
@@ -1318,8 +1318,8 @@ static enum AVPixelFormat alphaless_fmt(enum AVPixelFormat fmt)
 }
 }
 
-static av_cold int sws_init_single_context(SwsContext *sws, SwsFilter *srcFilter,
-   SwsFilter *dstFilter)
+av_cold int sws_init_single_context(SwsContext *sws, SwsFilter *srcFilter,
+SwsFilter *dstFilter)
 {
 int i;
 int usesVFilter, usesHFilter;
-- 
2.46.1



[FFmpeg-devel] [PATCH v4 10/13] swscale: introduce new, dynamic scaling API

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

As part of a larger, ongoing effort to modernize and partially rewrite
libswscale, it was decided and generally agreed upon to introduce a new
public API for libswscale. This API is designed to be less stateful, more
explicitly defined, and considerably easier to use than the existing one.

Most of the API work has been already accomplished in the previous commits,
this commit merely introduces the ability to use sws_scale_frame()
dynamically, without prior sws_init_context() calls. Instead, the new API
takes frame properties from the frames themselves, and the implementation is
based on the new SwsGraph API, which we simply reinitialize as needed.

This high-level wrapper also recreates the logic that used to live inside
vf_scale for scaling interlaced frames, enabling it to be reused more easily
by end users.

Finally, this function is designed to simply copy refs directly when nothing
needs to be done, substantially improving throughput of the noop fast path.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libswscale/swscale.c  | 196 --
 libswscale/swscale.h  |  89 +++
 libswscale/swscale_internal.h |   7 +-
 libswscale/utils.c|   4 +
 libswscale/x86/output.asm |   2 +-
 5 files changed, 269 insertions(+), 29 deletions(-)

diff --git a/libswscale/swscale.c b/libswscale/swscale.c
index 7b6d142d31..c4ba6a30d4 100644
--- a/libswscale/swscale.c
+++ b/libswscale/swscale.c
@@ -1209,21 +1209,205 @@ int sws_receive_slice(SwsContext *sws, unsigned int slice_start,
  dst, c->frame_dst->linesize, slice_start, slice_height);
 }
 
+static void get_frame_pointers(const AVFrame *frame, uint8_t *data[4],
+   int linesize[4], int field)
+{
+for (int i = 0; i < 4; i++) {
+data[i] = frame->data[i];
+linesize[i] = frame->linesize[i];
+}
+
+if (!(frame->flags & AV_FRAME_FLAG_INTERLACED)) {
+av_assert1(!field);
+return;
+}
+
+if (field == FIELD_BOTTOM) {
+/* Odd rows, offset by one line */
+const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
+for (int i = 0; i < 4; i++) {
+data[i] += linesize[i];
+if (desc->flags & AV_PIX_FMT_FLAG_PAL)
+break;
+}
+}
+
+/* Take only every second line */
+for (int i = 0; i < 4; i++)
+linesize[i] <<= 1;
+}
+
+/* Subset of av_frame_ref() that only references (video) data buffers */
+static int frame_ref(AVFrame *dst, const AVFrame *src)
+{
+/* ref the buffers */
+for (int i = 0; i < FF_ARRAY_ELEMS(src->buf); i++) {
+if (!src->buf[i])
+continue;
+dst->buf[i] = av_buffer_ref(src->buf[i]);
+if (!dst->buf[i])
+return AVERROR(ENOMEM);
+}
+
+memcpy(dst->data, src->data, sizeof(src->data));
+memcpy(dst->linesize, src->linesize, sizeof(src->linesize));
+return 0;
+}
+
 int sws_scale_frame(SwsContext *sws, AVFrame *dst, const AVFrame *src)
 {
 int ret;
+SwsInternal *c = sws_internal(sws);
+if (!src || !dst)
+return AVERROR(EINVAL);
+
+if (c->frame_src) {
+/* Context has been initialized with explicit values, fall back to
+ * legacy API */
+ret = sws_frame_start(sws, dst, src);
+if (ret < 0)
+return ret;
+
+ret = sws_send_slice(sws, 0, src->height);
+if (ret >= 0)
+ret = sws_receive_slice(sws, 0, dst->height);
 
-ret = sws_frame_start(sws, dst, src);
+sws_frame_end(sws);
+
+return ret;
+}
+
+ret = sws_frame_setup(sws, dst, src);
 if (ret < 0)
 return ret;
 
-ret = sws_send_slice(sws, 0, src->height);
-if (ret >= 0)
-ret = sws_receive_slice(sws, 0, dst->height);
+if (!src->data[0])
+return 0;
 
-sws_frame_end(sws);
+if (c->graph[FIELD_TOP]->noop &&
+(!c->graph[FIELD_BOTTOM] || c->graph[FIELD_BOTTOM]->noop) &&
+src->buf[0] && !dst->buf[0] && !dst->data[0])
+{
+/* Lightweight refcopy */
+ret = frame_ref(dst, src);
+if (ret < 0)
+return ret;
+} else {
+if (!dst->data[0]) {
+ret = av_frame_get_buffer(dst, 0);
+if (ret < 0)
+return ret;
+}
 
-return ret;
+for (int field = 0; field < 2; field++) {
+SwsGraph *graph = c->graph[field];
+uint8_t *dst_data[4], *src_data[4];
+int dst_linesize[4], src_linesize[4];
+get_frame_pointers(dst, dst_data, dst_linesize, field);
+get_frame_pointers(src, src_data, src_linesize, field);
+sws_graph_run(graph, dst_data, dst_linesize,
+  (const uint8_t **) src_data, src_linesize);
+if (!graph->dst.interlaced)
+break;
+}
+}
+
+return 0;
+}
+
+s

[FFmpeg-devel] [PATCH v4 12/13] tests/swscale: add a benchmarking mode

2024-10-24 Thread Niklas Haas
From: Niklas Haas 

With the ability to set the thread count as well. This benchmark includes
the constant overhead of context initialization.

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Niklas Haas 
---
 libswscale/tests/swscale.c | 93 --
 1 file changed, 68 insertions(+), 25 deletions(-)

diff --git a/libswscale/tests/swscale.c b/libswscale/tests/swscale.c
index c11a46024e..f5ad4b3132 100644
--- a/libswscale/tests/swscale.c
+++ b/libswscale/tests/swscale.c
@@ -31,21 +31,22 @@
 #include "libavutil/lfg.h"
 #include "libavutil/sfc64.h"
 #include "libavutil/frame.h"
+#include "libavutil/opt.h"
+#include "libavutil/time.h"
 #include "libavutil/pixfmt.h"
 #include "libavutil/avassert.h"
 #include "libavutil/macros.h"
 
 #include "libswscale/swscale.h"
 
-enum {
-WIDTH  = 96,
-HEIGHT = 96,
-};
-
 struct options {
 enum AVPixelFormat src_fmt;
 enum AVPixelFormat dst_fmt;
 double prob;
+int w, h;
+int threads;
+int iters;
+int bench;
 };
 
 struct mode {
@@ -53,9 +54,6 @@ struct mode {
 SwsDither dither;
 };
 
-const int dst_w[] = { WIDTH,  WIDTH  - WIDTH  / 3, WIDTH  + WIDTH  / 3 };
-const int dst_h[] = { HEIGHT, HEIGHT - HEIGHT / 3, HEIGHT + HEIGHT / 3 };
-
 const struct mode modes[] = {
 { SWS_FAST_BILINEAR },
 { SWS_BILINEAR },
@@ -114,7 +112,8 @@ static void get_mse(int mse[4], const AVFrame *a, const AVFrame *b, int comps)
 }
 }
 
-static int scale_legacy(AVFrame *dst, const AVFrame *src, struct mode mode)
+static int scale_legacy(AVFrame *dst, const AVFrame *src, struct mode mode,
+struct options opts)
 {
 SwsContext *sws_legacy;
 int ret;
@@ -131,23 +130,28 @@ static int scale_legacy(AVFrame *dst, const AVFrame *src, struct mode mode)
 sws_legacy->dst_format = dst->format;
 sws_legacy->flags  = mode.flags;
 sws_legacy->dither = mode.dither;
+sws_legacy->threads= opts.threads;
+
+if ((ret = sws_init_context(sws_legacy, NULL, NULL)) < 0)
+goto error;
 
-ret = sws_init_context(sws_legacy, NULL, NULL);
-if (!ret)
+for (int i = 0; i < opts.iters; i++)
 ret = sws_scale_frame(sws_legacy, dst, src);
 
+error:
 sws_freeContext(sws_legacy);
 return ret;
 }
 
 /* Runs a series of ref -> src -> dst -> out, and compares out vs ref */
 static int run_test(enum AVPixelFormat src_fmt, enum AVPixelFormat dst_fmt,
-int dst_w, int dst_h, struct mode mode, const AVFrame *ref,
-const int mse_ref[4])
+int dst_w, int dst_h, struct mode mode, struct options opts,
+const AVFrame *ref, const int mse_ref[4])
 {
 AVFrame *src = NULL, *dst = NULL, *out = NULL;
 int mse[4], mse_sws[4], ret = -1;
 const int comps = fmt_comps(src_fmt) & fmt_comps(dst_fmt);
+int64_t time, time_ref = 0;
 
 src = av_frame_alloc();
 dst = av_frame_alloc();
@@ -174,12 +178,20 @@ static int run_test(enum AVPixelFormat src_fmt, enum AVPixelFormat dst_fmt,
 
 sws[1]->flags  = mode.flags;
 sws[1]->dither = mode.dither;
-if (sws_scale_frame(sws[1], dst, src) < 0) {
-fprintf(stderr, "Failed %s ---> %s\n", av_get_pix_fmt_name(src->format),
-av_get_pix_fmt_name(dst->format));
-goto error;
+sws[1]->threads = opts.threads;
+
+time = av_gettime_relative();
+
+for (int i = 0; i < opts.iters; i++) {
+if (sws_scale_frame(sws[1], dst, src) < 0) {
+fprintf(stderr, "Failed %s ---> %s\n", av_get_pix_fmt_name(src->format),
+av_get_pix_fmt_name(dst->format));
+goto error;
+}
 }
 
+time = av_gettime_relative() - time;
+
 if (sws_scale_frame(sws[2], out, dst) < 0) {
 fprintf(stderr, "Failed %s ---> %s\n", av_get_pix_fmt_name(dst->format),
 av_get_pix_fmt_name(out->format));
@@ -196,11 +208,13 @@ static int run_test(enum AVPixelFormat src_fmt, enum AVPixelFormat dst_fmt,
 
 if (!mse_ref) {
 /* Compare against the legacy swscale API as a reference */
-if (scale_legacy(dst, src, mode) < 0) {
+time_ref = av_gettime_relative();
+if (scale_legacy(dst, src, mode, opts) < 0) {
 fprintf(stderr, "Failed ref %s ---> %s\n", av_get_pix_fmt_name(src->format),
 av_get_pix_fmt_name(dst->format));
 goto error;
 }
+time_ref = av_gettime_relative() - time_ref;
 
 if (sws_scale_frame(sws[2], out, dst) < 0)
 goto error;
@@ -221,6 +235,15 @@ static int run_test(enum AVPixelFormat src_fmt, enum AVPixelFormat dst_fmt,
 }
 }
 
+if (opts.bench && time_ref) {
+printf("  time=%"PRId64" us, ref=%"PRId64" us, speedup=%.3fx %s\n",
+time / opts.iters, time_ref / opts.iters,
+(double) time_ref / time,
+time <= time_ref ? "faster" : "\033[1;33mslower\033[0m");
+} else if (o

[FFmpeg-devel] [PATCH] jpeg2000dec: reset in_tile_headers flag between frames

2024-10-24 Thread pal
From: Pierre-Anthony Lemieux 

---
 libavcodec/jpeg2000dec.c | 19 ++-
 libavcodec/jpeg2000dec.h |  1 -
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/libavcodec/jpeg2000dec.c b/libavcodec/jpeg2000dec.c
index 2e09b279dc..5b05ff2455 100644
--- a/libavcodec/jpeg2000dec.c
+++ b/libavcodec/jpeg2000dec.c
@@ -2402,6 +2402,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 Jpeg2000QuantStyle *qntsty  = s->qntsty;
 Jpeg2000POC *poc= &s->poc;
 uint8_t *properties = s->properties;
+uint8_t in_tile_headers = 0;
 
 for (;;) {
 int len, ret = 0;
@@ -2484,7 +2485,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 ret = get_cap(s, codsty);
 break;
 case JPEG2000_COC:
-if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
+if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
 av_log(s->avctx, AV_LOG_ERROR,
 "COC marker found in a tile header but the codestream belongs to the HOMOGENEOUS set\n");
 return AVERROR_INVALIDDATA;
@@ -2492,7 +2493,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 ret = get_coc(s, codsty, properties);
 break;
 case JPEG2000_COD:
-if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
+if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
 av_log(s->avctx, AV_LOG_ERROR,
 "COD marker found in a tile header but the codestream belongs to the HOMOGENEOUS set\n");
 return AVERROR_INVALIDDATA;
@@ -2500,7 +2501,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 ret = get_cod(s, codsty, properties);
 break;
 case JPEG2000_RGN:
-if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
+if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
 av_log(s->avctx, AV_LOG_ERROR,
 "RGN marker found in a tile header but the codestream belongs to the HOMOGENEOUS set\n");
 return AVERROR_INVALIDDATA;
@@ -2512,7 +2513,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 }
 break;
 case JPEG2000_QCC:
-if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
+if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
 av_log(s->avctx, AV_LOG_ERROR,
 "QCC marker found in a tile header but the codestream belongs to the HOMOGENEOUS set\n");
 return AVERROR_INVALIDDATA;
@@ -2520,7 +2521,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 ret = get_qcc(s, len, qntsty, properties);
 break;
 case JPEG2000_QCD:
-if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
+if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
 av_log(s->avctx, AV_LOG_ERROR,
 "QCD marker found in a tile header but the codestream belongs to the HOMOGENEOUS set\n");
 return AVERROR_INVALIDDATA;
@@ -2528,7 +2529,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 ret = get_qcd(s, len, qntsty, properties);
 break;
 case JPEG2000_POC:
-if (s->in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
+if (in_tile_headers == 1 && s->isHT && (!s->Ccap15_b11)) {
 av_log(s->avctx, AV_LOG_ERROR,
 "POC marker found in a tile header but the codestream belongs to the HOMOGENEOUS set\n");
 return AVERROR_INVALIDDATA;
@@ -2536,8 +2537,8 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 ret = get_poc(s, len, poc);
 break;
 case JPEG2000_SOT:
-if (!s->in_tile_headers) {
-s->in_tile_headers = 1;
+if (!in_tile_headers) {
+in_tile_headers = 1;
 if (s->has_ppm) {
+bytestream2_init(&s->packed_headers_stream, s->packed_headers, s->packed_headers_size);
 }
@@ -2569,7 +2570,7 @@ static int jpeg2000_read_main_headers(Jpeg2000DecoderContext *s)
 break;
 case JPEG2000_PPM:
 // Packed headers, main header
-if (s->in_tile_headers) {
+if (in_tile_headers) {
 av_log(s->avctx, AV_LOG_ERROR, "PPM Marker can only be in Main header\n");
 return AVERROR_INVALIDDATA;
 }
diff --git a/libavcodec/jpeg2000dec.h b/libavcodec/jpeg2000dec.h
index 78eba27ed9..fce3823164 100644
--- a/libavcodec/jpeg2000dec.h
+++ b/libavcodec/jpeg2000dec.h
@@ -86,7 +86,6 @@ typedef struct Jpeg2000DecoderContext {
 uint8_t *packed_he

[FFmpeg-devel] [PATCH] Fix crash when trying to get a fragment time for a non-existing fragment

2024-10-24 Thread Eugene Zemtsov via ffmpeg-devel
From: Eugene Zemtsov 

Bug: https://issues.chromium.org/issues/372994341
Change-Id: I695d625717c078ed6f84f44e58c34da858af4d3b
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/third_party/ffmpeg/+/5958151
Reviewed-by: Dale Curtis 
---
 libavformat/mov.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index b4390be44f..f213fd5b22 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -1672,6 +1672,8 @@ static int64_t get_frag_time(AVFormatContext *s, AVStream *dst_st,
 // to fragments that referenced this stream in the sidx
 if (sc->has_sidx) {
 frag_stream_info = get_frag_stream_info(frag_index, index, sc->id);
+if (!frag_stream_info)
+return AV_NOPTS_VALUE;
 if (frag_stream_info->sidx_pts != AV_NOPTS_VALUE)
 return frag_stream_info->sidx_pts;
 if (frag_stream_info->first_tfra_pts != AV_NOPTS_VALUE)
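[Editor's note] The fix follows the standard defensive pattern: a lookup that can legitimately return NULL must be checked before its result is dereferenced. A minimal sketch of the same shape, using hypothetical stand-in types rather than the real mov.c structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real MOV demuxer types. */
#define NOPTS_VALUE INT64_MIN

typedef struct FragStreamInfo { int64_t sidx_pts; } FragStreamInfo;

/* Mirrors get_frag_stream_info(): may return NULL when no entry matches. */
static FragStreamInfo *lookup_frag(FragStreamInfo *table, int n, int index)
{
    return (index >= 0 && index < n) ? &table[index] : NULL;
}

static int64_t frag_time(FragStreamInfo *table, int n, int index)
{
    FragStreamInfo *info = lookup_frag(table, n, index);
    if (!info)  /* the added guard: report "no timestamp" instead of crashing */
        return NOPTS_VALUE;
    return info->sidx_pts;
}
```

Without the guard, an out-of-range fragment index would dereference a null pointer, which is exactly the crash the patch addresses.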
-- 
2.47.0.163.g1226f6d8fa-goog

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


Re: [FFmpeg-devel] [PATCH] avcodec/ffv1: Support slice coding mode 1 with golomb rice

2024-10-24 Thread Lynne via ffmpeg-devel

On 25/10/2024 03:57, Michael Niedermayer wrote:

Sponsored-by: Sovereign Tech Fund
Signed-off-by: Michael Niedermayer 
---
  libavcodec/ffv1dec.c  | 20 ++--
  libavcodec/ffv1dec_template.c |  3 +++
  libavcodec/ffv1enc.c  | 23 ---
  3 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/libavcodec/ffv1dec.c b/libavcodec/ffv1dec.c
index ac8cc8a31dc..de5abfe9b41 100644
--- a/libavcodec/ffv1dec.c
+++ b/libavcodec/ffv1dec.c
@@ -120,9 +120,8 @@ static int is_input_end(RangeCoder *c, GetBitContext *gb, int ac)
  static int decode_plane(FFV1Context *f, FFV1SliceContext *sc,
  GetBitContext *gb,
  uint8_t *src, int w, int h, int stride, int plane_index,
- int pixel_stride)
+int pixel_stride, int ac)
  {
-const int ac = f->ac;
  int x, y;
  int16_t *sample[2];
  sample[0] = sc->sample_buffer + 3;
@@ -273,6 +272,7 @@ static int decode_slice(AVCodecContext *c, void *arg)
  AVFrame * const p = f->picture.f;
  const int  si = sc - f->slices;
  GetBitContext gb;
+int ac = f->ac || sc->slice_coding_mode == 1;
  
  if (!(p->flags & AV_FRAME_FLAG_KEY) && f->last_picture.f)
  ff_progress_frame_await(&f->last_picture, si);
@@ -305,7 +305,7 @@ static int decode_slice(AVCodecContext *c, void *arg)
  x  = sc->slice_x;
  y  = sc->slice_y;
  
-if (f->ac == AC_GOLOMB_RICE) {
+if (ac == AC_GOLOMB_RICE) {
  if (f->version == 3 && f->micro_version > 1 || f->version > 3)
  get_rac(&sc->c, (uint8_t[]) { 129 });
  sc->ac_byte_count = f->version > 2 || (!x && !y) ? sc->c.bytestream - sc->c.bytestream_start - 1 : 0;
@@ -320,17 +320,17 @@ static int decode_slice(AVCodecContext *c, void *arg)
  const int chroma_height = AV_CEIL_RSHIFT(height, f->chroma_v_shift);
  const int cx= x >> f->chroma_h_shift;
  const int cy= y >> f->chroma_v_shift;
-decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 1);
+decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 1, ac);
  
  if (f->chroma_planes) {
-decode_plane(f, sc, &gb, p->data[1] + ps*cx+cy*p->linesize[1], chroma_width, chroma_height, p->linesize[1], 1, 1);
-decode_plane(f, sc, &gb, p->data[2] + ps*cx+cy*p->linesize[2], chroma_width, chroma_height, p->linesize[2], 1, 1);
+decode_plane(f, sc, &gb, p->data[1] + ps*cx+cy*p->linesize[1], chroma_width, chroma_height, p->linesize[1], 1, 1, ac);
+decode_plane(f, sc, &gb, p->data[2] + ps*cx+cy*p->linesize[2], chroma_width, chroma_height, p->linesize[2], 1, 1, ac);
  }
  if (f->transparency)
-decode_plane(f, sc, &gb, p->data[3] + ps*x + y*p->linesize[3], width, height, p->linesize[3], (f->version >= 4 && !f->chroma_planes) ? 1 : 2, 1);
+decode_plane(f, sc, &gb, p->data[3] + ps*x + y*p->linesize[3], width, height, p->linesize[3], (f->version >= 4 && !f->chroma_planes) ? 1 : 2, 1, ac);
  } else if (f->colorspace == 0) {
- decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 2);
- decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0] + 1, width, height, p->linesize[0], 1, 2);
+ decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 2, ac);
+ decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0] + 1, width, height, p->linesize[0], 1, 2, ac);
  } else if (f->use32bit) {
  uint8_t *planes[4] = { p->data[0] + ps * x + y * p->linesize[0],
 p->data[1] + ps * x + y * p->linesize[1],
@@ -344,7 +344,7 @@ static int decode_slice(AVCodecContext *c, void *arg)
 p->data[3] + ps * x + y * p->linesize[3] };
  decode_rgb_frame(f, sc, &gb, planes, width, height, p->linesize);
  }
-if (f->ac != AC_GOLOMB_RICE && f->version > 2) {
+if (ac != AC_GOLOMB_RICE && f->version > 2) {
  int v;
  get_rac(&sc->c, (uint8_t[]) { 129 });
  v = sc->c.bytestream_end - sc->c.bytestream - 2 - 5*!!f->ec;
diff --git a/libavcodec/ffv1dec_template.c b/libavcodec/ffv1dec_template.c
index 2da6bd935dc..e983d1ba648 100644
--- a/libavcodec/ffv1dec_template.c
+++ b/libavcodec/ffv1dec_template.c
@@ -143,6 +143,9 @@ static int RENAME(decode_rgb_frame)(FFV1Context *f, FFV1SliceContext *sc,
  int transparency = f->transparency;
  int ac = f->ac;
  
+if (sc->slice_coding_mode == 1)
+ac = 1;
+
  for (x = 0; x < 4; x++) {
  sample[x][0] = RENAME(sc->sample_buffer) +  x * 2  * (w + 6) + 3;
  sample[x][1] = RENAME(sc->sample_buffer) + (x * 2 + 1) * (w + 6) + 3;
diff --git a/libavcodec/ffv1enc

[FFmpeg-devel] [PATCH 1/5] Partially revert "avcodec/h2645: allocate film grain metadata dynamically"

2024-10-24 Thread James Almer
AVFilmGrainAFGS1Params, the offending struct, is using sizeof(AVFilmGrainParams)
when it should not. This change also forgot to make the necessary changes to the
frame threading sync code.

Both of these will be fixed by the following commit.

Signed-off-by: James Almer 
---
 libavcodec/h2645_sei.c| 17 +
 libavcodec/h2645_sei.h|  2 +-
 libavcodec/hevc/hevcdec.c |  4 ++--
 3 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/libavcodec/h2645_sei.c b/libavcodec/h2645_sei.c
index 33551f5406..a481dbca2c 100644
--- a/libavcodec/h2645_sei.c
+++ b/libavcodec/h2645_sei.c
@@ -245,12 +245,7 @@ static int decode_registered_user_data(H2645SEI *h, GetByteContext *gb,
 
 provider_oriented_code = bytestream2_get_byteu(gb);
 if (provider_oriented_code == aom_grain_provider_oriented_code) {
-if (!h->aom_film_grain) {
-h->aom_film_grain = av_mallocz(sizeof(*h->aom_film_grain));
-if (!h->aom_film_grain)
-return AVERROR(ENOMEM);
-}
-return ff_aom_parse_film_grain_sets(h->aom_film_grain,
+return ff_aom_parse_film_grain_sets(&h->aom_film_grain,
 gb->buffer,
 bytestream2_get_bytes_left(gb));
 }
@@ -894,11 +889,9 @@ FF_ENABLE_DEPRECATION_WARNINGS
 }
 
 #if CONFIG_HEVC_SEI
-if (sei->aom_film_grain) {
-ret = ff_aom_attach_film_grain_sets(sei->aom_film_grain, frame);
-if (ret < 0)
-return ret;
-}
+ret = ff_aom_attach_film_grain_sets(&sei->aom_film_grain, frame);
+if (ret < 0)
+return ret;
 #endif
 
 return 0;
@@ -925,7 +918,7 @@ void ff_h2645_sei_reset(H2645SEI *s)
 s->ambient_viewing_environment.present = 0;
 s->mastering_display.present = 0;
 s->content_light.present = 0;
+s->aom_film_grain.enable = 0;
 
 av_freep(&s->film_grain_characteristics);
-av_freep(&s->aom_film_grain);
 }
diff --git a/libavcodec/h2645_sei.h b/libavcodec/h2645_sei.h
index 8bcdc2bc5f..f001427e16 100644
--- a/libavcodec/h2645_sei.h
+++ b/libavcodec/h2645_sei.h
@@ -138,10 +138,10 @@ typedef struct H2645SEI {
 H2645SEIAmbientViewingEnvironment ambient_viewing_environment;
 H2645SEIMasteringDisplay mastering_display;
 H2645SEIContentLight content_light;
+AVFilmGrainAFGS1Params aom_film_grain;
 
 // Dynamic allocations due to large size.
 H2645SEIFilmGrainCharacteristics* film_grain_characteristics;
-AVFilmGrainAFGS1Params* aom_film_grain;
 } H2645SEI;
 
 enum {
diff --git a/libavcodec/hevc/hevcdec.c b/libavcodec/hevc/hevcdec.c
index 1ea8df0fa0..900895598f 100644
--- a/libavcodec/hevc/hevcdec.c
+++ b/libavcodec/hevc/hevcdec.c
@@ -413,7 +413,7 @@ static int export_stream_params_from_sei(HEVCContext *s)
 }
 
 if ((s->sei.common.film_grain_characteristics && s->sei.common.film_grain_characteristics->present) ||
-(s->sei.common.aom_film_grain && s->sei.common.aom_film_grain->enable))
+s->sei.common.aom_film_grain.enable)
 avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN;
 
 return 0;
@@ -3268,7 +3268,7 @@ static int hevc_frame_start(HEVCContext *s, HEVCLayerContext *l,
 s->cur_frame->f->flags &= ~AV_FRAME_FLAG_KEY;
 
 s->cur_frame->needs_fg = ((s->sei.common.film_grain_characteristics && s->sei.common.film_grain_characteristics->present) ||
-  (s->sei.common.aom_film_grain && s->sei.common.aom_film_grain->enable)) &&
+  s->sei.common.aom_film_grain.enable) &&
 !(s->avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) &&
 !s->avctx->hwaccel;
 
-- 
2.47.0



[FFmpeg-devel] [PATCH 4/5] avcodec/h264dec: use the RefStruct API for h264db

2024-10-24 Thread James Almer
And ensure the buffer is synced between threads.

Signed-off-by: James Almer 
---
 libavcodec/h264_picture.c | 2 +-
 libavcodec/h264_slice.c   | 2 ++
 libavcodec/h264dec.c  | 2 +-
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/libavcodec/h264_picture.c b/libavcodec/h264_picture.c
index 371a794ec2..d07c3a0af8 100644
--- a/libavcodec/h264_picture.c
+++ b/libavcodec/h264_picture.c
@@ -215,7 +215,7 @@ int ff_h264_field_end(H264Context *h, H264SliceContext *sl, int in_setup)
 err = AVERROR_INVALIDDATA;
 if (sd) { // a decoding error may have happened before the side data could be allocated
 if (!h->h274db) {
-h->h274db = av_mallocz(sizeof(*h->h274db));
+h->h274db = ff_refstruct_allocz(sizeof(*h->h274db));
 if (!h->h274db)
 return AVERROR(ENOMEM);
 }
diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c
index 84595b1a8b..b88545a075 100644
--- a/libavcodec/h264_slice.c
+++ b/libavcodec/h264_slice.c
@@ -445,6 +445,8 @@ int ff_h264_update_thread_context(AVCodecContext *dst,
 h->sei.common.mastering_display = h1->sei.common.mastering_display;
 h->sei.common.content_light = h1->sei.common.content_light;
 
+ff_refstruct_replace(&h->h274db, h1->h274db);
+
 if (!h->cur_pic_ptr)
 return 0;
 
diff --git a/libavcodec/h264dec.c b/libavcodec/h264dec.c
index af0913ca2c..805732057a 100644
--- a/libavcodec/h264dec.c
+++ b/libavcodec/h264dec.c
@@ -156,7 +156,7 @@ void ff_h264_free_tables(H264Context *h)
 av_freep(&h->mb2b_xy);
 av_freep(&h->mb2br_xy);
 
-av_freep(&h->h274db);
+ff_refstruct_unref(&h->h274db);
 
 ff_refstruct_pool_uninit(&h->qscale_table_pool);
 ff_refstruct_pool_uninit(&h->mb_type_pool);
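[Editor's note] For readers unfamiliar with the API: the point of moving h274db to RefStruct is that ff_refstruct_replace() drops the destination's old reference and takes a new one on the source, which is what lets each frame thread share the same buffer safely. A toy model of that shape (this is NOT the real FFmpeg RefStruct implementation, just the same operations in miniature):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted object: a hidden header precedes the payload. */
typedef struct Ref { int refcount; } Ref;

static Ref *hdr(void *obj) { return (Ref *)obj - 1; }

static void *refz_alloc(size_t size)
{
    Ref *r = calloc(1, sizeof(*r) + size);
    if (!r)
        return NULL;
    r->refcount = 1;
    return r + 1;                /* payload follows the header */
}

static void refz_unref(void **objp)
{
    if (*objp && --hdr(*objp)->refcount == 0)
        free(hdr(*objp));
    *objp = NULL;
}

/* Replace semantics: ref the source first, then drop the old destination. */
static void refz_replace(void **dstp, void *src)
{
    if (*dstp == src)
        return;
    if (src)
        hdr(src)->refcount++;
    refz_unref(dstp);
    *dstp = src;
}
```

In the patch, ff_refstruct_replace(&h->h274db, h1->h274db) in update_thread_context plays the role of refz_replace() here: both contexts end up holding references to one shared object instead of each owning a plain malloc'd copy.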
-- 
2.47.0



[FFmpeg-devel] [PATCH 3/5] avcodec/h2645_sei: use the RefStruct API for film_grain_characteristics

2024-10-24 Thread James Almer
And ensure the buffer is synced between threads.

Signed-off-by: James Almer 
---
 libavcodec/h2645_sei.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/libavcodec/h2645_sei.c b/libavcodec/h2645_sei.c
index 33390d389d..9ed2fc5596 100644
--- a/libavcodec/h2645_sei.c
+++ b/libavcodec/h2645_sei.c
@@ -42,6 +42,7 @@
 #include "golomb.h"
 #include "h2645_sei.h"
 #include "itut35.h"
+#include "refstruct.h"
 
 #define IS_H264(codec_id) (CONFIG_H264_SEI && CONFIG_HEVC_SEI ? codec_id == AV_CODEC_ID_H264 : CONFIG_H264_SEI)
 #define IS_HEVC(codec_id) (CONFIG_H264_SEI && CONFIG_HEVC_SEI ? codec_id == AV_CODEC_ID_HEVC : CONFIG_HEVC_SEI)
@@ -495,11 +496,10 @@ int ff_h2645_sei_message_decode(H2645SEI *h, enum SEIType type,
 case SEI_TYPE_DISPLAY_ORIENTATION:
 return decode_display_orientation(&h->display_orientation, gb);
 case SEI_TYPE_FILM_GRAIN_CHARACTERISTICS:
-if (!h->film_grain_characteristics) {
-h->film_grain_characteristics = av_mallocz(sizeof(*h->film_grain_characteristics));
-if (!h->film_grain_characteristics)
-return AVERROR(ENOMEM);
-}
+ff_refstruct_unref(&h->film_grain_characteristics);
+h->film_grain_characteristics = ff_refstruct_allocz(sizeof(*h->film_grain_characteristics));
+if (!h->film_grain_characteristics)
+return AVERROR(ENOMEM);
 return decode_film_grain_characteristics(h->film_grain_characteristics, codec_id, gb);
 case SEI_TYPE_FRAME_PACKING_ARRANGEMENT:
 return decode_frame_packing_arrangement(&h->frame_packing, gb, codec_id);
@@ -556,6 +556,9 @@ int ff_h2645_sei_ctx_replace(H2645SEI *dst, const H2645SEI *src)
 }
 dst->aom_film_grain.enable = src->aom_film_grain.enable;
 
+ff_refstruct_replace(&dst->film_grain_characteristics,
+  src->film_grain_characteristics);
+
 return 0;
 }
 
@@ -930,5 +933,5 @@ void ff_h2645_sei_reset(H2645SEI *s)
 
 ff_aom_film_grain_uninit_params(&s->aom_film_grain);
 
-av_freep(&s->film_grain_characteristics);
+ff_refstruct_unref(&s->film_grain_characteristics);
 }
-- 
2.47.0



[FFmpeg-devel] [PATCH] avcodec/ffv1: Support slice coding mode 1 with golomb rice

2024-10-24 Thread Michael Niedermayer
Sponsored-by: Sovereign Tech Fund
Signed-off-by: Michael Niedermayer 
---
 libavcodec/ffv1dec.c  | 20 ++--
 libavcodec/ffv1dec_template.c |  3 +++
 libavcodec/ffv1enc.c  | 23 ---
 3 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/libavcodec/ffv1dec.c b/libavcodec/ffv1dec.c
index ac8cc8a31dc..de5abfe9b41 100644
--- a/libavcodec/ffv1dec.c
+++ b/libavcodec/ffv1dec.c
@@ -120,9 +120,8 @@ static int is_input_end(RangeCoder *c, GetBitContext *gb, int ac)
 static int decode_plane(FFV1Context *f, FFV1SliceContext *sc,
 GetBitContext *gb,
 uint8_t *src, int w, int h, int stride, int plane_index,
- int pixel_stride)
+int pixel_stride, int ac)
 {
-const int ac = f->ac;
 int x, y;
 int16_t *sample[2];
 sample[0] = sc->sample_buffer + 3;
@@ -273,6 +272,7 @@ static int decode_slice(AVCodecContext *c, void *arg)
 AVFrame * const p = f->picture.f;
 const int  si = sc - f->slices;
 GetBitContext gb;
+int ac = f->ac || sc->slice_coding_mode == 1;
 
 if (!(p->flags & AV_FRAME_FLAG_KEY) && f->last_picture.f)
 ff_progress_frame_await(&f->last_picture, si);
@@ -305,7 +305,7 @@ static int decode_slice(AVCodecContext *c, void *arg)
 x  = sc->slice_x;
 y  = sc->slice_y;
 
-if (f->ac == AC_GOLOMB_RICE) {
+if (ac == AC_GOLOMB_RICE) {
 if (f->version == 3 && f->micro_version > 1 || f->version > 3)
 get_rac(&sc->c, (uint8_t[]) { 129 });
 sc->ac_byte_count = f->version > 2 || (!x && !y) ? sc->c.bytestream - sc->c.bytestream_start - 1 : 0;
@@ -320,17 +320,17 @@ static int decode_slice(AVCodecContext *c, void *arg)
 const int chroma_height = AV_CEIL_RSHIFT(height, f->chroma_v_shift);
 const int cx= x >> f->chroma_h_shift;
 const int cy= y >> f->chroma_v_shift;
-decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 1);
+decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 1, ac);
 
 if (f->chroma_planes) {
-decode_plane(f, sc, &gb, p->data[1] + ps*cx+cy*p->linesize[1], chroma_width, chroma_height, p->linesize[1], 1, 1);
-decode_plane(f, sc, &gb, p->data[2] + ps*cx+cy*p->linesize[2], chroma_width, chroma_height, p->linesize[2], 1, 1);
+decode_plane(f, sc, &gb, p->data[1] + ps*cx+cy*p->linesize[1], chroma_width, chroma_height, p->linesize[1], 1, 1, ac);
+decode_plane(f, sc, &gb, p->data[2] + ps*cx+cy*p->linesize[2], chroma_width, chroma_height, p->linesize[2], 1, 1, ac);
 }
 if (f->transparency)
-decode_plane(f, sc, &gb, p->data[3] + ps*x + y*p->linesize[3], width, height, p->linesize[3], (f->version >= 4 && !f->chroma_planes) ? 1 : 2, 1);
+decode_plane(f, sc, &gb, p->data[3] + ps*x + y*p->linesize[3], width, height, p->linesize[3], (f->version >= 4 && !f->chroma_planes) ? 1 : 2, 1, ac);
 } else if (f->colorspace == 0) {
- decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 2);
- decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0] + 1, width, height, p->linesize[0], 1, 2);
+ decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0], width, height, p->linesize[0], 0, 2, ac);
+ decode_plane(f, sc, &gb, p->data[0] + ps*x + y*p->linesize[0] + 1, width, height, p->linesize[0], 1, 2, ac);
 } else if (f->use32bit) {
 uint8_t *planes[4] = { p->data[0] + ps * x + y * p->linesize[0],
p->data[1] + ps * x + y * p->linesize[1],
@@ -344,7 +344,7 @@ static int decode_slice(AVCodecContext *c, void *arg)
p->data[3] + ps * x + y * p->linesize[3] };
 decode_rgb_frame(f, sc, &gb, planes, width, height, p->linesize);
 }
-if (f->ac != AC_GOLOMB_RICE && f->version > 2) {
+if (ac != AC_GOLOMB_RICE && f->version > 2) {
 int v;
 get_rac(&sc->c, (uint8_t[]) { 129 });
 v = sc->c.bytestream_end - sc->c.bytestream - 2 - 5*!!f->ec;
diff --git a/libavcodec/ffv1dec_template.c b/libavcodec/ffv1dec_template.c
index 2da6bd935dc..e983d1ba648 100644
--- a/libavcodec/ffv1dec_template.c
+++ b/libavcodec/ffv1dec_template.c
@@ -143,6 +143,9 @@ static int RENAME(decode_rgb_frame)(FFV1Context *f, FFV1SliceContext *sc,
 int transparency = f->transparency;
 int ac = f->ac;
 
+if (sc->slice_coding_mode == 1)
+ac = 1;
+
 for (x = 0; x < 4; x++) {
 sample[x][0] = RENAME(sc->sample_buffer) +  x * 2  * (w + 6) + 3;
 sample[x][1] = RENAME(sc->sample_buffer) + (x * 2 + 1) * (w + 6) + 3;
diff --git a/libavcodec/ffv1enc.c b/libavcodec/ffv1enc.c
index a32059886c0..547fb598d6d 100644
--- a/libavcodec/ffv1enc.c
+++ b/l

Re: [FFmpeg-devel] [PATCH v12 5/9] libavcodec/dnxucdec: DNxUncompressed decoder

2024-10-24 Thread Anton Khirnov
Quoting Marton Balint (2024-10-23 15:19:27)
> 
> 
> On Tue, 22 Oct 2024, Anton Khirnov wrote:
> 
> > Quoting Marton Balint (2024-10-22 20:35:52)
> >>
> >>
> >> On Tue, 22 Oct 2024, Anton Khirnov wrote:
> >>
> >>> Quoting Martin Schitter (2024-10-21 21:57:18)
>  +static int pass_through(AVCodecContext *avctx, AVFrame *frame, const AVPacket *avpkt)
>  +{
>  +/* there is no need to copy as the data already match
>  + * a known pixel format */
>  +
>  +frame->buf[0] = av_buffer_ref(avpkt->buf);
> >>>
> >>> I said this twice before already - every single format that uses
> >>> pass_through() should instead be exported by the demuxer as
> >>> AV_CODEC_ID_RAWVIDEO, because that's what it is.
> >>
> >> I don't really want the MXF demuxer/muxer to do DNXUC parsing
> >
> > What parsing is there to do? You just compare against the codec tag.
> 
> As far as I know, the codec tag is embedded somewhere inside the various 
> boxes of DNXUC (pack box, optional icmp box, optional fill box, sinf box). 
> So I don't quite see how can you easily get that (or find where the signal 
> data actually starts) without parsing the box sequence.

The decoder in this patch does not see any of it, it only gets raw video
and the codec tag.

Looking at this a bit closer, it seems Martin's intent is to do it in the
parser, which is wrong, as parsers are not supposed to perform
transformations like this.
The relevant code is on the order of 10-20 lines, so I don't really see
a problem with moving that into the demuxer. Either way it should not be
in a parser.

> >> Also I might want to wrap DNXUC essence to another container, or remux
> >> it to MXF again.
> >
> > And where is the problem here?
> 
> If demuxing destroys the original DNXUC essence with its box structure, 
> then you will need something on the muxing side that re-creates this. If a 
> DNXUC packet contained the whole 'pack' box, this would not be needed. 

And what is the problem with that?

This reminds me a bit of the s302m question that was resolved by TC
earlier this year [1]. The point there was that demuxers should try to
export data in the most generally useful format and not just blindly
pass through whatever is stored in the container. Is DNXUC useful
outside of MXF? What do our callers gain by getting the box structure
rather than raw video+codec tag? If it's only useful for remuxing then
it should be handled inside lavf.

> This is also why I don't quite like that the dnxuc_parser right now skips 
> some header bytes and points the packet buffer to the signal data. IMHO 
> the DNXUC packet should really be the 'pack' box itself, and the decoder 
> should parse that.

I agree that the parser is definitely not the place to do this.

> >> So I am not convinced that the current approach is bad.
> >
> > It is bad because it introduces a completely pointless and arbitrary
> > distinction between "rawvideo" and "rawvideo, but EXTRACTED FROM MXF".
> 
> DNXUC is not only raw data, but the whole box structure with raw or 
> compressed data inside.
> The format even allows to store the planes separately.

I am not saying the codec ID should not exist and EVERYTHING should be
handled by the demuxer, just that it should be done when the data
directly maps to our notion of raw video. Which also neatly separates
the cases where the decoder actually does nontrivial work rather than
just set up some pointers, which have radically different performance
characteristics.

[1] https://lists.ffmpeg.org//pipermail/ffmpeg-devel/2024-April/325588.html

-- 
Anton Khirnov


Re: [FFmpeg-devel] profile vs vprofile in 7.1?

2024-10-24 Thread Anton Khirnov
Quoting Anton Khirnov (2024-10-23 09:48:57)
> Hard to tell what the problem is

Just occurred to me you might be hitting an actual bug, reported in [1]
and fixed in 9ce63e65d65b303813d4ae677228226d7cd232b9 in master, and
020d9f2b4886aa620252da4db7a4936378d6eb3a in the 7.1 branch (not yet in a
point release, but should be soon).

[1] https://lists.ffmpeg.org//pipermail/ffmpeg-devel/2024-October/334786.html

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH v3 11/18] swscale: expose SwsContext publicly

2024-10-24 Thread Niklas Haas
On Thu, 24 Oct 2024 11:14:32 +0200 Anton Khirnov  wrote:
> Quoting Niklas Haas (2024-10-20 22:05:20)
> > +/**
> > + * Main external API structure. New fields can be added to the end with
> > + * minor version bumps. Removal, reordering and changes to existing fields
> > + * require a major version bump. sizeof(SwsContext) is not part of the ABI.
> > + */
> > +typedef struct SwsContext {
> > +const AVClass *av_class;
> > +
> > +/**
> > + * Private context used for internal data.
> > + */
> > +struct SwsInternal *internal;
>
> Why is this visible in the public context?

I was following the precedent established by AVCodecContext.

If you prefer, I could instead allocate SwsInternal as a superset of
SwsContext. That would actually make a lot of the code changes a bit cleaner
to implement, and avoids some pointer dereferencing.
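[Editor's note] The "superset" layout proposed here is a common C idiom: the private struct embeds the public one as its first member, so a single allocation serves both views and no stored internal pointer needs to be chased. A rough sketch under that assumption (field names are illustrative, not the real swscale layout):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct SwsContext {
    int flags;                /* public, ABI-stable fields live here */
} SwsContext;

typedef struct SwsInternal {
    SwsContext pub;           /* must be the FIRST member for the cast to be valid */
    int private_state;        /* internal-only fields follow */
} SwsInternal;

static SwsContext *sws_alloc_context_sketch(void)
{
    SwsInternal *s = calloc(1, sizeof(*s));
    return s ? &s->pub : NULL;
}

/* Recover the private view from a public pointer - no extra dereference. */
static SwsInternal *sws_internal(SwsContext *ctx)
{
    return (SwsInternal *)ctx;
}
```

C guarantees that a pointer to a struct may be converted to a pointer to its first member and back, so the cast is well-defined as long as the public struct stays the first member.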


Re: [FFmpeg-devel] [PATCH v3 14/18] swscale/graph: add new high-level scaler dispatch mechanism

2024-10-24 Thread Anton Khirnov
Quoting Niklas Haas (2024-10-20 22:05:23)
> From: Niklas Haas 
> 
> This interface has been designed from the ground up to serve as a new
> framework for dispatching various scaling operations at a high level. This
> will eventually replace the old ad-hoc system of using cascaded contexts,
> as well as allowing us to plug in more dynamic scaling passes requiring
> intermediate steps, such as colorspace conversions, etc.
> 
> The starter implementation merely piggybacks off the existing sws_init() and
> sws_scale() functions, though it does bring the immediate improvement of
> splitting up cascaded functions and pre/post conversion functions into
> separate filter passes, which allows them to e.g. be executed in parallel
> even when the main scaler is required to be single threaded. Additionally,
> a dedicated (multi-threaded) noop memcpy pass substantially improves
> throughput of that fast path.
> 
> Follow-up commits will eventually expand this to move all of the scaling
> decision logic into the graph init function, and also eliminate some of the
> current special cases.

Does this (or can it) support copy-free passthrough of individual
planes, for cases like YUV420P<->NV12?

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH v3 14/18] swscale/graph: add new high-level scaler dispatch mechanism

2024-10-24 Thread Niklas Haas
On Thu, 24 Oct 2024 11:30:12 +0200 Anton Khirnov  wrote:
> Quoting Niklas Haas (2024-10-20 22:05:23)
> > From: Niklas Haas 
> >
> > This interface has been designed from the ground up to serve as a new
> > framework for dispatching various scaling operations at a high level. This
> > will eventually replace the old ad-hoc system of using cascaded contexts,
> > as well as allowing us to plug in more dynamic scaling passes requiring
> > intermediate steps, such as colorspace conversions, etc.
> >
> > The starter implementation merely piggybacks off the existing sws_init() and
> > sws_scale() functions, though it does bring the immediate improvement of
> > splitting up cascaded functions and pre/post conversion functions into
> > separate filter passes, which allows them to e.g. be executed in parallel
> > even when the main scaler is required to be single threaded. Additionally,
> > a dedicated (multi-threaded) noop memcpy pass substantially improves
> > throughput of that fast path.
> >
> > Follow-up commits will eventually expand this to move all of the scaling
> > decision logic into the graph init function, and also eliminate some of the
> > current special cases.
>
> Does this (or can it) support copy-free passthrough of individual
> planes, for cases like YUV420P<->NV12?

Not currently, no. We could switch to AVBufferRefs for the plane pointers to
add this functionality down the line, but it's not a high priority because
doing this first requires solving the much harder problem of rewriting the
underlying scaler dispatch logic.

Doing this would not be terribly difficult either way, but the problem is that
swscale currently does not exactly have a good concept of what's happening
to each plane - it's all a jumble of ad-hoc cases.

One of my plans for SwsGraph is to first make a list of operations to perform
on each plane, and then eliminate redundant passes to figure out what special
cases and/or noop passes can be optimized. But this has to wait a bit, as I'm
first working on the immediate goal of adding support for more complex
colorspaces (by chaining together multiple scaling passes).


Re: [FFmpeg-devel] [PATCH v3 11/18] swscale: expose SwsContext publicly

2024-10-24 Thread Anton Khirnov
Quoting Niklas Haas (2024-10-24 11:28:16)
> On Thu, 24 Oct 2024 11:14:32 +0200 Anton Khirnov  wrote:
> > Quoting Niklas Haas (2024-10-20 22:05:20)
> > > +/**
> > > + * Main external API structure. New fields can be added to the end with
> > > + * minor version bumps. Removal, reordering and changes to existing fields
> > > + * require a major version bump. sizeof(SwsContext) is not part of the ABI.
> > > + */
> > > +typedef struct SwsContext {
> > > +const AVClass *av_class;
> > > +
> > > +/**
> > > + * Private context used for internal data.
> > > + */
> > > +struct SwsInternal *internal;
> >
> > Why is this visible in the public context?
> 
> I was following the precedent established by AVCodecContext.

That's historical and we are gradually moving away from it, especially
in new code. See e.g. fed02825081bd6441f865c9cfcf50e384b2392f5.

> If you prefer, I could instead allocate SwsInternal as a superset of
> SwsContext. That would actually make a lot of the code changes a bit cleaner
> to implement, and avoids some pointer dereferencing.

I think that would be preferable.

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH v3 11/18] swscale: expose SwsContext publicly

2024-10-24 Thread Anton Khirnov
Quoting Niklas Haas (2024-10-20 22:05:20)
> +/**
> + * Main external API structure. New fields can be added to the end with
> + * minor version bumps. Removal, reordering and changes to existing fields
> + * require a major version bump. sizeof(SwsContext) is not part of the ABI.
> + */
> +typedef struct SwsContext {
> +const AVClass *av_class;
> +
> +/**
> + * Private context used for internal data.
> + */
> +struct SwsInternal *internal;

Why is this visible in the public context?

-- 
Anton Khirnov


Re: [FFmpeg-devel] [PATCH v2] doc/ffmpeg: improve -disposition, -stats, and -progress documentation

2024-10-24 Thread Anton Khirnov
Pushed.

Thanks,
-- 
Anton Khirnov