Re: [FFmpeg-devel] [PATCH 1/3] lavc/vp8dsp: R-V V put_bilin_h
Okay, Thanks for clarifying. I have used many fractional multipliers, mostly not for correctness, but often for performance improvements (though I don't know why), and there are no obvious downsides, How about leaving this code? Rémi Denis-Courmont 于2024年2月24日周六 15:39写道: > Hi, > > Le 24 février 2024 03:07:36 GMT+02:00, flow gg a > écrit : > > .ifc \len,4 > >-vsetivlizero, 5, e8, mf2, ta, ma > >+vsetivlizero, 5, e8, m1, ta, ma > > .elseif \len == 8 > > vsetivlizero, 9, e8, m1, ta, ma > > .else > >@@ -112,9 +112,9 @@ endfunc > > vslide1down.vx v2, \dst, t5 > > > > .ifc \len,4 > >-vsetivlizero, 4, e8, mf4, ta, ma > >+vsetivlizero, 4, e8, m1, ta, ma > > .elseif \len == 8 > >-vsetivlizero, 8, e8, mf2, ta, ma > >+vsetivlizero, 8, e8, m1, ta, ma > > > >What are the benefits of not using fractional multipliers here? > > Insofar as E8/MF4 is guaranteed to work for Zve32x, there are no benefits > per se. > > However fractional multipliers were added to the specification to enable > addressing invididual vectors whilst the effective multiplier is larger > than one. This can only happen with mixed widths. Fractions were not > intended to make vector shorter - there is the vector length for that > already. > > That's why "E64/MF2" doesn't work, even though it's the same vector bit > size as "E8/MF2". > > > Making this > >change would result in a 10%-20% slowdown. > > That's kind of odd. This may be caused by the slides, but it's strange to > go out of the way for hardware to optimise a case that's not even intended. > > > mf2/4 m1 > >vp8_put_bilin4_h_rvv_i32: 158.7 193.7 > >vp8_put_bilin4_hv_rvv_i32: 255.7 302.7 > >vp8_put_bilin8_h_rvv_i32: 318.7 358.7 > >vp8_put_bilin8_hv_rvv_i32: 528.7 569.7 > > > >Rémi Denis-Courmont 于2024年2月24日周六 01:18写道: > > > >> Hi, > >> > >> + > >> +.macro bilin_h_load dst len > >> +.ifc \len,4 > >> +vsetivlizero, 5, e8, mf2, ta, ma > >> > >> Don't use fractional multipliers if you don't mix element widths. > >> > >> +.elseif \len == 8 > >> +vsetivlizero, 9, e8, m1, ta, ma > >> +.else > >> +vsetivlizero, 17, e8, m2, ta, ma > >> +.endif > >> + > >> +vle8.v \dst, (a2) > >> +vslide1down.vx v2, \dst, t5 > >> + > >> > >> +.ifc \len,4 > >> +vsetivlizero, 4, e8, mf4, ta, ma > >> > >> Same as above. > >> > >> +.elseif \len == 8 > >> +vsetivlizero, 8, e8, mf2, ta, ma > >> > >> Also. > >> > >> +.else > >> +vsetivlizero, 16, e8, m1, ta, ma > >> +.endif > >> > >> +vwmulu.vx v28, \dst, t1 > >> +vwmaccu.vx v28, a5, v2 > >> +vwaddu.wx v24, v28, t4 > >> +vnsra.wi\dst, v24, 3 > >> +.endm > >> + > >> +.macro put_vp8_bilin_h len > >> +li t1, 8 > >> +li t4, 4 > >> +li t5, 1 > >> +sub t1, t1, a5 > >> +1: > >> +addia4, a4, -1 > >> +bilin_h_loadv0, \len > >> +vse8.v v0, (a0) > >> +add a2, a2, a3 > >> +add a0, a0, a1 > >> +bneza4, 1b > >> + > >> +ret > >> +.endm > >> + > >> +func ff_put_vp8_bilin16_h_rvv, zve32x > >> +put_vp8_bilin_h 16 > >> +endfunc > >> + > >> +func ff_put_vp8_bilin8_h_rvv, zve32x > >> +put_vp8_bilin_h 8 > >> +endfunc > >> + > >> +func ff_put_vp8_bilin4_h_rvv, zve32x > >> +put_vp8_bilin_h 4 > >> +endfunc > >> > >> -- > >> レミ・デニ-クールモン > >> http://www.remlab.net/ > >> > >> > >> > >> ___ > >> ffmpeg-devel mailing list > >> ffmpeg-devel@ffmpeg.org > >> https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > >> > >> To unsubscribe, visit link above, or email > >> ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". 
___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 2/3] avcodec/x86: disable hevc 12b luma deblock
Nuo Mi writes: > On Wed, Feb 21, 2024 at 7:10 PM J. Dekker wrote: > >> Over/underflow in some cases. >> >> Signed-off-by: J. Dekker >> --- >> libavcodec/x86/hevcdsp_init.c | 9 + >> 1 file changed, 5 insertions(+), 4 deletions(-) >> >> diff --git a/libavcodec/x86/hevcdsp_init.c b/libavcodec/x86/hevcdsp_init.c >> index 31e81eb11f..11cb1b3bfd 100644 >> --- a/libavcodec/x86/hevcdsp_init.c >> +++ b/libavcodec/x86/hevcdsp_init.c >> @@ -1205,10 +1205,11 @@ void ff_hevc_dsp_init_x86(HEVCDSPContext *c, const >> int bit_depth) >> if (EXTERNAL_SSE2(cpu_flags)) { >> c->hevc_v_loop_filter_chroma = >> ff_hevc_v_loop_filter_chroma_12_sse2; >> c->hevc_h_loop_filter_chroma = >> ff_hevc_h_loop_filter_chroma_12_sse2; >> -if (ARCH_X86_64) { >> -c->hevc_v_loop_filter_luma = >> ff_hevc_v_loop_filter_luma_12_sse2; >> -c->hevc_h_loop_filter_luma = >> ff_hevc_h_loop_filter_luma_12_sse2; >> -} >> +// FIXME: 12-bit luma deblock over/underflows in some cases >> +// if (ARCH_X86_64) { >> +// c->hevc_v_loop_filter_luma = >> ff_hevc_v_loop_filter_luma_12_sse2; >> +// c->hevc_h_loop_filter_luma = >> ff_hevc_h_loop_filter_luma_12_sse2; >> +// } >> SAO_BAND_INIT(12, sse2); >> SAO_EDGE_INIT(12, sse2); >> > Hi Dekker, > VVC will utilize this function as well. > Could you please share the HEVC clip or data that caused the overflow? > We'll make efforts to address it during the VVC porting > You can just run ./tests/checkasm/checkasm --test=hevc_deblock to find a failing case. My guess is that delta0 overflows before the right shift, see the ARM64 asm which specfically widens this calculation on 12 bit variant but I'm not 100%, I don't know x86 asm. -- jd ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 2/3] avcodec/x86: disable hevc 12b luma deblock
Andreas Rheinhardt writes: > J. Dekker: >> SAO_BAND_INIT(12, sse2); >> SAO_EDGE_INIT(12, sse2); >> > > If you disable them here, you should also ensure that they are not > assembled at all. > > - Andreas Sure, will do on push if no other things to resolve in the latest set. -- jd ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] fftools/ffmpeg: remove options deprecated before 6.0
Quoting Andreas Rheinhardt (2024-02-21 17:59:16) > Anton Khirnov: > > --- > > diff --git a/tests/fate/ffmpeg.mak b/tests/fate/ffmpeg.mak > > index 3f21815ba2..669c878c7f 100644 > > --- a/tests/fate/ffmpeg.mak > > +++ b/tests/fate/ffmpeg.mak > > @@ -1,28 +1,3 @@ > > -FATE_MAPCHAN-$(call FILTERDEMDECENCMUX, PAN, WAV, PCM_S16LE, PCM_S16LE, > > WAV, MD5_PROTOCOL) += fate-mapchan-6ch-extract-2 > > -fate-mapchan-6ch-extract-2: tests/data/asynth-22050-6.wav > > -fate-mapchan-6ch-extract-2: CMD = ffmpeg -i > > $(TARGET_PATH)/tests/data/asynth-22050-6.wav -map_channel 0.0.0 -fflags > > +bitexact -f wav md5: -map_channel 0.0.1 -fflags +bitexact -f wav md5: > > - > > -FATE_MAPCHAN-$(call FILTERDEMDECENCMUX, PAN ARESAMPLE, WAV, PCM_S16LE, > > PCM_S16LE, WAV) += fate-mapchan-6ch-extract-2-downmix-mono > > -fate-mapchan-6ch-extract-2-downmix-mono: tests/data/asynth-22050-6.wav > > -fate-mapchan-6ch-extract-2-downmix-mono: CMD = md5 > > -auto_conversion_filters -i $(TARGET_PATH)/tests/data/asynth-22050-6.wav > > -map_channel 0.0.1 -map_channel 0.0.0 -ac 1 -fflags +bitexact -f wav > > - > > -FATE_MAPCHAN-$(call FILTERDEMDECENCMUX, PAN, WAV, PCM_S16LE, PCM_S16LE, > > WAV) += fate-mapchan-silent-mono > > -fate-mapchan-silent-mono: tests/data/asynth-22050-1.wav > > -fate-mapchan-silent-mono: CMD = md5 -i > > $(TARGET_PATH)/tests/data/asynth-22050-1.wav -map_channel -1 -map_channel > > 0.0.0 -fflags +bitexact -f wav > > - > > -FATE_MAPCHAN-$(call FILTERDEMDECENCMUX, PAN, WAV, PCM_S16LE, PCM_S16LE, > > WAV) += fate-mapchan-2ch-extract-ch0-ch2-trailing > > -fate-mapchan-2ch-extract-ch0-ch2-trailing: tests/data/asynth-44100-2.wav > > -fate-mapchan-2ch-extract-ch0-ch2-trailing: CMD = md5 -i > > $(TARGET_PATH)/tests/data/asynth-44100-2.wav -map_channel 0.0.0 > > -map_channel 0.0.2? -fflags +bitexact -f wav > > - > > -FATE_MAPCHAN-$(call FILTERDEMDECENCMUX, PAN, WAV, PCM_S16LE, PCM_S16LE, > > WAV) += fate-mapchan-3ch-extract-ch0-ch2-trailing > > -fate-mapchan-3ch-extract-ch0-ch2-trailing: tests/data/asynth-44100-3.wav > > -fate-mapchan-3ch-extract-ch0-ch2-trailing: CMD = md5 -i > > $(TARGET_PATH)/tests/data/asynth-44100-3.wav -map_channel 0.0.0 > > -map_channel 0.0.2? -fflags +bitexact -f wav > > - > > -FATE_MAPCHAN = $(FATE_MAPCHAN-yes) > > - > > -FATE_FFMPEG += $(FATE_MAPCHAN) > > -fate-mapchan: $(FATE_MAPCHAN) > > - > > FATE_FFMPEG-$(call FILTERFRAMECRC, COLOR) += fate-ffmpeg-filter_complex > > fate-ffmpeg-filter_complex: CMD = framecrc -filter_complex color=d=1:r=5 > > -fflags +bitexact > > > > Will this reduce coverage in the pan filter? According to gcov, no. -- Anton Khirnov ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 2/3] avcodec/x86: disable hevc 12b luma deblock
On Sat, 24 Feb 2024, J. Dekker wrote: Nuo Mi writes: On Wed, Feb 21, 2024 at 7:10 PM J. Dekker wrote: Over/underflow in some cases. Signed-off-by: J. Dekker --- libavcodec/x86/hevcdsp_init.c | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/libavcodec/x86/hevcdsp_init.c b/libavcodec/x86/hevcdsp_init.c index 31e81eb11f..11cb1b3bfd 100644 --- a/libavcodec/x86/hevcdsp_init.c +++ b/libavcodec/x86/hevcdsp_init.c @@ -1205,10 +1205,11 @@ void ff_hevc_dsp_init_x86(HEVCDSPContext *c, const int bit_depth) if (EXTERNAL_SSE2(cpu_flags)) { c->hevc_v_loop_filter_chroma = ff_hevc_v_loop_filter_chroma_12_sse2; c->hevc_h_loop_filter_chroma = ff_hevc_h_loop_filter_chroma_12_sse2; -if (ARCH_X86_64) { -c->hevc_v_loop_filter_luma = ff_hevc_v_loop_filter_luma_12_sse2; -c->hevc_h_loop_filter_luma = ff_hevc_h_loop_filter_luma_12_sse2; -} +// FIXME: 12-bit luma deblock over/underflows in some cases +// if (ARCH_X86_64) { +// c->hevc_v_loop_filter_luma = ff_hevc_v_loop_filter_luma_12_sse2; +// c->hevc_h_loop_filter_luma = ff_hevc_h_loop_filter_luma_12_sse2; +// } SAO_BAND_INIT(12, sse2); SAO_EDGE_INIT(12, sse2); Hi Dekker, VVC will utilize this function as well. Could you please share the HEVC clip or data that caused the overflow? We'll make efforts to address it during the VVC porting You can just run ./tests/checkasm/checkasm --test=hevc_deblock to find a failing case. To clarify, this is with the new checkasm test added in this patchset, not currently in git master - otherwise fate would be failing for everybody on x86. My guess is that delta0 overflows before the right shift, see the ARM64 asm which specfically widens this calculation on 12 bit variant but I'm not 100%, I don't know x86 asm. Are you sure the input is within valid range? It's always possible that checkasm produces inputs that the real decoder wouldn't - but it's also possible that this is a real decoder bug that just hasn't been triggered by any other test yet. // Martin ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [RFC] clarifying the TC conflict of interest rule
Warm welcomes to the old/new tyrants of FFmpeg. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] Add protocol for Android content providers
On Thu, Feb 15, 2024 at 10:13:03AM +0100, Matthieu Bouron wrote: > Le jeu. 15 févr. 2024, 9:46 AM, Zhao Zhili a > écrit : > > > > > > 在 2024年2月15日,下午3:57,Matthieu Bouron 写道: > > > > > > On Thu, Feb 15, 2024 at 12:13:59PM +0800, Zhao Zhili wrote: > > >> > > >> > > On Feb 14, 2024, at 06:50, Matthieu Bouron > > wrote: > > >>> > > >>> Hi, > > >>> > > >>> On Android, content providers are used for accessing files through > > shared > > >>> mechanisms. One typical case would be an app willing to open a video > > from > > >>> Google Photos, gallery apps, TikTok, Instagram or some other providers. > > >>> A content URI looks something like "content://authority/path/id", see: > > >>> https://developer.android.com/reference/android/content/ContentUris > > >>> > > https://developer.android.com/guide/topics/providers/content-provider-basics > > >>> > > >>> It can currently be somehow managed through clumsy means such as using > > a "fd:" > > >>> filename and crafting a special AVOption, which also has the drawback > > of > > >>> requiring the third party to carry around opened file descriptors > > (with the > > >>> multiple opened file limitations implied). Custom AVIOContexts are > > also an > > >> > > >> File descriptor is a general abstraction layer, it target more > > platforms than > > >> Android specific content provider. Android provided getFd() API since > > API > > >> level 12, I guess that’s the default method to deal with content > > provider in > > >> native code. It’s a few lines of code to get native fd in Java, but > > dozens of code > > >> in C with JNI, which is what this patchset done. > > >> > > >> For multiple opened file limitations issue, they can close the file > > descriptor after > > >> open. It’s unlikely to reach the limit in normal case without leak. > > >> > > >> I’m OK to provide this android_content_protocol helper if user requests. > > > > > > I've been doing this kind of work for 3/4 users (including myself) at > > this > > > point and have to do it another time, this is what motivated me to > > propose > > > this patchset. > > > > > >> > > >>> option. Both options will have to deal with the JNI though and end > > users will > > >>> have to re-implement the same exact thing. > > >> > > >> User still need to deal with JNI with the new android_content_protocol, > > more or > > >> less, it’s unavoidable. > > > > > > The advantage I see of using this protocol is that the user only need to > > > call av_jni_set_jvm() + av_jni_set_android_app_ctx() at the start of the > > > application and FFmpeg will handle the content-uri transparently. This is > > > especially helpful if the Android application rely on multiple libraries > > > that in turn rely on FFmpeg to read medias. > > > > The url still need to be passed from Java to C via JNI, it’s not much > > different compared to pass fd. > > > > It's not that much different I agree. But let's say you have a rendering > engine (in C) where you need to pass hundreds of media (from the user) to > render a scene, each media is used at different time during the rendering. > And Ffmpeg is not a direct dependency and can be called from different > libraries/places used by the rendering engine. Calling > av_jni_set_android_app_ctx() and you're done, you can pass the content URI > to the engine (passing fd at this stage is not an option imho). You still > need to convert the uri from java string to c before calling the c code, > but it's a direct translation which is typically part of a binding. 
> > > > > > > > >> > > >>> > > >>> This patchset addresses this by adding a content provider protocol, > > which has > > >>> an API fairly similar to fopen. Android 11 appears to provide something > > >>> transparent within fopen(), but FFmpeg doesn't use it in the file > > protocol, and > > >>> Android < 11 are still widely used. > > >>> > > >>> The first part move the JNI infrastructure from avcodec to avutil (it > > remains > > >>> internally shared, there is little user implication), > > >> > > >> OK. JNI infrastructure should belong to avutil at the first place, so > > hwcontext_mediacodec > > >> and so on can use it. Unfortunately for those new avpriv_. > > > > > > What do you mean by "Unfortunately" ? Would you like to make the JNI API > > > public ? > > > > I think it’s our target to reduce the number of avpriv API, not increase > > it. Does duplicate the compile unit work in this case so we don’t need to > > export the symbols? > > > > Directly including ffjni.c from libavformat/file.c works. We still need to > pass the application context though (could be added to avcodec/jni.h) So what would be the preferred way forward ? including libavformat/file.c or migrating the code to avutil (avpriv_*) ? [...] ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "u
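For readers following the thread, the pattern under discussion looks roughly like the sketch below: the application hands the JavaVM (and, with this patchset, the Android application context) to FFmpeg once at startup, after which any code built on lavf can open content:// URLs directly. Only av_jni_set_java_vm() is existing public API (libavcodec/jni.h); av_jni_set_android_app_ctx() and its two-argument form are assumptions based on the proposal here, and the content URI and helper names are made up for illustration.

#include <jni.h>
#include <libavcodec/jni.h>
#include <libavformat/avformat.h>

/* Called once at application startup, e.g. from JNI_OnLoad(). */
static int init_ffmpeg_jni(JavaVM *vm, jobject app_ctx)
{
    int ret = av_jni_set_java_vm(vm, NULL);
    if (ret < 0)
        return ret;
    /* Proposed in this patchset; the exact prototype may differ. */
    return av_jni_set_android_app_ctx(app_ctx, NULL);
}

/* Any later caller, possibly in an unrelated library, can then do: */
static int open_content_uri(AVFormatContext **fmt_ctx)
{
    return avformat_open_input(fmt_ctx,
                               "content://media/external/video/media/42",
                               NULL, NULL);
}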
Re: [FFmpeg-devel] [PATCH] avformat/mxfenc: add h264_mp4toannexb bitstream filter if needed when muxing h264
> > > +static int mxf_check_bitstream(AVFormatContext *s, AVStream *st, > > > const AVPacket *pkt) > > > +{ > > > + if (st->codecpar->codec_id == AV_CODEC_ID_H264) { > > > + if (pkt->size >= 5 && AV_RB32(pkt->data) != 0x001 && > > > + AV_RB24(pkt->data) != 0x01) > > > + return ff_stream_add_bitstream_filter(st, > > > "h264_mp4toannexb", NULL); Regardless of the comments below, this is wrong. ST 381-3 says this: > The byte stream format can be constructed from the NAL unit stream by > prefixing each NAL unit with a start > code prefix and zero or more zero-valued bytes to form a stream of > bytes. Note the wording is "zero or more", not "zero or one". The correct way to do this is to inspect byte 14 of the EC UL, per section 8.1 of ST 381-3. > > > I sent the very same patch long ago [1]. Tomas Härdin opposed it > > [2], > > [3], because he sees stuff like this as hack. No, I oppose it because it is potentially against spec. The MXF ecosystem is bad enough as it is without us encouraging out-of-spec behavior. Any behavior we put in to handle out-of-spec behavior should be limited by Identification. But even that would be making our responsibility what is really the responsibility of companies making broken MXF muxers. /Tomas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 3/3] avcodec/cbs_h2645: Avoid function pointer casts, fix UB
On 21/02/2024 23:32, Andreas Rheinhardt wrote: The SEI message read/write functions are called via function pointers where the SEI message-specific context is passed as void*. But the actual function definitions use a pointer to their proper context in place of void*, making the calls undefined behaviour. Clang UBSan 17 warns about this. This commit fixes this by making the functions match the type of the call. This reduced the number of failing FATE tests with UBSan from 164 to 85 here. Signed-off-by: Andreas Rheinhardt --- libavcodec/cbs_h264_syntax_template.c | 24 ++--- libavcodec/cbs_h265_syntax_template.c | 31 +-- libavcodec/cbs_h266_syntax_template.c | 6 +++--- libavcodec/cbs_sei.h | 8 +++ libavcodec/cbs_sei_syntax_template.c | 23 5 files changed, 52 insertions(+), 40 deletions(-) diff --git a/libavcodec/cbs_h264_syntax_template.c b/libavcodec/cbs_h264_syntax_template.c index 0f8bba4a0d..282cd24292 100644 --- a/libavcodec/cbs_h264_syntax_template.c +++ b/libavcodec/cbs_h264_syntax_template.c @@ -511,9 +511,9 @@ static int FUNC(pps)(CodedBitstreamContext *ctx, RWContext *rw, } static int FUNC(sei_buffering_period)(CodedBitstreamContext *ctx, RWContext *rw, - H264RawSEIBufferingPeriod *current, - SEIMessageState *sei) + void *current_, SEIMessageState *sei) { +H264RawSEIBufferingPeriod *current = current_; CodedBitstreamH264Context *h264 = ctx->priv_data; const H264RawSPS *sps; int err, i, length; @@ -605,9 +605,9 @@ static int FUNC(sei_pic_timestamp)(CodedBitstreamContext *ctx, RWContext *rw, } static int FUNC(sei_pic_timing)(CodedBitstreamContext *ctx, RWContext *rw, -H264RawSEIPicTiming *current, -SEIMessageState *sei) +void *current_, SEIMessageState *sei) { +H264RawSEIPicTiming *current = current_; CodedBitstreamH264Context *h264 = ctx->priv_data; const H264RawSPS *sps; int err; @@ -677,9 +677,9 @@ static int FUNC(sei_pic_timing)(CodedBitstreamContext *ctx, RWContext *rw, } static int FUNC(sei_pan_scan_rect)(CodedBitstreamContext *ctx, RWContext *rw, - H264RawSEIPanScanRect *current, - SEIMessageState *sei) + void *current_, SEIMessageState *sei) { +H264RawSEIPanScanRect *current = current_; int err, i; HEADER("Pan-Scan Rectangle"); @@ -704,9 +704,9 @@ static int FUNC(sei_pan_scan_rect)(CodedBitstreamContext *ctx, RWContext *rw, } static int FUNC(sei_recovery_point)(CodedBitstreamContext *ctx, RWContext *rw, -H264RawSEIRecoveryPoint *current, -SEIMessageState *sei) +void *current_, SEIMessageState *sei) { +H264RawSEIRecoveryPoint *current = current_; int err; HEADER("Recovery Point"); @@ -720,9 +720,9 @@ static int FUNC(sei_recovery_point)(CodedBitstreamContext *ctx, RWContext *rw, } static int FUNC(film_grain_characteristics)(CodedBitstreamContext *ctx, RWContext *rw, -H264RawFilmGrainCharacteristics *current, -SEIMessageState *state) +void *current_, SEIMessageState *state) { +H264RawFilmGrainCharacteristics *current = current_; CodedBitstreamH264Context *h264 = ctx->priv_data; const H264RawSPS *sps; int err, c, i, j; @@ -803,9 +803,9 @@ static int FUNC(film_grain_characteristics)(CodedBitstreamContext *ctx, RWContex } static int FUNC(sei_display_orientation)(CodedBitstreamContext *ctx, RWContext *rw, - H264RawSEIDisplayOrientation *current, - SEIMessageState *sei) + void *current_, SEIMessageState *sei) { +H264RawSEIDisplayOrientation *current = current_; int err; HEADER("Display Orientation"); diff --git a/libavcodec/cbs_h265_syntax_template.c b/libavcodec/cbs_h265_syntax_template.c index 2d4b954718..53ae0cabff 100644 --- a/libavcodec/cbs_h265_syntax_template.c +++ 
b/libavcodec/cbs_h265_syntax_template.c @@ -1620,8 +1620,9 @@ static int FUNC(slice_segment_header)(CodedBitstreamContext *ctx, RWContext *rw, static int FUNC(sei_buffering_period) (CodedBitstreamContext *ctx, RWContext *rw, - H265RawSEIBufferingPeriod *current, SEIMessageState *sei) + void *current_, SEIMessageState *sei) { +H265RawSEIBufferingPeriod *current = current_; CodedBitstreamH265Context *h265 = ctx->priv_data; const H265RawSPS *sps;
Re: [FFmpeg-devel] [PATCH 2/3] avutil/opt: Use correct function pointer type
On 21/02/2024 23:32, Andreas Rheinhardt wrote: av_get_sample/pix_fmt() return their respective enums and are therefore not of the type int (*)(const char*), yet they are called as-if they were of this type. This works in practice, but is actually undefined behaviour. With Clang 17 UBSan these violations are flagged, affecting lots of tests. The number of failing tests went down from 3363 to 164 here with this patch. Signed-off-by: Andreas Rheinhardt --- libavutil/opt.c | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/libavutil/opt.c b/libavutil/opt.c index d13b1ab504..0681b19896 100644 --- a/libavutil/opt.c +++ b/libavutil/opt.c @@ -444,16 +444,26 @@ static int set_string_fmt(void *obj, const AVOption *o, const char *val, uint8_t return 0; } +static int get_pix_fmt(const char *name) +{ +return av_get_pix_fmt(name); +} + static int set_string_pixel_fmt(void *obj, const AVOption *o, const char *val, uint8_t *dst) { return set_string_fmt(obj, o, val, dst, - AV_PIX_FMT_NB, av_get_pix_fmt, "pixel format"); + AV_PIX_FMT_NB, get_pix_fmt, "pixel format"); +} + +static int get_sample_fmt(const char *name) +{ +return av_get_sample_fmt(name); } static int set_string_sample_fmt(void *obj, const AVOption *o, const char *val, uint8_t *dst) { return set_string_fmt(obj, o, val, dst, - AV_SAMPLE_FMT_NB, av_get_sample_fmt, "sample format"); + AV_SAMPLE_FMT_NB, get_sample_fmt, "sample format"); } static int set_string_dict(void *obj, const AVOption *o, const char *val, uint8_t **dst) 1 and 2 of this set LGTM. Thanks, - Mark ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
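As background for why this patch (and the cbs_h2645 one) is needed: in C, calling a function through a pointer whose type is not compatible with the function's actual type is undefined behaviour, even when enum and int happen to share a representation. The standalone sketch below (illustrative, not FFmpeg code) shows the wrapper pattern the patch uses so the pointer type matches exactly.

#include <stdio.h>

enum Fmt { FMT_NONE = -1, FMT_A, FMT_B };

static enum Fmt get_fmt(const char *name)      /* returns an enum, not int */
{
    return name && name[0] == 'a' ? FMT_A : FMT_NONE;
}

static int get_fmt_wrapper(const char *name)   /* matches the callback type */
{
    return get_fmt(name);
}

typedef int (*lookup_fn)(const char *);

int main(void)
{
    lookup_fn lookup = get_fmt_wrapper;        /* OK: types match exactly */
    /* lookup_fn bad = (lookup_fn)get_fmt;        would be UB when called */
    printf("%d\n", lookup("a"));
    return 0;
}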
Re: [FFmpeg-devel] [PATCH v2] avcodec/jpeg2000dec: support of 2 fields in 1 AVPacket
ons 2024-02-21 klockan 15:27 +0100 skrev Jerome Martinez: > On 21/02/2024 14:11, Tomas Härdin wrote: > > > mxfdec can detect cases where there will be two separate fields in > > one > > KLV, > > In practice the issue is not to detect I2 case in mxfdec (it saves us > only a little during probing of the first frame, but I could add such > signaling in a patch after this one because it is actually > independent > from the management of 2 fields in the decoder if we want to support > buggy files), the issue is to split per field in mxfdec. > > SMPTE ST 422 extracts: > "Users are cautioned that the code values for SOC and EOC are not > protected and can occur within the image size marker segment (SIZ), > quantization marker segment (QCD), comment marker segment (COM) and > other places." > "Decoders can derive the bytestream offsets of each field by > analysing > the code stream format within the essence element as described in > ISO/IEC 15444-1." > > Note that this MXF + jp2k spec hints that separating fields should be > done in the decoder, not the demuxer. We already have a j2k parser that could be pressed into service for this. But perhaps parsing is not necessary, see below. My main concern is that interlacing work the same for all codecs that mxfdec supports, expecially rawvideo. > It is impossible to split per field in a codec-neutral manner due to > lack of metadata for that in MXF, and doing that in mxfdec implies to > duplicate jp2k header parser code in mxfdec, and to add a similar > parser > per supported video format in the future. We do have a bit of duplicate parsing in mxfdec (and mxfenc) already, which I don't like. So I share your concern. > > and the decoder(s) can if I'm not mistaken be instructed to decode > > into an AVFrame with stride and offset set up for interlaced > > decoding. > > > I checked the MPEG-2 Video decoder and it does not do what you say I didn't say that it does, I said that we could do that. If we conform all wrappings to MIXED_FIELDS this is probably fine. I think it's been well established that FFmpeg is MIXED_FIELDS internally. > , it does what I do with this patch: > - mpegvideo_parser (so the parser of raw files, equivalent to mxfdec) > understands that the stream has 2 separate fields and puts them in 1 > single AVPacket. It could separate them put it doesn't. mpeg2 has a concept of fields that j2k doesn't. Again, we're not really talking about j2k here but MXF > > It should be possible to have ffmpeg set up the necessary plumbing > > for > > this. > > But is it how it works elsewhere in FFmpeg? Would such complex and > deep > modifications be accepted by others? Good question. I would propose something like the following: 1) detect the use of SEPARATE_FIELDS and set a flag in AVStream 2) allocate AVFrame for the size of the resulting *frame* 3a) if the codec is inherently interlaced, call the decoder once 3b) if the codec is not inherently interlaced, call the decoder twice, with appropriate stride, and keep track of the number of bytes decoded so far so we know what offset to start the second decode from The codecs for which 3b) applies include at least: * jpeg2000 * ffv1 * rawvideo * tiff /Tomas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
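To make step 3b) concrete: whether the decoder writes directly into the frame with a doubled stride or the fields are woven together after decoding, the arithmetic is the same, namely each field is laid down with twice the frame stride and the bottom field starts one line down. The helper below is a hedged, codec-neutral sketch of that idea (plain pixel copies, no claim about where in lavc/lavf the plumbing would actually live).

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void weave_field(uint8_t *frame, ptrdiff_t frame_stride,
                        const uint8_t *field, ptrdiff_t field_stride,
                        int field_height, int bytes_per_line, int bottom)
{
    uint8_t *dst = frame + (bottom ? frame_stride : 0);
    for (int y = 0; y < field_height; y++) {
        memcpy(dst, field, bytes_per_line);
        dst   += 2 * frame_stride;   /* skip the lines of the other field */
        field += field_stride;
    }
}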
Re: [FFmpeg-devel] [PATCH 11/11] avcodec/vvcdec: add Intra Block Copy decoder
On Fri, Feb 23, 2024 at 9:03 PM Nuo Mi wrote: > > > On Thu, Feb 22, 2024 at 3:15 PM Nuo Mi wrote: > >> From: Wu Jianhua >> >> Introduction at https://ieeexplore.ieee.org/document/9408666 >> >> passed files: >> 10b444_A_Kwai_3.bit >> 10b444_B_Kwai_3.bit >> CodingToolsSets_D_Tencent_2.bit >> IBC_A_Tencent_2.bit >> IBC_B_Tencent_2.bit >> IBC_C_Tencent_2.bit >> IBC_D_Tencent_2.bit >> IBC_E_Tencent_1.bit >> LOSSLESS_B_HHI_3.bit >> > Will push tomorrow if there are no objections. > pushed. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH] configure: select iamfenc as movenc dep
Unbreaks movenc compilation in minimal configuration. --- configure | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/configure b/configure index 197f762b58..2d0e6a444a 100755 --- a/configure +++ b/configure @@ -3554,7 +3554,7 @@ mlp_demuxer_select="mlp_parser" mmf_muxer_select="riffenc" mov_demuxer_select="iso_media riffdec iamfdec" mov_demuxer_suggest="zlib" -mov_muxer_select="iso_media riffenc rtpenc_chain vp9_superframe_bsf aac_adtstoasc_bsf ac3_parser" +mov_muxer_select="iso_media riffenc rtpenc_chain vp9_superframe_bsf aac_adtstoasc_bsf ac3_parser iamfenc" mp3_demuxer_select="mpegaudio_parser" mp3_muxer_select="mpegaudioheader" mp4_muxer_select="mov_muxer" -- 2.39.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 11/11] avcodec/vvcdec: add Intra Block Copy decoder
Hi, On Thu, Feb 22, 2024 at 2:15 AM Nuo Mi wrote: > +static void ibc_fill_vir_buf(const VVCLocalContext *lc, const CodingUnit > *cu) > [..] > +av_image_copy_plane(ibc_buf, ibc_stride, src, src_stride, > cu->cb_width >> hs << ps , cu->cb_height >> vs); > I'm admittedly not super-familiar with VVC, but I wonder why we need the double buffering here (from ref_pos in pic to ibc_buf, and then back from ibc_buf back to cur block in pic)? In AV1, this is done with just a single copy. Why is this done this way? Ronald ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] doc: Add infra.txt
On Thu, Feb 22, 2024 at 10:05:36PM +0100, Andreas Rheinhardt wrote: > Michael Niedermayer: > > --- > > doc/infra.txt | 99 +++ > > 1 file changed, 99 insertions(+) > > create mode 100644 doc/infra.txt > > > > diff --git a/doc/infra.txt b/doc/infra.txt > > new file mode 100644 > > index 00..5947a2c715 > > --- /dev/null > > +++ b/doc/infra.txt > > @@ -0,0 +1,99 @@ > > +FFmpeg Infrastructure: > > +== > > + > > + > > + > > + > > +Servers: > > + > > + > > + > > +Main Server: > > + > > +Our Main server is hosted at telepoint.bg > > +for more details see: https://www.ffmpeg.org/#thanks_sponsor_0001 > > +Nothing runs on our main server directly, instead several VMs run on it. > > + > > + > > +ffmpeg.org VM: > > +-- > > +Web, mail, and public facing git, also website git > > + > > + > > +fftrac VM: > > +-- > > +trac.ffmpeg.org Issue tracking > > + > > + > > +ffaux VM: > > +- > > +patchwork.ffmpeg.orgPatch tracking > > +vote.ffmpeg.org Condorcet voting > > + > > + > > +fate: > > +- > > +fate.ffmpeg.org FFmpeg automated testing environment > > + > > + > > +All servers currently run ubuntu > > + > > + > > + > > +Cronjobs: > > +~ > > +Part of the docs is in the main ffmpeg repository as texi files, this part > > is build by a cronjob. So is the > > +doxygen stuff as well as the FFmpeg git snapshot. > > +These 3 scripts are under the ffcron user > > + > > + > > + > > +Git: > > + > > +Public facing git is provided by our infra, (https://git.ffmpeg.org/gitweb) > > +main developer ffmpeg git repository for historic reasons is provided by > > (g...@source.ffmpeg.org:ffmpeg) > > +Other developer git repositories are provided via > > g...@git.ffmpeg.org: > > +git mirrors are available on https://github.com/FFmpeg > > +(there are some exceptions where primary repositories are on github or > > elsewhere instead of the mirrors) > > + > > +Github mirrors are redundantly synced by multiple people > > + > > +You need a new git repository related to FFmpeg ? contact root at > > ffmpeg.org > > + > > + > > +Fate: > > +~ > > +fatesamples are provided via rsync. Every FFmpeg developer who has a shell > > account in ffmepg.org > > ffmpeg.org > > > +should be in the samples group and be able to upload samples. > > +See > > https://www.ffmpeg.org/fate.html#Uploading-new-samples-to-the-fate-suite > > + > > + > > + > > +Accounts: > > +~ > > +You need an account for some FFmpeg work? Send mail to root at ffmpeg.org > > + > > + > > + > > +VMs: > > + > > +You need a VM, docker container for FFmpeg? contact root at ffmpeg.org > > +(for docker, CC Andriy) > > + > > + > > + > > +IRC: > > + > > +irc channels are at https://libera.chat/ > > +irc channel archieves are at https://libera.irclog.whitequark.org > > archives > > > + > > + > > + > > +Currently open positions: > > +~ > > +volunteer postfix / mailman expert. > > +You will need to do a mailman2 -> 3 update, old archieve links must > > continue to work > > archive > > > +Some evaluation and improvments of DMARC/DKIM/... handling > > improvements > > > +general long term maintaince of the mailman lists > > maintenance will apply with fixes and without the "open positions" because that seems not to fit in the ffmpeg-git that well. thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Does the universe only have a finite lifespan? No, its going to go on forever, its just that you wont like living in it. 
-- Hiranya Peiri ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] avformat/mxfenc: add h264_mp4toannexb bitstream filter if needed when muxing h264
Tomas Härdin: +static int mxf_check_bitstream(AVFormatContext *s, AVStream *st, const AVPacket *pkt) +{ + if (st->codecpar->codec_id == AV_CODEC_ID_H264) { + if (pkt->size >= 5 && AV_RB32(pkt->data) != 0x001 && + AV_RB24(pkt->data) != 0x01) + return ff_stream_add_bitstream_filter(st, "h264_mp4toannexb", NULL); > > Regardless of the comments below, this is wrong. ST 381-3 says this: > >> The byte stream format can be constructed from the NAL unit stream by >> prefixing each NAL unit with a start >> code prefix and zero or more zero-valued bytes to form a stream of >> bytes. > > Note the wording is "zero or more", not "zero or one". IMO all the code should only look at extradata to decide whether a stream is annex B or ISOBMFF (no extradata->annex B, no ISOBMFF extradata->annex B, else ISOBMFF). But that is a separate issue. (There is a slight possibility of misdetection here: E.g. a 0x00 00 01 at the start of a packet can actually be the start of the length code of an ISOBMFF NALU with length in the range 256-511; on the other hand, it is legal for an annex B packet to start with four or more zero bytes, as you mentioned.) > The correct way to do this is to inspect byte 14 of the EC UL, per > section 8.1 of ST 381-3. This is a patch for the muxer, not the demuxer. There is no byte 14 of the EC UL to inspect; or at least: It is what this muxer writes for it. This muxer always indicates that the output is an annex B (aka AVC byte stream), so it should always convert the input from the user to actually be annex B. >>> I sent the very same patch long ago [1]. Tomas Härdin opposed it >>> [2], >>> [3], because he sees stuff like this as hack. > > No, I oppose it because it is potentially against spec. The MXF > ecosystem is bad enough as it is without us encouraging out-of-spec > behavior. If the user's input is ISOBMFF, then the output will definitely be against spec without a conversion. With a patch like this ISOBMFF data will be converted to annex B. Anyway, FFmpeg aims to support two framings for H.264: Annex B and ISOBMFF. Sending ISOBMFF-framed data to a muxer is therefore not "out-of-spec behavior". It is just supposed to work and the onus is on the muxer/libavformat to convert as necessary. Also note that other missing checks for whether the input is really conforming to the specs should be separate from inserting this BSF. After all, the user could insert the BSF himself and even in this case it would be this muxer's responsibility to ensure that the output is spec-compliant. Inserting the BSF simplifies this task, because it means that the muxer can assume that the input is already annex B (and does not need separate logic for handling ISOBMFF input); that is the only point of inserting it. > Any behavior we put in to handle out-of-spec behavior should be limited > by Identification. But even that would be making our responsibility > what is really the responsibility of companies making broken MXF > muxers. Once again: This is a muxer, we do not parse and identify a file here. For the same reason it makes no sense to complain about other companies' broken MXF muxers. - Andreas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
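To spell out the misdetection risk mentioned above, here is an illustrative sketch (not a proposed check) of why a probe that only looks at the first bytes of a packet cannot always tell the two framings apart, and why deciding from extradata is more robust. An ISOBMFF packet whose first NAL unit has a length of 256..511 starts with 00 00 01 xx and looks like an annex B start code, while an annex B packet is allowed to begin with more than three zero bytes and then fails a strict start-code test.

#include <stdint.h>

static uint32_t rb32(const uint8_t *p)
{
    return (uint32_t)p[0] << 24 | p[1] << 16 | p[2] << 8 | p[3];
}

/* Returns 1 if the packet *looks* like annex B by the naive byte test. */
static int looks_like_annexb(const uint8_t *data, int size)
{
    if (size < 5)
        return 1;                       /* too small to judge, assume annex B */
    if (rb32(data) == 0x00000001)
        return 1;                       /* 4-byte start code */
    if ((rb32(data) >> 8) == 0x000001)
        return 1;                       /* 3-byte start code; also fires on an
                                           ISOBMFF length of 256..511 */
    return 0;                           /* assumed ISOBMFF, but an annex B packet
                                           with >= 4 leading zero bytes lands here */
}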
Re: [FFmpeg-devel] [PATCH 3/3] avcodec/cbs_h2645: Avoid function pointer casts, fix UB
Mark Thompson: > On 21/02/2024 23:32, Andreas Rheinhardt wrote: >> The SEI message read/write functions are called >> via function pointers where the SEI message-specific >> context is passed as void*. But the actual function >> definitions use a pointer to their proper context >> in place of void*, making the calls undefined behaviour. >> Clang UBSan 17 warns about this. >> >> This commit fixes this by making the functions match >> the type of the call. This reduced the number of failing >> FATE tests with UBSan from 164 to 85 here. >> >> Signed-off-by: Andreas Rheinhardt >> --- >> libavcodec/cbs_h264_syntax_template.c | 24 ++--- >> libavcodec/cbs_h265_syntax_template.c | 31 +-- >> libavcodec/cbs_h266_syntax_template.c | 6 +++--- >> libavcodec/cbs_sei.h | 8 +++ >> libavcodec/cbs_sei_syntax_template.c | 23 >> 5 files changed, 52 insertions(+), 40 deletions(-) >> >> diff --git a/libavcodec/cbs_h264_syntax_template.c >> b/libavcodec/cbs_h264_syntax_template.c >> index 0f8bba4a0d..282cd24292 100644 >> --- a/libavcodec/cbs_h264_syntax_template.c >> +++ b/libavcodec/cbs_h264_syntax_template.c >> @@ -511,9 +511,9 @@ static int FUNC(pps)(CodedBitstreamContext *ctx, >> RWContext *rw, >> } >> static int FUNC(sei_buffering_period)(CodedBitstreamContext *ctx, >> RWContext *rw, >> - H264RawSEIBufferingPeriod >> *current, >> - SEIMessageState *sei) >> + void *current_, SEIMessageState >> *sei) >> { >> + H264RawSEIBufferingPeriod *current = current_; >> CodedBitstreamH264Context *h264 = ctx->priv_data; >> const H264RawSPS *sps; >> int err, i, length; >> @@ -605,9 +605,9 @@ static int >> FUNC(sei_pic_timestamp)(CodedBitstreamContext *ctx, RWContext *rw, >> } >> static int FUNC(sei_pic_timing)(CodedBitstreamContext *ctx, >> RWContext *rw, >> - H264RawSEIPicTiming *current, >> - SEIMessageState *sei) >> + void *current_, SEIMessageState *sei) >> { >> + H264RawSEIPicTiming *current = current_; >> CodedBitstreamH264Context *h264 = ctx->priv_data; >> const H264RawSPS *sps; >> int err; >> @@ -677,9 +677,9 @@ static int >> FUNC(sei_pic_timing)(CodedBitstreamContext *ctx, RWContext *rw, >> } >> static int FUNC(sei_pan_scan_rect)(CodedBitstreamContext *ctx, >> RWContext *rw, >> - H264RawSEIPanScanRect *current, >> - SEIMessageState *sei) >> + void *current_, SEIMessageState *sei) >> { >> + H264RawSEIPanScanRect *current = current_; >> int err, i; >> HEADER("Pan-Scan Rectangle"); >> @@ -704,9 +704,9 @@ static int >> FUNC(sei_pan_scan_rect)(CodedBitstreamContext *ctx, RWContext *rw, >> } >> static int FUNC(sei_recovery_point)(CodedBitstreamContext *ctx, >> RWContext *rw, >> - H264RawSEIRecoveryPoint *current, >> - SEIMessageState *sei) >> + void *current_, SEIMessageState >> *sei) >> { >> + H264RawSEIRecoveryPoint *current = current_; >> int err; >> HEADER("Recovery Point"); >> @@ -720,9 +720,9 @@ static int >> FUNC(sei_recovery_point)(CodedBitstreamContext *ctx, RWContext *rw, >> } >> static int FUNC(film_grain_characteristics)(CodedBitstreamContext >> *ctx, RWContext *rw, >> - >> H264RawFilmGrainCharacteristics *current, >> - SEIMessageState *state) >> + void *current_, >> SEIMessageState *state) >> { >> + H264RawFilmGrainCharacteristics *current = current_; >> CodedBitstreamH264Context *h264 = ctx->priv_data; >> const H264RawSPS *sps; >> int err, c, i, j; >> @@ -803,9 +803,9 @@ static int >> FUNC(film_grain_characteristics)(CodedBitstreamContext *ctx, RWContex >> } >> static int FUNC(sei_display_orientation)(CodedBitstreamContext >> *ctx, RWContext *rw, >> - H264RawSEIDisplayOrientation >> *current, >> - 
SEIMessageState *sei) >> + void *current_, >> SEIMessageState *sei) >> { >> + H264RawSEIDisplayOrientation *current = current_; >> int err; >> HEADER("Display Orientation"); >> diff --git a/libavcodec/cbs_h265_syntax_template.c >> b/libavcodec/cbs_h265_syntax_template.c >> index 2d4b954718..53ae0cabff 100644 >> --- a/libavcodec/cbs_h265_syntax_template.c >> +++ b/libavcodec/cbs_h265_syntax_template.c >> @@ -1620,8 +1620,9 @@ static int >> FUNC(slice_segment_header)(CodedBitstreamContext *ctx, RWContext *rw, >> static int F
[FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
Previously to support dynamic reconfigurations of the matrix string (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be recalculated based on the new filter. This however had the side effect of always ignoring user specified rdiv values. Instead float user_rdiv[0] is added to ConvolutionContext which will store the user specified rdiv values. Then the original rdiv array will store either the user_rdiv or the automatically calculated 1/sum. This fixes trac #10294, #10867 --- libavfilter/convolution.h| 3 ++- libavfilter/vf_convolution.c | 9 + 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h index e44bfb5da8..ee7477ef89 100644 --- a/libavfilter/convolution.h +++ b/libavfilter/convolution.h @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { const AVClass *class; char *matrix_str[4]; -float rdiv[4]; +float user_rdiv[4]; float bias[4]; int mode[4]; float scale; float delta; int planes; +float rdiv[4]; int size[4]; int depth; int max; diff --git a/libavfilter/vf_convolution.c b/libavfilter/vf_convolution.c index a00bb2b3c4..96c478d791 100644 --- a/libavfilter/vf_convolution.c +++ b/libavfilter/vf_convolution.c @@ -40,10 +40,10 @@ static const AVOption convolution_options[] = { { "1m", "set matrix for 2nd plane", OFFSET(matrix_str[1]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, { "2m", "set matrix for 3rd plane", OFFSET(matrix_str[2]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, { "3m", "set matrix for 4th plane", OFFSET(matrix_str[3]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, -{ "0rdiv", "set rdiv for 1st plane", OFFSET(rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "1rdiv", "set rdiv for 2nd plane", OFFSET(rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "2rdiv", "set rdiv for 3rd plane", OFFSET(rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "3rdiv", "set rdiv for 4th plane", OFFSET(rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "0rdiv", "set rdiv for 1st plane", OFFSET(user_rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "1rdiv", "set rdiv for 2nd plane", OFFSET(user_rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "2rdiv", "set rdiv for 3rd plane", OFFSET(user_rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "3rdiv", "set rdiv for 4th plane", OFFSET(user_rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "0bias", "set bias for 1st plane", OFFSET(bias[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "1bias", "set bias for 2nd plane", OFFSET(bias[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "2bias", "set bias for 3rd plane", OFFSET(bias[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, @@ -669,6 +669,7 @@ static int param_init(AVFilterContext *ctx) for (i = 0; i < 4; i++) { int *matrix = (int *)s->matrix[i]; char *orig, *p, *arg, *saveptr = NULL; +s->rdiv[i] = s->user_rdiv[i]; float sum = 1.f; p = orig = av_strdup(s->matrix_str[i]); -- 2.43.2 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
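For context, the behaviour the patch is after can be summarized by the small standalone helper below (illustrative only; the real filter computes the coefficient sum while parsing the matrix string in param_init()): the per-plane rdiv used by the filter is the user-supplied 0rdiv/1rdiv/2rdiv/3rdiv option when it is non-zero, and otherwise the automatically calculated 1/sum of the matrix coefficients.

static float effective_rdiv(float user_rdiv, const int *matrix, int n)
{
    float sum = 0.f;
    for (int i = 0; i < n; i++)
        sum += matrix[i];
    if (user_rdiv != 0.f)
        return user_rdiv;                 /* explicit option takes precedence */
    return sum != 0.f ? 1.f / sum : 1.f;  /* fall back to 1/sum of coefficients */
}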
Re: [FFmpeg-devel] [PATCH] doc: Add infra.txt
Hi, On Thu, Feb 22, 2024 at 3:54 PM Michael Niedermayer wrote: > +Fate: +~ > +fatesamples are provided via rsync. Every FFmpeg developer who has a > shell account in ffmepg.org > +should be in the samples group and be able to upload samples. > +See > https://www.ffmpeg.org/fate.html#Uploading-new-samples-to-the-fate-suite > Is there a public list of "who has a shell account on ffmpeg.org" that can be linked to here? The goal here is to help people on IRC who ask "I'd like to add a sample to fate, how do I do that?" in some other way than "ask Michael". (I know the list, but I'd like a public place so everyone can find it, not just me.) It would be useful to also add info on who the roots are. This regularly comes up and having public disclosure means we don't have to argue about it or resort to "ask Michael" :) Thanks for working on this! Ronald ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
Sorry I just realized I messed up my git commit (new to git), I've attached a patch file with that correction. On Sat, Feb 24, 2024 at 10:49 AM Stone Chen wrote: > Previously to support dynamic reconfigurations of the matrix string (e.g. > 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be > recalculated based on the new filter. This however had the side effect of > always ignoring user specified rdiv values. > > Instead float user_rdiv[0] is added to ConvolutionContext which will store > the user specified rdiv values. Then the original rdiv array will store > either the user_rdiv or the automatically calculated 1/sum. > > This fixes trac #10294, #10867 > --- > libavfilter/convolution.h| 3 ++- > libavfilter/vf_convolution.c | 9 + > 2 files changed, 7 insertions(+), 5 deletions(-) > > diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h > index e44bfb5da8..ee7477ef89 100644 > --- a/libavfilter/convolution.h > +++ b/libavfilter/convolution.h > @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { > const AVClass *class; > > char *matrix_str[4]; > -float rdiv[4]; > +float user_rdiv[4]; > float bias[4]; > int mode[4]; > float scale; > float delta; > int planes; > > +float rdiv[4]; > int size[4]; > int depth; > int max; > diff --git a/libavfilter/vf_convolution.c b/libavfilter/vf_convolution.c > index a00bb2b3c4..96c478d791 100644 > --- a/libavfilter/vf_convolution.c > +++ b/libavfilter/vf_convolution.c > @@ -40,10 +40,10 @@ static const AVOption convolution_options[] = { > { "1m", "set matrix for 2nd plane", OFFSET(matrix_str[1]), > AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > { "2m", "set matrix for 3rd plane", OFFSET(matrix_str[2]), > AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > { "3m", "set matrix for 4th plane", OFFSET(matrix_str[3]), > AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > -{ "0rdiv", "set rdiv for 1st plane", OFFSET(rdiv[0]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > -{ "1rdiv", "set rdiv for 2nd plane", OFFSET(rdiv[1]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > -{ "2rdiv", "set rdiv for 3rd plane", OFFSET(rdiv[2]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > -{ "3rdiv", "set rdiv for 4th plane", OFFSET(rdiv[3]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "0rdiv", "set rdiv for 1st plane", OFFSET(user_rdiv[0]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "1rdiv", "set rdiv for 2nd plane", OFFSET(user_rdiv[1]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "2rdiv", "set rdiv for 3rd plane", OFFSET(user_rdiv[2]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "3rdiv", "set rdiv for 4th plane", OFFSET(user_rdiv[3]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > { "0bias", "set bias for 1st plane", OFFSET(bias[0]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > { "1bias", "set bias for 2nd plane", OFFSET(bias[1]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > { "2bias", "set bias for 3rd plane", OFFSET(bias[2]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > @@ -669,6 +669,7 @@ static int param_init(AVFilterContext *ctx) > for (i = 0; i < 4; i++) { > int *matrix = (int *)s->matrix[i]; > char *orig, *p, *arg, *saveptr = NULL; > +s->rdiv[i] = s->user_rdiv[i]; > float sum = 1.f; > > p = orig = av_strdup(s->matrix_str[i]); > -- > 2.43.2 > > From d489e66c7f1ea94ef302759566ee2b22b7895b86 Mon Sep 17 00:00:00 2001 From: Stone Chen Date: Sat, 24 Feb 
2024 11:08:02 -0500 Subject: [PATCH] Add float user_rdiv[4] to allow user options to apply correctly Previously to support dynamic reconfigurations of the matrix string (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be recalculated based on the new filter. This however had the side effect of always ignoring user specified rdiv values. Instead float user_rdiv[0] is added to ConvolutionContext which will store the user specified rdiv values. Then the original rdiv array will store either the user_rdiv or the automatically calculated 1/sum. This fixes trac #10294, #10867 Signed-off-by: Stone Chen --- libavfilter/convolution.h| 3 ++- libavfilter/vf_convolution.c | 10 +- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h index e44bfb5da8..ee7477ef89 100644 --- a/libavfilter/convolution.h +++ b/libavfilter/convolution.h @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { const AVClass *class; char *matrix_str[4]; -float rdiv[4]; +float user_rdiv[4]; float bias[4]; int mode[4]; float scale; float delta; int planes; +float rdiv[4]; int size[4
[FFmpeg-devel] [RFC] fateserver
Hi all, Both fateserver and the fateserver rewrite lack a maintainer. The original: https://fate.ffmpeg.org/ perl code here: https://git.ffmpeg.org/fateserver The rewrite (from Timothy Gu, unmaintained since 2016): https://fatebeta.ffmpeg.org/ https://github.com/TimothyGu/fateserver-node Really only one of these needs to be maintained, but one needs to be. If you know someone who may be interested in this, please tell me. The situation ATM, where when someone reports an issue there is no one responsible and no one taking care of it even for simple issues, is really not good. thx -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Complexity theory is the science of finding the exact solution to an approximation. Benchmarking OTOH is finding an approximation of the exact ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] doc: Add infra.txt
Hi On Sat, Feb 24, 2024 at 10:56:49AM -0500, Ronald S. Bultje wrote: > Hi, > > On Thu, Feb 22, 2024 at 3:54 PM Michael Niedermayer > wrote: > > > +Fate: > > +~ > > +fatesamples are provided via rsync. Every FFmpeg developer who has a > > shell account in ffmepg.org > > +should be in the samples group and be able to upload samples. > > +See > > https://www.ffmpeg.org/fate.html#Uploading-new-samples-to-the-fate-suite > > > > Is there a public list of "who has a shell account on ffmpeg.org" that can > be linked to here? The goal here is to help people on IRC who ask "I'd like > to add a sample to fate, how do I do that?" in some other way than "ask > Michael". (I know the list, but I'd like a public place so everyone can > find it, not just me.) That's a valid reason to have an account. Try rsbultje on ffmpeg.org, your git ssh key should work to authenticate. See /home/* for who has an account and /etc/group for who is in samples. I just now noticed some people who recently got accounts were forgotten to be added to samples. You can now monitor exactly who has accounts and who is in samples. Ask them, make it public, remind me if I forget adding someone to samples, which it seems I have done several times. It's quite easy to forget an "adduser user samples", so if I had to maintain a public list, that list would never be reliable; I would keep forgetting either adding people to it, or adding people and then forgetting to actually open their account :) > > It would be useful to also add info on who the roots are. This regularly > comes up and having public disclosure means we don't have to argue about it > or resort to "ask Michael" :) The roots should be listed in MAINTAINERS. In addition ubitux, tim nicholochson, roberto togni and thresh seem to have access too. I don't know why they are not listed in MAINTAINERS, maybe they removed themselves, or were never added ... thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB If you think the mosad wants you dead since a long time then you are either wrong or dead since a long time. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] doc: Add infra.txt
Hi, On Sat, Feb 24, 2024 at 1:28 PM Michael Niedermayer wrote: > so if i had to maintain a public list, that list would never be reliable > I can continue updating it. The current list is: $ ssh ffmpeg.org cat /etc/group|grep samples|tr ':' '\n'|grep ,|tr ',' ' ' compn cehoyos reimar beastd llogan ubitux durandal nevcairiel daemon404 jamrial martinvignali thilo atomnuker pross elenril rsbultje lynne jdek andriy rathann rtogni reynaldo I volunteer to run that script once a year and update doc/infra.txt accordingly. Ronald ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 2/3] avcodec/x86: disable hevc 12b luma deblock
Martin Storsjö writes: > [...] > > Are you sure the input is within valid range? It's always possible that > checkasm produces inputs that the real decoder wouldn't - but it's also > possible that this is a real decoder bug that just hasn't been triggered by > any > other test yet. > > // Martin The checkasm was written just to trigger all the theoretical edge cases. I know there is a decent range of values which pass the d0 + d3 < beta check and overflow in (9 * (q0 - p0) - 3 * (q1 - p1) + 8) for int16_t. I'm not 100% sure that these values can be output by the decoder, and even if so they're rare. -- jd ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
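A worked example of the suspected overflow: with 12-bit samples in 0..4095 the intermediate 9*(q0-p0) - 3*(q1-p1) + 8 can reach roughly +/-49148, which exceeds the int16_t range of +/-32767 before the filter's right shift, whereas a 32-bit intermediate (as the AArch64 12-bit path uses) is safe. Whether a conformant stream can actually produce such sample combinations is exactly the open question above; the snippet below only shows the arithmetic.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* extreme (possibly unreachable in real streams) 12-bit sample values */
    int p1 = 4095, p0 = 0, q0 = 4095, q1 = 0;
    int32_t wide   = 9 * (q0 - p0) - 3 * (q1 - p1) + 8;  /* 36855 + 12285 + 8 = 49148 */
    int16_t narrow = (int16_t)wide;                      /* wraps on typical targets */
    printf("wide=%d narrow=%d\n", wide, narrow);
    return 0;
}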
[FFmpeg-devel] [PATCH v4 1/2] lavu/hashtable: create generic robin hood hash table
Signed-off-by: Connor Worley --- libavutil/Makefile | 2 + libavutil/hashtable.c | 192 libavutil/hashtable.h | 40 libavutil/tests/hashtable.c | 110 + 4 files changed, 344 insertions(+) create mode 100644 libavutil/hashtable.c create mode 100644 libavutil/hashtable.h create mode 100644 libavutil/tests/hashtable.c diff --git a/libavutil/Makefile b/libavutil/Makefile index e7709b97d0..be75d464fc 100644 --- a/libavutil/Makefile +++ b/libavutil/Makefile @@ -138,6 +138,7 @@ OBJS = adler32.o \ fixed_dsp.o \ frame.o \ hash.o \ + hashtable.o \ hdr_dynamic_metadata.o \ hdr_dynamic_vivid_metadata.o \ hmac.o \ @@ -251,6 +252,7 @@ TESTPROGS = adler32 \ file\ fifo\ hash\ +hashtable \ hmac\ hwdevice\ integer \ diff --git a/libavutil/hashtable.c b/libavutil/hashtable.c new file mode 100644 index 00..155a264665 --- /dev/null +++ b/libavutil/hashtable.c @@ -0,0 +1,192 @@ +/* + * Generic hashtable + * Copyright (C) 2024 Connor Worley + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include +#include + +#include "crc.h" +#include "error.h" +#include "mem.h" +#include "hashtable.h" + +#define ALIGN _Alignof(size_t) + +struct AVHashtableContext { +size_t key_size; +size_t key_size_aligned; +size_t val_size; +size_t val_size_aligned; +size_t entry_size; +size_t max_entries; +size_t utilization; +const AVCRC *crc; +uint8_t *table; +uint8_t *swapbuf; +}; + +#define ENTRY_PSL(entry) (entry) +#define ENTRY_OCC(entry) (ENTRY_PSL(entry) + FFALIGN(sizeof(size_t), ALIGN)) +#define ENTRY_KEY(entry) (ENTRY_OCC(entry) + FFALIGN(sizeof(size_t), ALIGN)) +#define ENTRY_VAL(entry) (ENTRY_KEY(entry) + ctx->key_size_aligned) + +#define KEYS_EQUAL(k1, k2) !memcmp(k1, k2, ctx->key_size) + +int av_hashtable_alloc(struct AVHashtableContext **ctx, size_t key_size, size_t val_size, size_t max_entries) +{ +struct AVHashtableContext *res = av_malloc(sizeof(struct AVHashtableContext)); +if (!res) +return AVERROR(ENOMEM); +res->key_size = key_size; +res->key_size_aligned = FFALIGN(key_size, ALIGN); +res->val_size = val_size; +res->val_size_aligned = FFALIGN(val_size, ALIGN); +res->entry_size = FFALIGN(sizeof(size_t), ALIGN) ++ FFALIGN(sizeof(size_t), ALIGN) ++ res->key_size_aligned ++ res->val_size_aligned; +res->max_entries = max_entries; +res->utilization = 0; +res->crc = av_crc_get_table(AV_CRC_32_IEEE); +if (!res->crc) { +av_hashtable_freep(&res); +return AVERROR_BUG; +} +res->table = av_calloc(res->max_entries, res->entry_size); +if (!res->table) { +av_hashtable_freep(&res); +return AVERROR(ENOMEM); +} +res->swapbuf = av_calloc(2, res->key_size_aligned + res->val_size_aligned); +if (!res->swapbuf) { +av_hashtable_freep(&res); +return AVERROR(ENOMEM); +} +*ctx = res; +return 0; +} + +static size_t hash_key(const struct AVHashtableContext *ctx, const void *key) +{ +return av_crc(ctx->crc, 0, key, ctx->key_size) % 
ctx->max_entries; +} + +int av_hashtable_get(const struct AVHashtableContext *ctx, const void *key, void
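For reference, a minimal usage sketch of the new API, pieced together from the calls that appear in this patch set. Only av_hashtable_alloc() is quoted verbatim above; the behaviour of the other calls (in particular av_hashtable_get() returning nonzero on a hit and copying the value out) is inferred from the dxvenc patch that follows, so treat this as an approximation rather than documentation:

    #include <stdint.h>
    #include "libavutil/hashtable.h"

    static int hashtable_demo(void)
    {
        struct AVHashtableContext *ht;
        uint32_t key = 42, val = 7, out;
        int ret;

        /* fixed-size keys/values, capacity chosen up front */
        ret = av_hashtable_alloc(&ht, sizeof(key), sizeof(val), 1024);
        if (ret < 0)
            return ret;

        av_hashtable_set(ht, &key, &val);        /* insert or update */
        if (av_hashtable_get(ht, &key, &out)) {  /* appears to return nonzero on a hit */
            /* out now holds 7 */
        }
        av_hashtable_clear(ht);                  /* drop all entries, keep the allocation */
        av_hashtable_freep(&ht);                 /* free and NULL the context pointer */
        return 0;
    }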
[FFmpeg-devel] [PATCH v4 2/2] lavc/dxvenc: migrate DXT1 encoder to lavu hashtable
Offers a modest performance gain due to the switch from naive linear probling to robin hood. Signed-off-by: Connor Worley --- libavcodec/dxvenc.c | 121 1 file changed, 33 insertions(+), 88 deletions(-) diff --git a/libavcodec/dxvenc.c b/libavcodec/dxvenc.c index 1ce2b1d014..980269657d 100644 --- a/libavcodec/dxvenc.c +++ b/libavcodec/dxvenc.c @@ -21,7 +21,7 @@ #include -#include "libavutil/crc.h" +#include "libavutil/hashtable.h" #include "libavutil/imgutils.h" #include "libavutil/opt.h" @@ -38,72 +38,9 @@ * appeared in the decompressed stream. Using a simple hash table (HT) * significantly speeds up the lookback process while encoding. */ -#define LOOKBACK_HT_ELEMS 0x4 +#define LOOKBACK_HT_ELEMS 0x20202 #define LOOKBACK_WORDS0x20202 -typedef struct HTEntry { -uint32_t key; -uint32_t pos; -} HTEntry; - -static void ht_init(HTEntry *ht) -{ -for (size_t i = 0; i < LOOKBACK_HT_ELEMS; i++) { -ht[i].pos = -1; -} -} - -static uint32_t ht_lookup_and_upsert(HTEntry *ht, const AVCRC *hash_ctx, -uint32_t key, uint32_t pos) -{ -uint32_t ret = -1; -size_t hash = av_crc(hash_ctx, 0, (uint8_t*)&key, 4) % LOOKBACK_HT_ELEMS; -for (size_t i = hash; i < hash + LOOKBACK_HT_ELEMS; i++) { -size_t wrapped_index = i % LOOKBACK_HT_ELEMS; -HTEntry *entry = &ht[wrapped_index]; -if (entry->key == key || entry->pos == -1) { -ret = entry->pos; -entry->key = key; -entry->pos = pos; -break; -} -} -return ret; -} - -static void ht_delete(HTEntry *ht, const AVCRC *hash_ctx, - uint32_t key, uint32_t pos) -{ -HTEntry *removed_entry = NULL; -size_t removed_hash; -size_t hash = av_crc(hash_ctx, 0, (uint8_t*)&key, 4) % LOOKBACK_HT_ELEMS; - -for (size_t i = hash; i < hash + LOOKBACK_HT_ELEMS; i++) { -size_t wrapped_index = i % LOOKBACK_HT_ELEMS; -HTEntry *entry = &ht[wrapped_index]; -if (entry->pos == -1) -return; -if (removed_entry) { -size_t candidate_hash = av_crc(hash_ctx, 0, (uint8_t*)&entry->key, 4) % LOOKBACK_HT_ELEMS; -if ((wrapped_index > removed_hash && (candidate_hash <= removed_hash || candidate_hash > wrapped_index)) || -(wrapped_index < removed_hash && (candidate_hash <= removed_hash && candidate_hash > wrapped_index))) { -*removed_entry = *entry; -entry->pos = -1; -removed_entry = entry; -removed_hash = wrapped_index; -} -} else if (entry->key == key) { -if (entry->pos <= pos) { -entry->pos = -1; -removed_entry = entry; -removed_hash = wrapped_index; -} else { -return; -} -} -} -} - typedef struct DXVEncContext { AVClass *class; @@ -120,10 +57,8 @@ typedef struct DXVEncContext { DXVTextureFormat tex_fmt; int (*compress_tex)(AVCodecContext *avctx); -const AVCRC *crc_ctx; - -HTEntry color_lookback_ht[LOOKBACK_HT_ELEMS]; -HTEntry lut_lookback_ht[LOOKBACK_HT_ELEMS]; +struct AVHashtableContext *color_ht; +struct AVHashtableContext *lut_ht; } DXVEncContext; /* Converts an index offset value to a 2-bit opcode and pushes it to a stream. 
@@ -158,27 +93,32 @@ static int dxv_compress_dxt1(AVCodecContext *avctx) DXVEncContext *ctx = avctx->priv_data; PutByteContext *pbc = &ctx->pbc; uint32_t *value; -uint32_t color, lut, idx, color_idx, lut_idx, prev_pos, state = 16, pos = 2, op = 0; +uint32_t color, lut, idx, color_idx, lut_idx, prev_pos, state = 16, pos = 0, op = 0; -ht_init(ctx->color_lookback_ht); -ht_init(ctx->lut_lookback_ht); +av_hashtable_clear(ctx->color_ht); +av_hashtable_clear(ctx->lut_ht); bytestream2_put_le32(pbc, AV_RL32(ctx->tex_data)); +av_hashtable_set(ctx->color_ht, ctx->tex_data, &pos); +pos++; bytestream2_put_le32(pbc, AV_RL32(ctx->tex_data + 4)); - -ht_lookup_and_upsert(ctx->color_lookback_ht, ctx->crc_ctx, AV_RL32(ctx->tex_data), 0); -ht_lookup_and_upsert(ctx->lut_lookback_ht, ctx->crc_ctx, AV_RL32(ctx->tex_data + 4), 1); +av_hashtable_set(ctx->lut_ht, ctx->tex_data + 4, &pos); +pos++; while (pos + 2 <= ctx->tex_size / 4) { idx = 0; +color_idx = 0; +lut_idx = 0; color = AV_RL32(ctx->tex_data + pos * 4); -prev_pos = ht_lookup_and_upsert(ctx->color_lookback_ht, ctx->crc_ctx, color, pos); -color_idx = prev_pos != -1 ? pos - prev_pos : 0; +if (av_hashtable_get(ctx->color_ht, &color, &prev_pos)) +color_idx = pos - prev_pos; +av_hashtable_set(ctx->color_ht, &color, &pos); + if (pos >= LOOKBACK_WORDS) { uint32_t old_pos = pos - LOOKBACK_WORDS; -
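For readers who only know the old lookback code: robin hood hashing is still open addressing, but on insertion an entry that has already probed far from its home slot displaces one that sits close to home, which keeps the worst-case probe length (and therefore lookup cost) low. A schematic sketch of the insertion rule only, not the lavu implementation; it assumes the table is not full and the key is not already present:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct Slot { int occupied; size_t psl; uint32_t key, val; } Slot;

    static void rh_insert(Slot *slot, size_t nb_slots, uint32_t key, uint32_t val)
    {
        size_t i   = key % nb_slots;   /* stand-in for the CRC32 hash used in hashtable.c */
        size_t psl = 0;                /* probe-sequence length: distance from the home slot */
        Slot cur = { 1, 0, key, val };

        for (;;) {
            if (!slot[i].occupied) {   /* empty slot: place the entry here */
                cur.psl = psl;
                slot[i] = cur;
                return;
            }
            if (slot[i].psl < psl) {   /* incumbent is "richer" (closer to home): */
                Slot tmp = slot[i];    /* steal its slot and keep probing on its behalf */
                cur.psl = psl;
                slot[i] = cur;
                cur = tmp;
                psl = tmp.psl;
            }
            i = (i + 1) % nb_slots;
            psl++;
        }
    }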
Re: [FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
On Sat, 24 Feb 2024, Stone Chen wrote: Previously to support dynamic reconfigurations of the matrix string (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be recalculated based on the new filter. This however had the side effect of always ignoring user specified rdiv values. Instead float user_rdiv[0] is added to ConvolutionContext which will store the user specified rdiv values. Then the original rdiv array will store either the user_rdiv or the automatically calculated 1/sum. This fixes trac #10294, #10867 Have you tested? Thanks, Marton --- libavfilter/convolution.h| 3 ++- libavfilter/vf_convolution.c | 9 + 2 files changed, 7 insertions(+), 5 deletions(-) diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h index e44bfb5da8..ee7477ef89 100644 --- a/libavfilter/convolution.h +++ b/libavfilter/convolution.h @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { const AVClass *class; char *matrix_str[4]; -float rdiv[4]; +float user_rdiv[4]; float bias[4]; int mode[4]; float scale; float delta; int planes; +float rdiv[4]; int size[4]; int depth; int max; diff --git a/libavfilter/vf_convolution.c b/libavfilter/vf_convolution.c index a00bb2b3c4..96c478d791 100644 --- a/libavfilter/vf_convolution.c +++ b/libavfilter/vf_convolution.c @@ -40,10 +40,10 @@ static const AVOption convolution_options[] = { { "1m", "set matrix for 2nd plane", OFFSET(matrix_str[1]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, { "2m", "set matrix for 3rd plane", OFFSET(matrix_str[2]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, { "3m", "set matrix for 4th plane", OFFSET(matrix_str[3]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, -{ "0rdiv", "set rdiv for 1st plane", OFFSET(rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "1rdiv", "set rdiv for 2nd plane", OFFSET(rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "2rdiv", "set rdiv for 3rd plane", OFFSET(rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "3rdiv", "set rdiv for 4th plane", OFFSET(rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "0rdiv", "set rdiv for 1st plane", OFFSET(user_rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "1rdiv", "set rdiv for 2nd plane", OFFSET(user_rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "2rdiv", "set rdiv for 3rd plane", OFFSET(user_rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "3rdiv", "set rdiv for 4th plane", OFFSET(user_rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "0bias", "set bias for 1st plane", OFFSET(bias[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "1bias", "set bias for 2nd plane", OFFSET(bias[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "2bias", "set bias for 3rd plane", OFFSET(bias[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, @@ -669,6 +669,7 @@ static int param_init(AVFilterContext *ctx) for (i = 0; i < 4; i++) { int *matrix = (int *)s->matrix[i]; char *orig, *p, *arg, *saveptr = NULL; +s->rdiv[i] = s->user_rdiv[i]; float sum = 1.f; p = orig = av_strdup(s->matrix_str[i]); -- 2.43.2 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". 
Re: [FFmpeg-devel] [PATCH] avformat/libsrt: use SRT_EPOLL_IN for waiting for an incoming connection
On Tue, 20 Feb 2024, Marton Balint wrote: This is the proper poll mode for waiting for an incoming connection according to the SRT API docs. Fixes ticket #9142. Will apply. Regards, Marton Signed-off-by: Marton Balint --- libavformat/libsrt.c | 17 ++--- 1 file changed, 10 insertions(+), 7 deletions(-) diff --git a/libavformat/libsrt.c b/libavformat/libsrt.c index 56acb6e741..0edea9266d 100644 --- a/libavformat/libsrt.c +++ b/libavformat/libsrt.c @@ -250,7 +250,7 @@ static int libsrt_listen(int eid, int fd, const struct sockaddr *addr, socklen_t if (srt_listen(fd, 1)) return libsrt_neterrno(h); -ret = libsrt_network_wait_fd_timeout(h, eid, 1, timeout, &h->interrupt_callback); +ret = libsrt_network_wait_fd_timeout(h, eid, 0, timeout, &h->interrupt_callback); if (ret < 0) return ret; @@ -391,7 +391,7 @@ static int libsrt_setup(URLContext *h, const char *uri, int flags) char hostname[1024],proto[1024],path[1024]; char portstr[10]; int64_t open_timeout = 0; -int eid, write_eid; +int eid; av_url_split(proto, sizeof(proto), NULL, 0, hostname, sizeof(hostname), &port, path, sizeof(path), uri); @@ -455,18 +455,21 @@ static int libsrt_setup(URLContext *h, const char *uri, int flags) if (libsrt_socket_nonblock(fd, 1) < 0) av_log(h, AV_LOG_DEBUG, "libsrt_socket_nonblock failed\n"); -ret = write_eid = libsrt_epoll_create(h, fd, 1); -if (ret < 0) -goto fail1; if (s->mode == SRT_MODE_LISTENER) { +int read_eid = ret = libsrt_epoll_create(h, fd, 0); +if (ret < 0) +goto fail1; // multi-client -ret = libsrt_listen(write_eid, fd, cur_ai->ai_addr, cur_ai->ai_addrlen, h, s->listen_timeout); -srt_epoll_release(write_eid); +ret = libsrt_listen(read_eid, fd, cur_ai->ai_addr, cur_ai->ai_addrlen, h, s->listen_timeout); +srt_epoll_release(read_eid); if (ret < 0) goto fail1; srt_close(fd); fd = ret; } else { +int write_eid = ret = libsrt_epoll_create(h, fd, 1); +if (ret < 0) +goto fail1; if (s->mode == SRT_MODE_RENDEZVOUS) { if (srt_bind(fd, cur_ai->ai_addr, cur_ai->ai_addrlen)) { ret = libsrt_neterrno(h); -- 2.35.3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
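For context, the shape of the accept path with a read-poll, written against the public libsrt API rather than the lavf helpers. This is a sketch only: listen_fd is assumed to be an already bound, non-blocking SRT socket, timeout_ms is a placeholder, and error handling is omitted.

    #include <srt/srt.h>

    static SRTSOCKET accept_with_read_poll(SRTSOCKET listen_fd, int64_t timeout_ms)
    {
        SRTSOCKET ready[1], accepted = SRT_INVALID_SOCK;
        int rnum   = 1;
        int events = SRT_EPOLL_IN | SRT_EPOLL_ERR;  /* "readable" == connection pending */
        int eid    = srt_epoll_create();

        srt_epoll_add_usock(eid, listen_fd, &events);
        srt_listen(listen_fd, 1);
        if (srt_epoll_wait(eid, ready, &rnum, NULL, NULL, timeout_ms,
                           NULL, NULL, NULL, NULL) > 0)
            accepted = srt_accept(listen_fd, NULL, NULL);
        srt_epoll_release(eid);
        return accepted;
    }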
Re: [FFmpeg-devel] [PATCH] fate/mxf: fix mxf-probe-j2k on big endian systems
On Fri, 23 Feb 2024, Tomas Härdin wrote: tor 2024-02-22 klockan 00:34 +0100 skrev Marton Balint: Jpeg2000 decoder is decoding in native endian, so let's use the same workaround as in fate-mxf-probe-applehdr10. Fixes ticket #10868. OK I suppose Ok, thanks, will apply. Regards, Marton ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
On Sat, Feb 24, 2024 at 3:56 PM Marton Balint wrote: > > > On Sat, 24 Feb 2024, Stone Chen wrote: > > > Previously to support dynamic reconfigurations of the matrix string > (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv > to be recalculated based on the new filter. This however had the side > effect of always ignoring user specified rdiv values. > > > > Instead float user_rdiv[0] is added to ConvolutionContext which will > store the user specified rdiv values. Then the original rdiv array will > store either the user_rdiv or the automatically calculated 1/sum. > > > > This fixes trac #10294, #10867 > > Have you tested? > > Thanks, > Marton > Hi Marton, Yes I've tested that - the original behavior works (automatically calculate rdiv if user_rdiv = 0) - setting the rdiv value works - both work via sendcmd - make fate runs with no errors Cheers, Stone > > --- > > libavfilter/convolution.h| 3 ++- > > libavfilter/vf_convolution.c | 9 + > > 2 files changed, 7 insertions(+), 5 deletions(-) > > > > diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h > > index e44bfb5da8..ee7477ef89 100644 > > --- a/libavfilter/convolution.h > > +++ b/libavfilter/convolution.h > > @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { > > const AVClass *class; > > > > char *matrix_str[4]; > > -float rdiv[4]; > > +float user_rdiv[4]; > > float bias[4]; > > int mode[4]; > > float scale; > > float delta; > > int planes; > > > > +float rdiv[4]; > > int size[4]; > > int depth; > > int max; > > diff --git a/libavfilter/vf_convolution.c b/libavfilter/vf_convolution.c > > index a00bb2b3c4..96c478d791 100644 > > --- a/libavfilter/vf_convolution.c > > +++ b/libavfilter/vf_convolution.c > > @@ -40,10 +40,10 @@ static const AVOption convolution_options[] = { > > { "1m", "set matrix for 2nd plane", OFFSET(matrix_str[1]), > AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > > { "2m", "set matrix for 3rd plane", OFFSET(matrix_str[2]), > AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > > { "3m", "set matrix for 4th plane", OFFSET(matrix_str[3]), > AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > > -{ "0rdiv", "set rdiv for 1st plane", OFFSET(rdiv[0]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > -{ "1rdiv", "set rdiv for 2nd plane", OFFSET(rdiv[1]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > -{ "2rdiv", "set rdiv for 3rd plane", OFFSET(rdiv[2]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > -{ "3rdiv", "set rdiv for 4th plane", OFFSET(rdiv[3]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > +{ "0rdiv", "set rdiv for 1st plane", OFFSET(user_rdiv[0]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > +{ "1rdiv", "set rdiv for 2nd plane", OFFSET(user_rdiv[1]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > +{ "2rdiv", "set rdiv for 3rd plane", OFFSET(user_rdiv[2]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > +{ "3rdiv", "set rdiv for 4th plane", OFFSET(user_rdiv[3]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > { "0bias", "set bias for 1st plane", OFFSET(bias[0]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > { "1bias", "set bias for 2nd plane", OFFSET(bias[1]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > { "2bias", "set bias for 3rd plane", OFFSET(bias[2]), > AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > > @@ -669,6 +669,7 @@ static int param_init(AVFilterContext *ctx) > > for (i = 0; i < 4; i++) { > > 
int *matrix = (int *)s->matrix[i]; > > char *orig, *p, *arg, *saveptr = NULL; > > +s->rdiv[i] = s->user_rdiv[i]; > > float sum = 1.f; > > > > p = orig = av_strdup(s->matrix_str[i]); > > -- > > 2.43.2 > > > > ___ > > ffmpeg-devel mailing list > > ffmpeg-devel@ffmpeg.org > > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > > > To unsubscribe, visit link above, or email > > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". > > > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
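For anyone else wanting to reproduce the test, something along these lines exercises both paths; filenames, times and matrices are placeholders, and the sendcmd quoting may need adjusting:

    # user-specified rdiv should be honoured instead of being reset to 1/sum
    ffmpeg -i in.mp4 -vf "convolution=0m='1 1 1 1 1 1 1 1 1':0rdiv=0.05" out.mp4

    # dynamic matrix update via sendcmd; the user rdiv should survive the reinit
    ffmpeg -i in.mp4 -vf "sendcmd=f=cmds.txt,convolution=0m='1 1 1 1 1 1 1 1 1':0rdiv=0.05" out.mp4

with cmds.txt containing a line such as:

    5.0 convolution 0m '0 0 0 0 1 0 0 0 0';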
Re: [FFmpeg-devel] [PATCH 14/38] lavu/opt: factor per-type dispatch out of av_opt_set()
On 2/23/2024 7:50 PM, Michael Niedermayer wrote: Hi Anton On Fri, Feb 23, 2024 at 02:58:36PM +0100, Anton Khirnov wrote: Will be useful in following commits. --- breaks: ./ffmpeg -y -request_channel_layout 3 -i bug/401/mlp_5point1_downmixof6channel.mlp -bitexact file-2-mlp_5point1_downmixof6channel.wav fwiw, request_channel_layout will be removed in the upcoming bump. You should use -downmix stereo. [mlp @ 0x55690e23ff80] Error setting option request_channel_layout to value 3. [mlp @ 0x55690e23ed00] Failed to open codec in avformat_find_stream_info [mlp @ 0x55690e23ff80] Error setting option request_channel_layout to value 3. Input #0, mlp, from 'bug/401/mlp_5point1_downmixof6channel.mlp': Duration: N/A, start: 0.00, bitrate: N/A Stream #0:0: Audio: mlp, 48000 Hz, 5.1, s32 (24 bit) [mlp @ 0x55690e257900] Error setting option request_channel_layout to value 3. [aist#0:0/mlp @ 0x55690e254dc0] [dec:mlp @ 0x55690e256f40] Error while opening decoder: Invalid argument [aost#0:0/pcm_s16le @ 0x55690e255a80] Error initializing a simple filtergraph Error opening output file file-2-mlp_5point1_downmixof6channel.wav. Error opening output files: Invalid argument i suspect this isnt specific to the file but i can provide it if it doesnt reproduce with a other file thx [...] ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH] fftools/ffmpeg_mux: Fix use of uninitialized variable
Broken in a2fc86378a18b2c2966ce3438df8f27f646438e5. Signed-off-by: Andreas Rheinhardt --- fftools/ffmpeg_mux.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c index 2c01aec665..335b8b42ca 100644 --- a/fftools/ffmpeg_mux.c +++ b/fftools/ffmpeg_mux.c @@ -506,14 +506,14 @@ int print_sdp(const char *filename); int print_sdp(const char *filename) { char sdp[16384]; -int j, ret; +int j = 0, ret; AVIOContext *sdp_pb; AVFormatContext **avc; avc = av_malloc_array(nb_output_files, sizeof(*avc)); if (!avc) return AVERROR(ENOMEM); -for (int i = 0, j = 0; i < nb_output_files; i++) { +for (int i = 0; i < nb_output_files; i++) { if (!strcmp(output_files[i]->format->name, "rtp")) { avc[j] = mux_from_of(output_files[i])->fc; j++; -- 2.40.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
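For readers wondering where the uninitialized read came from: in the old code the outer j was never assigned, because "for (int i = 0, j = 0; ...)" declares a second j scoped to the loop body, shadowing the outer one, and only that inner j is incremented. A standalone illustration of the pitfall (not ffmpeg code, deliberately buggy):

    #include <stdio.h>

    int main(void)
    {
        int j;                           /* outer j, never assigned */
        for (int i = 0, j = 0; i < 4; i++)
            j++;                         /* increments the loop-local j only */
        printf("%d\n", j);               /* reads the outer j: undefined behaviour */
        return 0;
    }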
Re: [FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
On Sat, 24 Feb 2024, Stone Chen wrote: On Sat, Feb 24, 2024 at 3:56 PM Marton Balint wrote: On Sat, 24 Feb 2024, Stone Chen wrote: > Previously to support dynamic reconfigurations of the matrix string (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be recalculated based on the new filter. This however had the side effect of always ignoring user specified rdiv values. > > Instead float user_rdiv[0] is added to ConvolutionContext which will store the user specified rdiv values. Then the original rdiv array will store either the user_rdiv or the automatically calculated 1/sum. > > This fixes trac #10294, #10867 Have you tested? Thanks, Marton Hi Marton, Yes I've tested that - the original behavior works (automatically calculate rdiv if user_rdiv = 0) - setting the rdiv value works It does not work for me even after applying your patch. Regards, Marton - both work via sendcmd - make fate runs with no errors Cheers, Stone > --- > libavfilter/convolution.h| 3 ++- > libavfilter/vf_convolution.c | 9 + > 2 files changed, 7 insertions(+), 5 deletions(-) > > diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h > index e44bfb5da8..ee7477ef89 100644 > --- a/libavfilter/convolution.h > +++ b/libavfilter/convolution.h > @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { > const AVClass *class; > > char *matrix_str[4]; > -float rdiv[4]; > +float user_rdiv[4]; > float bias[4]; > int mode[4]; > float scale; > float delta; > int planes; > > +float rdiv[4]; > int size[4]; > int depth; > int max; > diff --git a/libavfilter/vf_convolution.c b/libavfilter/vf_convolution.c > index a00bb2b3c4..96c478d791 100644 > --- a/libavfilter/vf_convolution.c > +++ b/libavfilter/vf_convolution.c > @@ -40,10 +40,10 @@ static const AVOption convolution_options[] = { > { "1m", "set matrix for 2nd plane", OFFSET(matrix_str[1]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > { "2m", "set matrix for 3rd plane", OFFSET(matrix_str[2]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > { "3m", "set matrix for 4th plane", OFFSET(matrix_str[3]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, > -{ "0rdiv", "set rdiv for 1st plane", OFFSET(rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > -{ "1rdiv", "set rdiv for 2nd plane", OFFSET(rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > -{ "2rdiv", "set rdiv for 3rd plane", OFFSET(rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > -{ "3rdiv", "set rdiv for 4th plane", OFFSET(rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "0rdiv", "set rdiv for 1st plane", OFFSET(user_rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "1rdiv", "set rdiv for 2nd plane", OFFSET(user_rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "2rdiv", "set rdiv for 3rd plane", OFFSET(user_rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > +{ "3rdiv", "set rdiv for 4th plane", OFFSET(user_rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > { "0bias", "set bias for 1st plane", OFFSET(bias[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > { "1bias", "set bias for 2nd plane", OFFSET(bias[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > { "2bias", "set bias for 3rd plane", OFFSET(bias[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, > @@ -669,6 +669,7 @@ static int param_init(AVFilterContext *ctx) > for (i = 0; i < 4; i++) { > int *matrix = (int *)s->matrix[i]; > char *orig, 
*p, *arg, *saveptr = NULL; > +s->rdiv[i] = s->user_rdiv[i]; > float sum = 1.f; > > p = orig = av_strdup(s->matrix_str[i]); > -- > 2.43.2 > > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH] avfilter/avfilter: Suppress warning for variable only used in av_assert1
Forgotten in e7f9edb4698e94135aab24c302226734713548f0. Signed-off-by: Andreas Rheinhardt --- libavfilter/avfilter.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/libavfilter/avfilter.c b/libavfilter/avfilter.c index 3aa20085ed..daa7c3672a 100644 --- a/libavfilter/avfilter.c +++ b/libavfilter/avfilter.c @@ -1563,7 +1563,7 @@ FF_ENABLE_DEPRECATION_WARNINGS void ff_inlink_request_frame(AVFilterLink *link) { -FilterLinkInternal *li = ff_link_internal(link); +av_unused FilterLinkInternal *li = ff_link_internal(link); av_assert1(!li->status_in); av_assert1(!li->status_out); link->frame_wanted_out = 1; -- 2.40.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
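For context: av_assert1() only evaluates its argument when FFmpeg is built with a sufficiently high ASSERT_LEVEL; in a default build it expands to a no-op, so a variable that exists solely to feed it is genuinely unused and the compiler warns. av_unused marks it as intentionally so. A generic sketch of the pattern; MyCtx and my_ctx_state are made-up names, not FFmpeg API:

    #include "libavutil/attributes.h"   /* av_unused */
    #include "libavutil/avassert.h"     /* av_assert1 */

    typedef struct MyCtx { int state; } MyCtx;               /* made up for illustration */
    static int my_ctx_state(const MyCtx *c) { return c->state; }

    static void check_state(const MyCtx *ctx)
    {
        av_unused int state = my_ctx_state(ctx);   /* only read by the assert below */
        av_assert1(state == 0);                    /* expands to nothing unless ASSERT_LEVEL >= 1 */
    }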
Re: [FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
On Sat, Feb 24, 2024 at 6:34 PM Marton Balint wrote: > > > On Sat, 24 Feb 2024, Stone Chen wrote: > > > On Sat, Feb 24, 2024 at 3:56 PM Marton Balint wrote: > > > >> > >> > >> On Sat, 24 Feb 2024, Stone Chen wrote: > >> > >> > Previously to support dynamic reconfigurations of the matrix string > >> (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the > rdiv > >> to be recalculated based on the new filter. This however had the side > >> effect of always ignoring user specified rdiv values. > >> > > >> > Instead float user_rdiv[0] is added to ConvolutionContext which will > >> store the user specified rdiv values. Then the original rdiv array will > >> store either the user_rdiv or the automatically calculated 1/sum. > >> > > >> > This fixes trac #10294, #10867 > >> > >> Have you tested? > >> > >> Thanks, > >> Marton > >> > > > > > > Hi Marton, > > > > Yes I've tested that > > > > - the original behavior works (automatically calculate rdiv if > user_rdiv > > = 0) > > - setting the rdiv value works > > It does not work for me even after applying your patch. > > Regards, > Marton > Hi Marton, Sorry about that, was it with the attached patch in the second email? I had botched my initial patch, I've re-attached the second to this email. If it was, I'll need to investigate further then, would you be able to send me what you did to test? Regards, Stone From d489e66c7f1ea94ef302759566ee2b22b7895b86 Mon Sep 17 00:00:00 2001 From: Stone Chen Date: Sat, 24 Feb 2024 11:08:02 -0500 Subject: [PATCH] Add float user_rdiv[4] to allow user options to apply correctly Previously to support dynamic reconfigurations of the matrix string (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be recalculated based on the new filter. This however had the side effect of always ignoring user specified rdiv values. Instead float user_rdiv[0] is added to ConvolutionContext which will store the user specified rdiv values. Then the original rdiv array will store either the user_rdiv or the automatically calculated 1/sum. 
This fixes trac #10294, #10867 Signed-off-by: Stone Chen --- libavfilter/convolution.h| 3 ++- libavfilter/vf_convolution.c | 10 +- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/libavfilter/convolution.h b/libavfilter/convolution.h index e44bfb5da8..ee7477ef89 100644 --- a/libavfilter/convolution.h +++ b/libavfilter/convolution.h @@ -34,13 +34,14 @@ typedef struct ConvolutionContext { const AVClass *class; char *matrix_str[4]; -float rdiv[4]; +float user_rdiv[4]; float bias[4]; int mode[4]; float scale; float delta; int planes; +float rdiv[4]; int size[4]; int depth; int max; diff --git a/libavfilter/vf_convolution.c b/libavfilter/vf_convolution.c index bf67f392f6..88b89289a9 100644 --- a/libavfilter/vf_convolution.c +++ b/libavfilter/vf_convolution.c @@ -40,10 +40,10 @@ static const AVOption convolution_options[] = { { "1m", "set matrix for 2nd plane", OFFSET(matrix_str[1]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, { "2m", "set matrix for 3rd plane", OFFSET(matrix_str[2]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, { "3m", "set matrix for 4th plane", OFFSET(matrix_str[3]), AV_OPT_TYPE_STRING, {.str="0 0 0 0 1 0 0 0 0"}, 0, 0, FLAGS }, -{ "0rdiv", "set rdiv for 1st plane", OFFSET(rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "1rdiv", "set rdiv for 2nd plane", OFFSET(rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "2rdiv", "set rdiv for 3rd plane", OFFSET(rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, -{ "3rdiv", "set rdiv for 4th plane", OFFSET(rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "0rdiv", "set rdiv for 1st plane", OFFSET(user_rdiv[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "1rdiv", "set rdiv for 2nd plane", OFFSET(user_rdiv[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "2rdiv", "set rdiv for 3rd plane", OFFSET(user_rdiv[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, +{ "3rdiv", "set rdiv for 4th plane", OFFSET(user_rdiv[3]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "0bias", "set bias for 1st plane", OFFSET(bias[0]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "1bias", "set bias for 2nd plane", OFFSET(bias[1]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, { "2bias", "set bias for 3rd plane", OFFSET(bias[2]), AV_OPT_TYPE_FLOAT, {.dbl=0.0}, 0.0, INT_MAX, FLAGS}, @@ -674,7 +674,7 @@ static int param_init(AVFilterContext *ctx) p = orig = av_strdup(s->matrix_str[i]); if (p) { s->matrix_length[i] = 0; -s->rdiv[i] = 0.f; +s->rdiv[i] = s->user_rdiv[i]; sum = 0.f; while (s->matrix_length[i] < 49) { -- 2.
Re: [FFmpeg-devel] [PATCH] Set native order for wav channel layouts up until 8 channels.
On Fri, Feb 23, 2024 at 02:41:06PM -0600, Romain Beauxis wrote: > The new default channel layout for the various RIFF/WAV decoders is not > backward compatible. > > Historically, most decoders will expect the channel layouts to follow > the native layout up-to a reasonable number of channels. > > Additionally, non-native layouts are causing troubles with filters > chaining. > > This PR changes the default channel layout reported by RIFF/WAV decoders > to default to the native layout when the number of channels is up-to 8. > > The logic for these changes is the same as the logic for the vorbis/opus > decoders. > > Romain breaks fate make -j32 fate-flcl1905 TESTflcl1905 --- ./tests/ref/fate/flcl1905 2024-02-09 03:32:32.540199565 +0100 +++ tests/data/fate/flcl19052024-02-25 02:26:51.079111678 +0100 @@ -1,192 +1,192 @@ packet|codec_type=audio|stream_index=0|pts=0|pts_time=0.00|dts=0|dts_time=0.00|duration=22528|duration_time=0.510839|size=4092|pos=56|flags=K__ -frame|media_type=audio|stream_index=0|key_frame=1|pts=N/A|pts_time=N/A|pkt_dts=N/A|pkt_dts_time=N/A|best_effort_timestamp=N/A|best_effort_timestamp_time=N/A|pkt_duration=22528|pkt_duration_time=0.510839|duration=22528|duration_time=0.510839|pkt_pos=56|pkt_size=4092|sample_fmt=fltp|nb_samples=2048|channels=2|channel_layout=unknown -frame|media_type=audio|stream_index=0|key_frame=1|pts=N/A|pts_time=N/A|pkt_dts=N/A|pkt_dts_time=N/A|best_effort_timestamp=N/A|best_effort_timestamp_time=N/A|pkt_duration=22528|pkt_duration_time=0.510839|duration=22528|duration_time=0.510839|pkt_pos=56|pkt_size=4092|sample_fmt=fltp|nb_samples=2048|channels=2|channel_layout=unknown -frame|media_type=audio|stream_index=0|key_frame=1|pts=N/A|pts_time=N/A|pkt_dts=N/A|pkt_dts_time=N/A|best_effort_timestamp=N/A|best_effort_timestamp_time=N/A|pkt_duration=22528|pkt_duration_time=0.510839|duration=22528|duration_time=0.510839|pkt_pos=56|pkt_size=4092|sample_fmt=fltp|nb_samples=2048|channels=2|channel_layout=unknown ... [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The real ebay dictionary, page 1 "Used only once"- "Some unspecified defect prevented a second use" "In good condition" - "Can be repaird by experienced expert" "As is" - "You wouldnt want it even if you were payed for it, if you knew ..." signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH] avcodec/cbs_h2645: Avoid function pointer casts, fix UB
The SEI message read/write functions are called via function pointers where the SEI message-specific context is passed as void*. But the actual function definitions use a pointer to their proper context in place of void*, making the calls undefined behaviour. Clang UBSan 17 warns about this. This commit fixes this by adding wrapper functions (created via macros) that have the right type that call the actual functions. This reduced the number of failing FATE tests with UBSan from 164 to 85 here. Signed-off-by: Andreas Rheinhardt --- libavcodec/cbs_h2645.c| 15 +++ libavcodec/cbs_h264_syntax_template.c | 35 libavcodec/cbs_h265_syntax_template.c | 58 +-- libavcodec/cbs_h266_syntax_template.c | 8 ++-- libavcodec/cbs_sei.h | 7 libavcodec/cbs_sei_syntax_template.c | 47 +++--- 6 files changed, 88 insertions(+), 82 deletions(-) diff --git a/libavcodec/cbs_h2645.c b/libavcodec/cbs_h2645.c index 2fb249bcd3..8e4af7b2cc 100644 --- a/libavcodec/cbs_h2645.c +++ b/libavcodec/cbs_h2645.c @@ -235,6 +235,16 @@ static int cbs_h265_payload_extension_present(GetBitContext *gbc, uint32_t paylo #define FUNC_H266(name) FUNC_NAME1(READWRITE, h266, name) #define FUNC_SEI(name) FUNC_NAME1(READWRITE, sei, name) +#define SEI_FUNC(name, args) \ +static int FUNC(name) args; \ +static int FUNC(name ## _internal)(CodedBitstreamContext *ctx, \ + RWContext *rw, void *cur, \ + SEIMessageState *state) \ +{ \ +return FUNC(name)(ctx, rw, cur, state); \ +} \ +static int FUNC(name) args + #define SUBSCRIPTS(subs, ...) (subs > 0 ? ((int[subs + 1]){ subs, __VA_ARGS__ }) : NULL) #define u(width, name, range_min, range_max) \ @@ -2070,6 +2080,11 @@ const CodedBitstreamType ff_cbs_type_h266 = { .close = &cbs_h266_close, }; +// Macro for the read/write pair. +#define SEI_MESSAGE_RW(codec, name) \ +.read = cbs_ ## codec ## _read_ ## name ## _internal, \ +.write = cbs_ ## codec ## _write_ ## name ## _internal + static const SEIMessageTypeDescriptor cbs_sei_common_types[] = { { SEI_TYPE_FILLER_PAYLOAD, diff --git a/libavcodec/cbs_h264_syntax_template.c b/libavcodec/cbs_h264_syntax_template.c index 0f8bba4a0d..4d2d303722 100644 --- a/libavcodec/cbs_h264_syntax_template.c +++ b/libavcodec/cbs_h264_syntax_template.c @@ -510,9 +510,9 @@ static int FUNC(pps)(CodedBitstreamContext *ctx, RWContext *rw, return 0; } -static int FUNC(sei_buffering_period)(CodedBitstreamContext *ctx, RWContext *rw, - H264RawSEIBufferingPeriod *current, - SEIMessageState *sei) +SEI_FUNC(sei_buffering_period, (CodedBitstreamContext *ctx, RWContext *rw, +H264RawSEIBufferingPeriod *current, +SEIMessageState *sei)) { CodedBitstreamH264Context *h264 = ctx->priv_data; const H264RawSPS *sps; @@ -604,9 +604,8 @@ static int FUNC(sei_pic_timestamp)(CodedBitstreamContext *ctx, RWContext *rw, return 0; } -static int FUNC(sei_pic_timing)(CodedBitstreamContext *ctx, RWContext *rw, -H264RawSEIPicTiming *current, -SEIMessageState *sei) +SEI_FUNC(sei_pic_timing, (CodedBitstreamContext *ctx, RWContext *rw, + H264RawSEIPicTiming *current, SEIMessageState *sei)) { CodedBitstreamH264Context *h264 = ctx->priv_data; const H264RawSPS *sps; @@ -676,9 +675,9 @@ static int FUNC(sei_pic_timing)(CodedBitstreamContext *ctx, RWContext *rw, return 0; } -static int FUNC(sei_pan_scan_rect)(CodedBitstreamContext *ctx, RWContext *rw, - H264RawSEIPanScanRect *current, - SEIMessageState *sei) +SEI_FUNC(sei_pan_scan_rect, (CodedBitstreamContext *ctx, RWContext *rw, + H264RawSEIPanScanRect *current, + SEIMessageState *sei)) { int err, i; @@ -703,9 +702,9 @@ static int 
FUNC(sei_pan_scan_rect)(CodedBitstreamContext *ctx, RWContext *rw, return 0; } -static int FUNC(sei_recovery_point)(CodedBitstreamContext *ctx, RWContext *rw, -H264RawSEIRecoveryPoint *current, -SEIMessageState *sei) +SEI_FUNC(sei_recovery_point, (CodedBitstreamContext *ctx, RWContext *rw, + H264RawSEIRecoveryPoint *current, + SEIMessageState *sei)) { int err; @@ -719,9 +718,9 @@ static int FUNC(sei_recovery_point)(CodedBitstreamContext *ctx, RWContext *rw, return 0; } -static int FUNC(film_grain_characteristics)(CodedBitstreamContext *ctx, RWContext *rw, -H264RawFilmGrainCharacteristics *current, -
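For readers who have not hit this class of bug before, the problem and the shape of the fix in miniature. This is a generic illustration, not the cbs code; FooSEI and read_foo are made-up names:

    typedef struct FooSEI { int x; } FooSEI;

    typedef int (*read_fn)(void *ctx, void *msg);   /* the pointer type stored in the table */

    static int read_foo(void *ctx, FooSEI *msg)     /* the concrete reader */
    {
        (void)ctx;
        return msg->x;
    }

    /* BAD: storing (read_fn)read_foo and calling through it is undefined
     * behaviour, because the call goes through a pointer whose type does not
     * match the function's actual type. */

    /* GOOD: a thin wrapper whose type matches the pointer exactly, which is
     * what the SEI_FUNC()/SEI_MESSAGE_RW() macros in the patch generate: */
    static int read_foo_wrapper(void *ctx, void *msg)
    {
        return read_foo(ctx, msg);   /* void* converts implicitly to FooSEI* in C */
    }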
[FFmpeg-devel] [PATCH] Add support d3d11va Intel Hevc Rext decoder.
Signed-off-by: Aleksoid --- libavcodec/d3d12va_hevc.c | 2 +- libavcodec/dxva2.c| 68 +-- libavcodec/dxva2_hevc.c | 41 ++--- libavcodec/dxva2_internal.h | 38 +++- libavcodec/hevcdec.c | 16 + libavutil/hwcontext_d3d11va.c | 26 +++--- 6 files changed, 178 insertions(+), 13 deletions(-) diff --git a/libavcodec/d3d12va_hevc.c b/libavcodec/d3d12va_hevc.c index a4964a05c6..0912e01b7d 100644 --- a/libavcodec/d3d12va_hevc.c +++ b/libavcodec/d3d12va_hevc.c @@ -62,7 +62,7 @@ static int d3d12va_hevc_start_frame(AVCodecContext *avctx, av_unused const uint8 ctx->used_mask = 0; -ff_dxva2_hevc_fill_picture_parameters(avctx, (AVDXVAContext *)ctx, &ctx_pic->pp); +ff_dxva2_hevc_fill_picture_parameters(avctx, (AVDXVAContext *)ctx, (DXVA_PicParams_HEVC_Rext*)&ctx_pic->pp); ff_dxva2_hevc_fill_scaling_lists(avctx, (AVDXVAContext *)ctx, &ctx_pic->qm); diff --git a/libavcodec/dxva2.c b/libavcodec/dxva2.c index 59025633f7..a611989911 100644 --- a/libavcodec/dxva2.c +++ b/libavcodec/dxva2.c @@ -50,6 +50,13 @@ DEFINE_GUID(ff_DXVA2_NoEncrypt, 0x1b81beD0, 0xa0c7,0x11d3,0xb9,0x84,0x0 DEFINE_GUID(ff_GUID_NULL,0x, 0x,0x,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00); DEFINE_GUID(ff_IID_IDirectXVideoDecoderService, 0xfc51a551,0xd5e7,0x11d9,0xaf,0x55,0x00,0x05,0x4e,0x43,0xff,0x02); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main12_Intel, 0x8FF8A3AA, 0xC456, 0x4132, 0xB6, 0xEF, 0x69, 0xD9, 0xDD, 0x72, 0x57, 0x1D); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main422_10_Intel, 0xE484DCB8, 0xCAC9, 0x4859, 0x99, 0xF5, 0x5C, 0x0D, 0x45, 0x06, 0x90, 0x89); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main422_12_Intel, 0xC23DD857, 0x874B, 0x423C, 0xB6, 0xE0, 0x82, 0xCE, 0xAA, 0x9B, 0x11, 0x8A); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main444_Intel,0x41A5AF96, 0xE415, 0x4B0C, 0x9D, 0x03, 0x90, 0x78, 0x58, 0xE2, 0x3E, 0x78); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main444_10_Intel, 0x6A6A81BA, 0x912A, 0x485D, 0xB5, 0x7F, 0xCC, 0xD2, 0xD3, 0x7B, 0x8D, 0x94); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main444_12_Intel, 0x5B08E35D, 0x0C66, 0x4C51, 0xA6, 0xF1, 0x89, 0xD0, 0x0C, 0xB2, 0xC1, 0x97); + typedef struct dxva_mode { const GUID *guid; enum AVCodecID codec; @@ -75,6 +82,8 @@ static const int prof_vp9_profile2[] = {AV_PROFILE_VP9_2, AV_PROFILE_UNKNOWN}; static const int prof_av1_profile0[] = {AV_PROFILE_AV1_MAIN, AV_PROFILE_UNKNOWN}; +static const int prof_hevc_rext[]= {AV_PROFILE_HEVC_REXT, +AV_PROFILE_UNKNOWN}; static const dxva_mode dxva_modes[] = { /* MPEG-2 */ @@ -104,6 +113,14 @@ static const dxva_mode dxva_modes[] = { /* AV1 */ { &ff_DXVA2_ModeAV1_VLD_Profile0, AV_CODEC_ID_AV1, prof_av1_profile0 }, +/* HEVC/H.265 Rext */ +{ &ff_DXVA2_HEVC_VLD_Main12_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main422_10_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main422_12_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main444_Intel,AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main444_10_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main444_12_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, + { NULL, 0 }, }; @@ -301,6 +318,14 @@ static int dxva_get_decoder_guid(AVCodecContext *avctx, void *service, void *sur if (IsEqualGUID(decoder_guid, &ff_DXVADDI_Intel_ModeH264_E)) sctx->workaround |= FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO; +av_log(avctx, AV_LOG_VERBOSE, + "Used guid : {%8.8x-%4.4x-%4.4x-%2.2x%2.2x-%2.2x%2.2x%2.2x%2.2x%2.2x%2.2x}\n", + (unsigned)decoder_guid->Data1, decoder_guid->Data2, decoder_guid->Data3, + decoder_guid->Data4[0], decoder_guid->Data4[1], + decoder_guid->Data4[2], decoder_guid->Data4[3], + 
decoder_guid->Data4[4], decoder_guid->Data4[5], + decoder_guid->Data4[6], decoder_guid->Data4[7]); + return 0; } @@ -458,6 +483,13 @@ static DXGI_FORMAT d3d11va_map_sw_to_hw_format(enum AVPixelFormat pix_fmt) case AV_PIX_FMT_NV12: return DXGI_FORMAT_NV12; case AV_PIX_FMT_P010: return DXGI_FORMAT_P010; case AV_PIX_FMT_YUV420P:return DXGI_FORMAT_420_OPAQUE; +case AV_PIX_FMT_P016: return DXGI_FORMAT_P016; +case AV_PIX_FMT_YUYV422:return DXGI_FORMAT_YUY2; +case AV_PIX_FMT_Y210: return DXGI_FORMAT_Y210; +case AV_PIX_FMT_Y212: return DXGI_FORMAT_Y216; +case AV_PIX_FMT_VUYX: return DXGI_FORMAT_AYUV; +case AV_PIX_FMT_XV30: return DXGI_FORMAT_Y410; +case AV_PIX_FMT_XV36: return DXGI_FORMAT_Y416; default:return DXGI_FORMAT_UNKNOWN; } } @@ -589,6 +621,39 @@ static void ff_dxva2_unlock(A
[FFmpeg-devel] [PATCH] mov: parse track-based udta name tags
Signed-off-by: Aleksoid --- libavformat/mov.c | 15 +-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/libavformat/mov.c b/libavformat/mov.c index 1a1b104615..c7b6919433 100644 --- a/libavformat/mov.c +++ b/libavformat/mov.c @@ -301,6 +301,16 @@ static int mov_metadata_hmmt(MOVContext *c, AVIOContext *pb, unsigned len) return 0; } + +static void mov_set_metadata(MOVContext *c, const char *key, const char *str) +{ +if (c->trak_index >= 0) { +AVStream *st = c->fc->streams[c->fc->nb_streams-1]; +av_dict_set(&st->metadata, key, str, 0); +} else +av_dict_set(&c->fc->metadata, key, str, 0); +} + static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom) { char tmp_key[AV_FOURCC_MAX_STRING_SIZE] = {0}; @@ -403,6 +413,7 @@ static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom) case MKTAG(0xa9,'w','r','n'): key = "warning"; break; case MKTAG(0xa9,'w','r','t'): key = "composer"; break; case MKTAG(0xa9,'x','y','z'): key = "location"; break; +case MKTAG( 'n','a','m','e'): key = "title"; break; } retry: if (c->itunes_metadata && atom.size > 8) { @@ -530,10 +541,10 @@ retry: str[str_size] = 0; } c->fc->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED; -av_dict_set(&c->fc->metadata, key, str, 0); +mov_set_metadata(c, key, str); if (*language && strcmp(language, "und")) { snprintf(key2, sizeof(key2), "%s-%s", key, language); -av_dict_set(&c->fc->metadata, key2, str, 0); +mov_set_metadata(c, key2, str); } if (!strcmp(key, "encoder")) { int major, minor, micro; -- 2.43.0.windows.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH] avcodec/h2645_sei: validate Mastering Display Colour Volume SEI values
As we can read in ST 2086: Values outside the specified ranges of luminance and chromaticity values are not reserved by SMPTE, and can be used for purposes outside the scope of this standard. This is further acknowledged by ITU-T H.264 and ITU-T H.265. Which says that values out of range are unknown or unspecified or specified by other means not specified in this Specification. Signed-off-by: Kacper Michajłow --- libavcodec/h2645_sei.c | 53 +- 1 file changed, 37 insertions(+), 16 deletions(-) diff --git a/libavcodec/h2645_sei.c b/libavcodec/h2645_sei.c index cb6be0594b..f3ac8004a9 100644 --- a/libavcodec/h2645_sei.c +++ b/libavcodec/h2645_sei.c @@ -715,38 +715,59 @@ int ff_h2645_sei_to_frame(AVFrame *frame, H2645SEI *sei, if (!metadata) return AVERROR(ENOMEM); +metadata->has_luminance = 1; +metadata->has_primaries = 1; + for (i = 0; i < 3; i++) { const int j = mapping[i]; metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.display_primaries[j][0] >= 5 && + sei->mastering_display.display_primaries[j][0] <= 37000; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.display_primaries[j][1] >= 5 && + sei->mastering_display.display_primaries[j][1] <= 42000; } metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.white_point[0] >= 5 && + sei->mastering_display.white_point[0] <= 37000; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.white_point[0] >= 5 && + sei->mastering_display.white_point[0] <= 42000; metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; +metadata->has_luminance &= sei->mastering_display.max_luminance >= 5 && + sei->mastering_display.max_luminance <= 1; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; -metadata->has_luminance = 1; -metadata->has_primaries = 1; - -av_log(avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); -av_log(avctx, AV_LOG_DEBUG, - "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", - av_q2d(metadata->display_primaries[0][0]), - av_q2d(metadata->display_primaries[0][1]), - av_q2d(metadata->display_primaries[1][0]), - av_q2d(metadata->display_primaries[1][1]), - av_q2d(metadata->display_primaries[2][0]), - av_q2d(metadata->display_primaries[2][1]), - av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); -av_log(avctx, AV_LOG_DEBUG, - "min_luminance=%f, max_luminance=%f\n", - av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); +metadata->has_luminance &= sei->mastering_display.min_luminance >= 1 && + sei->mastering_display.min_luminance <= 5; + +if (metadata->has_luminance || metadata->has_primaries) +av_log(avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); +if (metadata->has_primaries) { +av_log(avctx, AV_LOG_DEBUG, + "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", + av_q2d(metadata->display_primaries[0][0]), + av_q2d(metadata->display_primaries[0][1]), + av_q2d(metadata->display_primaries[1][0]), + av_q2d(metadata->display_primaries[1][1]), + 
av_q2d(metadata->display_primaries[2][0]), + av_q2d(metadata->display_primaries[2][1]), + av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); +} +if (metadata->has_luminance) { +av_log(avctx, AV_LOG_DEBUG, + "min_luminance=%f, max_luminance=%f\n", + av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); +} } if (sei->content_light.present) { -- 2.42.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org
Re: [FFmpeg-devel] [PATCH] Add float user_rdiv[4] to allow user options to apply correctly
On Sat, 24 Feb 2024, Stone Chen wrote: On Sat, Feb 24, 2024 at 6:34 PM Marton Balint wrote: On Sat, 24 Feb 2024, Stone Chen wrote: On Sat, Feb 24, 2024 at 3:56 PM Marton Balint wrote: On Sat, 24 Feb 2024, Stone Chen wrote: Previously to support dynamic reconfigurations of the matrix string (e.g. 0m), the rdiv values would always be cleared to 0.f, causing the rdiv to be recalculated based on the new filter. This however had the side effect of always ignoring user specified rdiv values. Instead float user_rdiv[0] is added to ConvolutionContext which will store the user specified rdiv values. Then the original rdiv array will store either the user_rdiv or the automatically calculated 1/sum. This fixes trac #10294, #10867 Have you tested? Thanks, Marton Hi Marton, Yes I've tested that - the original behavior works (automatically calculate rdiv if user_rdiv = 0) - setting the rdiv value works It does not work for me even after applying your patch. Regards, Marton Hi Marton, Sorry about that, was it with the attached patch in the second email? I had botched my initial patch, I've re-attached the second to this email. Oh, sorry, I missed that the second patch is different. Use versions in email subjects, such as [PATCH v2] for new versions of the same patch. Looking good now, will apply. Thanks, Marton ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH] mov: parse track-based udta name tags
On 2/24/2024 10:59 PM, Водянников А.В. via ffmpeg-devel wrote: Signed-off-by: Aleksoid --- libavformat/mov.c | 15 +-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/libavformat/mov.c b/libavformat/mov.c index 1a1b104615..c7b6919433 100644 --- a/libavformat/mov.c +++ b/libavformat/mov.c @@ -301,6 +301,16 @@ static int mov_metadata_hmmt(MOVContext *c, AVIOContext *pb, unsigned len) return 0; } + Your mail client seems to have mangled the patch. +static void mov_set_metadata(MOVContext *c, const char *key, const char *str) +{ + if (c->trak_index >= 0) { + AVStream *st = c->fc->streams[c->fc->nb_streams-1]; + av_dict_set(&st->metadata, key, str, 0); + } else + av_dict_set(&c->fc->metadata, key, str, 0); +} + static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom) { char tmp_key[AV_FOURCC_MAX_STRING_SIZE] = {0}; @@ -403,6 +413,7 @@ static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom) case MKTAG(0xa9,'w','r','n'): key = "warning"; break; case MKTAG(0xa9,'w','r','t'): key = "composer"; break; case MKTAG(0xa9,'x','y','z'): key = "location"; break; + case MKTAG( 'n','a','m','e'): key = "title"; break; } retry: if (c->itunes_metadata && atom.size > 8) { @@ -530,10 +541,10 @@ retry: str[str_size] = 0; } c->fc->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED; - av_dict_set(&c->fc->metadata, key, str, 0); + mov_set_metadata(c, key, str); if (*language && strcmp(language, "und")) { snprintf(key2, sizeof(key2), "%s-%s", key, language); - av_dict_set(&c->fc->metadata, key2, str, 0); + mov_set_metadata(c, key2, str); } if (!strcmp(key, "encoder")) { int major, minor, micro; ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH v2] avcodec/h2645_sei: validate Mastering Display Colour Volume SEI values
As we can read in ST 2086: Values outside the specified ranges of luminance and chromaticity values are not reserved by SMPTE, and can be used for purposes outside the scope of this standard. This is further acknowledged by ITU-T H.264 and ITU-T H.265. Which says that values out of range are unknown or unspecified or specified by other means not specified in this Specification. Signed-off-by: Kacper Michajłow --- libavcodec/h2645_sei.c | 53 +- 1 file changed, 37 insertions(+), 16 deletions(-) diff --git a/libavcodec/h2645_sei.c b/libavcodec/h2645_sei.c index cb6be0594b..e3581e8136 100644 --- a/libavcodec/h2645_sei.c +++ b/libavcodec/h2645_sei.c @@ -715,38 +715,59 @@ int ff_h2645_sei_to_frame(AVFrame *frame, H2645SEI *sei, if (!metadata) return AVERROR(ENOMEM); +metadata->has_luminance = 1; +metadata->has_primaries = 1; + for (i = 0; i < 3; i++) { const int j = mapping[i]; metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.display_primaries[j][0] >= 5 && + sei->mastering_display.display_primaries[j][0] <= 37000; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.display_primaries[j][1] >= 5 && + sei->mastering_display.display_primaries[j][1] <= 42000; } metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.white_point[0] >= 5 && + sei->mastering_display.white_point[0] <= 37000; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; +metadata->has_primaries &= sei->mastering_display.white_point[1] >= 5 && + sei->mastering_display.white_point[1] <= 42000; metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; +metadata->has_luminance &= sei->mastering_display.max_luminance >= 5 && + sei->mastering_display.max_luminance <= 1; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; -metadata->has_luminance = 1; -metadata->has_primaries = 1; - -av_log(avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); -av_log(avctx, AV_LOG_DEBUG, - "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", - av_q2d(metadata->display_primaries[0][0]), - av_q2d(metadata->display_primaries[0][1]), - av_q2d(metadata->display_primaries[1][0]), - av_q2d(metadata->display_primaries[1][1]), - av_q2d(metadata->display_primaries[2][0]), - av_q2d(metadata->display_primaries[2][1]), - av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); -av_log(avctx, AV_LOG_DEBUG, - "min_luminance=%f, max_luminance=%f\n", - av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); +metadata->has_luminance &= sei->mastering_display.min_luminance >= 1 && + sei->mastering_display.min_luminance <= 5; + +if (metadata->has_luminance || metadata->has_primaries) +av_log(avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); +if (metadata->has_primaries) { +av_log(avctx, AV_LOG_DEBUG, + "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", + av_q2d(metadata->display_primaries[0][0]), + av_q2d(metadata->display_primaries[0][1]), + av_q2d(metadata->display_primaries[1][0]), + av_q2d(metadata->display_primaries[1][1]), + 
av_q2d(metadata->display_primaries[2][0]), + av_q2d(metadata->display_primaries[2][1]), + av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); +} +if (metadata->has_luminance) { +av_log(avctx, AV_LOG_DEBUG, + "min_luminance=%f, max_luminance=%f\n", + av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); +} } if (sei->content_light.present) { -- 2.42.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org
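To relate the chromaticity bounds used above to the values ST 2086 talks about: the SEI stores coordinates in steps of 0.00002 (hence chroma_den = 50000 in this file), so the accepted ranges work out to

    x: 5/50000 = 0.0001 ... 37000/50000 = 0.7400
    y: 5/50000 = 0.0001 ... 42000/50000 = 0.8400

Values outside those ranges, or outside the analogous luminance bounds checked above (which are in units of 1/luma_den cd/m2), simply leave has_primaries / has_luminance unset rather than being clamped or passed through.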
[FFmpeg-devel] [PATCH 1/2] Add support d3d11va Intel Hevc Rext decoder.
>From ed8fda62bbdbc62f7565891c935966c931d001ca Mon Sep 17 00:00:00 2001 From: Aleksoid Date: Thu, 22 Feb 2024 19:15:48 +1000 Subject: [PATCH 1/2] Add support d3d11va Intel Hevc Rext decoder. Signed-off-by: Aleksoid --- libavcodec/d3d12va_hevc.c | 2 +- libavcodec/dxva2.c| 68 +-- libavcodec/dxva2_hevc.c | 41 ++--- libavcodec/dxva2_internal.h | 38 +++- libavcodec/hevcdec.c | 16 + libavutil/hwcontext_d3d11va.c | 26 +++--- 6 files changed, 178 insertions(+), 13 deletions(-) diff --git a/libavcodec/d3d12va_hevc.c b/libavcodec/d3d12va_hevc.c index a4964a05c6..0912e01b7d 100644 --- a/libavcodec/d3d12va_hevc.c +++ b/libavcodec/d3d12va_hevc.c @@ -62,7 +62,7 @@ static int d3d12va_hevc_start_frame(AVCodecContext *avctx, av_unused const uint8 ctx->used_mask = 0; -ff_dxva2_hevc_fill_picture_parameters(avctx, (AVDXVAContext *)ctx, &ctx_pic->pp); +ff_dxva2_hevc_fill_picture_parameters(avctx, (AVDXVAContext *)ctx, (DXVA_PicParams_HEVC_Rext*)&ctx_pic->pp); ff_dxva2_hevc_fill_scaling_lists(avctx, (AVDXVAContext *)ctx, &ctx_pic->qm); diff --git a/libavcodec/dxva2.c b/libavcodec/dxva2.c index 59025633f7..a611989911 100644 --- a/libavcodec/dxva2.c +++ b/libavcodec/dxva2.c @@ -50,6 +50,13 @@ DEFINE_GUID(ff_DXVA2_NoEncrypt, 0x1b81beD0, 0xa0c7,0x11d3,0xb9,0x84,0x0 DEFINE_GUID(ff_GUID_NULL,0x, 0x,0x,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00); DEFINE_GUID(ff_IID_IDirectXVideoDecoderService, 0xfc51a551,0xd5e7,0x11d9,0xaf,0x55,0x00,0x05,0x4e,0x43,0xff,0x02); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main12_Intel, 0x8FF8A3AA, 0xC456, 0x4132, 0xB6, 0xEF, 0x69, 0xD9, 0xDD, 0x72, 0x57, 0x1D); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main422_10_Intel, 0xE484DCB8, 0xCAC9, 0x4859, 0x99, 0xF5, 0x5C, 0x0D, 0x45, 0x06, 0x90, 0x89); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main422_12_Intel, 0xC23DD857, 0x874B, 0x423C, 0xB6, 0xE0, 0x82, 0xCE, 0xAA, 0x9B, 0x11, 0x8A); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main444_Intel,0x41A5AF96, 0xE415, 0x4B0C, 0x9D, 0x03, 0x90, 0x78, 0x58, 0xE2, 0x3E, 0x78); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main444_10_Intel, 0x6A6A81BA, 0x912A, 0x485D, 0xB5, 0x7F, 0xCC, 0xD2, 0xD3, 0x7B, 0x8D, 0x94); +DEFINE_GUID(ff_DXVA2_HEVC_VLD_Main444_12_Intel, 0x5B08E35D, 0x0C66, 0x4C51, 0xA6, 0xF1, 0x89, 0xD0, 0x0C, 0xB2, 0xC1, 0x97); + typedef struct dxva_mode { const GUID *guid; enum AVCodecID codec; @@ -75,6 +82,8 @@ static const int prof_vp9_profile2[] = {AV_PROFILE_VP9_2, AV_PROFILE_UNKNOWN}; static const int prof_av1_profile0[] = {AV_PROFILE_AV1_MAIN, AV_PROFILE_UNKNOWN}; +static const int prof_hevc_rext[]= {AV_PROFILE_HEVC_REXT, +AV_PROFILE_UNKNOWN}; static const dxva_mode dxva_modes[] = { /* MPEG-2 */ @@ -104,6 +113,14 @@ static const dxva_mode dxva_modes[] = { /* AV1 */ { &ff_DXVA2_ModeAV1_VLD_Profile0, AV_CODEC_ID_AV1, prof_av1_profile0 }, +/* HEVC/H.265 Rext */ +{ &ff_DXVA2_HEVC_VLD_Main12_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main422_10_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main422_12_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main444_Intel,AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main444_10_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, +{ &ff_DXVA2_HEVC_VLD_Main444_12_Intel, AV_CODEC_ID_HEVC, prof_hevc_rext }, + { NULL, 0 }, }; @@ -301,6 +318,14 @@ static int dxva_get_decoder_guid(AVCodecContext *avctx, void *service, void *sur if (IsEqualGUID(decoder_guid, &ff_DXVADDI_Intel_ModeH264_E)) sctx->workaround |= FF_DXVA2_WORKAROUND_INTEL_CLEARVIDEO; +av_log(avctx, AV_LOG_VERBOSE, + "Used guid : {%8.8x-%4.4x-%4.4x-%2.2x%2.2x-%2.2x%2.2x%2.2x%2.2x%2.2x%2.2x}\n", + 
(unsigned)decoder_guid->Data1, decoder_guid->Data2, decoder_guid->Data3, + decoder_guid->Data4[0], decoder_guid->Data4[1], + decoder_guid->Data4[2], decoder_guid->Data4[3], + decoder_guid->Data4[4], decoder_guid->Data4[5], + decoder_guid->Data4[6], decoder_guid->Data4[7]); + return 0; } @@ -458,6 +483,13 @@ static DXGI_FORMAT d3d11va_map_sw_to_hw_format(enum AVPixelFormat pix_fmt) case AV_PIX_FMT_NV12: return DXGI_FORMAT_NV12; case AV_PIX_FMT_P010: return DXGI_FORMAT_P010; case AV_PIX_FMT_YUV420P:return DXGI_FORMAT_420_OPAQUE; +case AV_PIX_FMT_P016: return DXGI_FORMAT_P016; +case AV_PIX_FMT_YUYV422:return DXGI_FORMAT_YUY2; +case AV_PIX_FMT_Y210: return DXGI_FORMAT_Y210; +case AV_PIX_FMT_Y212: return DXGI_FORMAT_Y216; +case AV_PIX_FMT_VUYX: return DXGI_FORMAT_AYUV; +case AV_PIX_FMT_XV30: return DXGI_FORM
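For context on how the new Intel Rext GUID/profile entries get exercised: an application opts in to d3d11va through the usual hwaccel plumbing, and the decoder then matches one of these GUIDs for AV_PROFILE_HEVC_REXT streams. Below is a rough, hedged sketch of that application side only; it is not part of the patch, and open_hevc_d3d11() plus the minimal error handling are illustrative.

#include <libavcodec/avcodec.h>
#include <libavutil/buffer.h>
#include <libavutil/hwcontext.h>

static enum AVPixelFormat get_d3d11_format(AVCodecContext *avctx,
                                           const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++)
        if (*p == AV_PIX_FMT_D3D11)
            return *p;              /* hwaccel offered, take it */
    return fmts[0];                 /* otherwise fall back to software */
}

AVCodecContext *open_hevc_d3d11(const AVCodecParameters *par)
{
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_HEVC);
    AVCodecContext *avctx = avcodec_alloc_context3(dec);
    AVBufferRef *hwdev = NULL;

    if (!avctx || avcodec_parameters_to_context(avctx, par) < 0)
        goto fail;
    /* create a D3D11VA device and hand it to the decoder */
    if (av_hwdevice_ctx_create(&hwdev, AV_HWDEVICE_TYPE_D3D11VA,
                               NULL, NULL, 0) < 0)
        goto fail;
    avctx->hw_device_ctx = av_buffer_ref(hwdev);
    avctx->get_format    = get_d3d11_format;
    if (avcodec_open2(avctx, dec, NULL) < 0)
        goto fail;
    av_buffer_unref(&hwdev);
    return avctx;

fail:
    av_buffer_unref(&hwdev);
    avcodec_free_context(&avctx);
    return NULL;
}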
[FFmpeg-devel] [PATCH 2/2] mov: parse track-based udta name tags
>From 1833111ec9fe0350e9cf206bb33ca573b6b8c4b5 Mon Sep 17 00:00:00 2001
From: Aleksoid
Date: Sun, 25 Feb 2024 11:59:03 +1000
Subject: [PATCH 2/2] mov: parse track-based udta name tags

Signed-off-by: Aleksoid
---
 libavformat/mov.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/libavformat/mov.c b/libavformat/mov.c
index 1a1b104615..c7b6919433 100644
--- a/libavformat/mov.c
+++ b/libavformat/mov.c
@@ -301,6 +301,16 @@ static int mov_metadata_hmmt(MOVContext *c, AVIOContext *pb, unsigned len)
     return 0;
 }
 
+
+static void mov_set_metadata(MOVContext *c, const char *key, const char *str)
+{
+    if (c->trak_index >= 0) {
+        AVStream *st = c->fc->streams[c->fc->nb_streams-1];
+        av_dict_set(&st->metadata, key, str, 0);
+    } else
+        av_dict_set(&c->fc->metadata, key, str, 0);
+}
+
 static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom)
 {
     char tmp_key[AV_FOURCC_MAX_STRING_SIZE] = {0};
@@ -403,6 +413,7 @@ static int mov_read_udta_string(MOVContext *c, AVIOContext *pb, MOVAtom atom)
     case MKTAG(0xa9,'w','r','n'): key = "warning";   break;
     case MKTAG(0xa9,'w','r','t'): key = "composer";  break;
     case MKTAG(0xa9,'x','y','z'): key = "location";  break;
+    case MKTAG( 'n','a','m','e'): key = "title";     break;
     }
 retry:
     if (c->itunes_metadata && atom.size > 8) {
@@ -530,10 +541,10 @@ retry:
         str[str_size] = 0;
     }
     c->fc->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED;
-    av_dict_set(&c->fc->metadata, key, str, 0);
+    mov_set_metadata(c, key, str);
     if (*language && strcmp(language, "und")) {
         snprintf(key2, sizeof(key2), "%s-%s", key, language);
-        av_dict_set(&c->fc->metadata, key2, str, 0);
+        mov_set_metadata(c, key2, str);
     }
     if (!strcmp(key, "encoder")) {
         int major, minor, micro;
-- 
2.43.0.windows.1
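With the track-level 'name' tag stored as per-stream metadata, applications can pick it up through the usual avformat dictionary calls. A minimal, hedged usage sketch follows; it is not part of the patch, and the throwaway program below is only illustrative.

#include <stdio.h>
#include <libavformat/avformat.h>

int main(int argc, char **argv)
{
    AVFormatContext *fmt = NULL;

    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return 1;
    }
    for (unsigned i = 0; i < fmt->nb_streams; i++) {
        /* "title" is the key the patch maps the udta 'name' tag to */
        const AVDictionaryEntry *e =
            av_dict_get(fmt->streams[i]->metadata, "title", NULL, 0);
        if (e)
            printf("stream %u: title=%s\n", i, e->value);
    }
    avformat_close_input(&fmt);
    return 0;
}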
Re: [FFmpeg-devel] [PATCH 11/11] avcodec/vvcdec: add Intra Block Copy decoder
On Sat, Feb 24, 2024 at 9:20 PM Ronald S. Bultje wrote:
> Hi,
>
> On Thu, Feb 22, 2024 at 2:15 AM Nuo Mi wrote:
>> +static void ibc_fill_vir_buf(const VVCLocalContext *lc, const CodingUnit *cu)
>> [..]
>> +    av_image_copy_plane(ibc_buf, ibc_stride, src, src_stride,
>>                          cu->cb_width >> hs << ps, cu->cb_height >> vs);
>
> I'm admittedly not super-familiar with VVC, but I wonder why we need the
> double buffering here (from ref_pos in pic to ibc_buf, and then from
> ibc_buf back to the cur block in pic)? In AV1, this is done with just a
> single copy. Why is this done this way?

Hi Ronald,

There are two major differences between AV1 and VVC here:
1. AV1 disables all in-loop filters for IBC, while VVC does not.
2. AV1 can refer to any reconstructed superblock except the delayed one,
   whereas VVC can only refer to the left and the current CTU.

Therefore, in VVC, we need to allocate memory for each line to save the
pixels before the in-loop filters are applied. VVC refers to this memory as
IbcVirBuf; it is a 2D cyclic buffer. Every newly reconstructed Coding Block
is copied into this buffer, and IBC prediction may only copy pixels from it.

Best regards.

> Ronald
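To make the IbcVirBuf idea a bit more concrete, here is a minimal sketch of such a 2D cyclic buffer of pre-loop-filter samples. It is not the vvcdec code: the single-plane layout, the struct and the function names are assumptions made purely for illustration.

#include <stddef.h>
#include <stdint.h>

typedef struct IbcVirBufSketch {
    uint8_t *data;   /* vir_w * vir_h samples of one plane */
    int      vir_w;  /* horizontal extent that stays addressable (wraps) */
    int      vir_h;  /* e.g. the CTU size */
} IbcVirBufSketch;

/* Copy a reconstructed block (before in-loop filters) into the cyclic buffer.
 * x/y are absolute picture positions, assumed non-negative, so '%' wraps. */
static void ibc_fill(IbcVirBufSketch *b, const uint8_t *src,
                     ptrdiff_t src_stride, int x, int y, int w, int h)
{
    for (int i = 0; i < h; i++) {
        uint8_t *dst = b->data + (size_t)((y + i) % b->vir_h) * b->vir_w;
        for (int j = 0; j < w; j++)
            dst[(x + j) % b->vir_w] = src[i * src_stride + j];
    }
}

/* IBC prediction: copy the block addressed by the block vector (bv_x, bv_y)
 * out of the cyclic buffer; it never reads the (possibly filtered) picture. */
static void ibc_pred(const IbcVirBufSketch *b, uint8_t *dst,
                     ptrdiff_t dst_stride, int x, int y,
                     int bv_x, int bv_y, int w, int h)
{
    for (int i = 0; i < h; i++) {
        const uint8_t *row =
            b->data + (size_t)((y + bv_y + i) % b->vir_h) * b->vir_w;
        for (int j = 0; j < w; j++)
            dst[i * dst_stride + j] = row[(x + bv_x + j) % b->vir_w];
    }
}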
Re: [FFmpeg-devel] [PATCH v3] avcodec/jpeg2000dec: support of 2 fields in 1 AVPacket
On 24/02/2024 13:26, Tomas Härdin wrote:
[...]

It should be possible to have ffmpeg set up the necessary plumbing for this.

But is that how it works elsewhere in FFmpeg? Would such complex and deep
modifications be accepted by others?

Good question. I would propose something like the following:

1) detect the use of SEPARATE_FIELDS and set a flag in AVStream

As in practice, and in that case (2 jp2k codestreams per AVPacket), it is
only a hint for the jpeg2000 decoder (because we can autodetect it, and
there are many buggy files in the wild), I was planning to add that later in
a separate patch, but attached is a version with the flag.

2) allocate the AVFrame for the size of the resulting *frame*

So keeping what is already there.

3a) if the codec is inherently interlaced, call the decoder once

3b) if the codec is not inherently interlaced, call the decoder twice, with
appropriate stride, and keep track of the number of bytes decoded so far so
we know what offset to start the second decode from

The place I see for that is in decode_simple_internal(). But it is a very
hot path that I would rather not modify, and it seems to me it adds extra
code for the 99.9% (or even more 9s) of files which do not need such a
feature, with more risk of forgetting this specific feature during future
development (e.g. it is not obvious that ff_thread_decode_frame would also
need changing when touching this part). I also needed to add a dedicated
AVStream field to say that the decoder is able to manage this functionality
(and it is needed there).

What is the added value of calling the decoder twice from decode.c rather
than a recursive call (or, if preferred, a master function in the decoder
calling the current function twice) inside the decoder only? As far as I
understand, it would not help for other formats (only the signaling
propagation in AVStream helps, and that is done by another AVStream field),
and I personally much prefer that such a feature live as much as possible in
a single place in each decoder rather than in pieces scattered everywhere;
each decoder needs to be upgraded anyway. (A rough sketch of the twice-decode
idea follows the attached patch.)

The codecs for which 3b) applies include at least:
* jpeg2000

Our use case.

* ffv1

FFV1 has its own flags internally for interlaced content (interleaved method
only) and I expect no work for separated fields: the MXF/FFV1 spec does not
plan separated fields for FFV1, and there is no byte in the essence label
for that.

* rawvideo
* tiff

I did not find the corresponding essence label UL specifications and I have
no such files; as far as I understand this is highly theoretical, but if it
appears it would only be a matter of mapping the MXF signaling to the new
AVStream field and supporting the feature in the decoders (even if we
implement the idea of calling the decoder twice, the decoder still needs to
be extended for this feature). So IMO there is no development to do there
for the moment.
Jérôme From f4311b718012a92590ce6168355ec118e02052a8 Mon Sep 17 00:00:00 2001 From: Jerome Martinez Date: Tue, 20 Feb 2024 16:04:11 +0100 Subject: [PATCH] avcodec/jpeg2000dec: support of 2 fields in 1 AVPacket --- libavcodec/avcodec.h | 14 + libavcodec/codec_par.c | 3 ++ libavcodec/codec_par.h | 5 libavcodec/decode.c| 3 ++ libavcodec/defs.h | 7 + libavcodec/jpeg2000dec.c | 73 +++--- libavcodec/jpeg2000dec.h | 6 libavcodec/pthread_frame.c | 3 ++ libavformat/mxfdec.c | 14 + 9 files changed, 118 insertions(+), 10 deletions(-) diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h index 7fb44e28f4..38d63adc0f 100644 --- a/libavcodec/avcodec.h +++ b/libavcodec/avcodec.h @@ -2116,6 +2116,20 @@ typedef struct AVCodecContext { * an error. */ int64_t frame_num; + +/** + * Video only. The way separate fields are wrapped in the container + * - decoding: tip from the demuxer + * - encoding: not (yet) used + */ +enum AVFrameWrapping frame_wrapping; + +/** + * Video only. Indicate if running the decoder twice for a single AVFrame is supported + * - decoding: set by the decoder + * - encoding: not used + */ +intframe_wrapping_field_2_supported; } AVCodecContext; /** diff --git a/libavcodec/codec_par.c b/libavcodec/codec_par.c index abaac63841..3f26f9d4d6 100644 --- a/libavcodec/codec_par.c +++ b/libavcodec/codec_par.c @@ -51,6 +51,7 @@ static void codec_parameters_reset(AVCodecParameters *par) par->framerate = (AVRational){ 0, 1 }; par->profile = AV_PROFILE_UNKNOWN; par->level = AV_LEVEL_UNKNOWN; +par->frame_wrapping = AV_WRAPPING_UNKNOWN; } AVCodecParameters *avcodec_parameters_alloc(void) @@ -165,6 +166,7 @@ int avcodec_parameters_from_context(AVCodecParameters *par, par->sample_aspect_ratio = codec->sample_aspect_ratio; par->video_delay = codec->has_b_frames;
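For the 3b) case above (two codestreams in one AVPacket, codec not inherently interlaced), the twice-decode idea boils down to something like the sketch below. This is not FFmpeg API code: FieldDecodeCtx, decode_field() and their parameters are hypothetical stand-ins for whatever hook the decoder (or decode.c) would expose.

#include <stddef.h>
#include <stdint.h>

typedef struct FieldDecodeCtx {
    /* Decode one codestream from buf/size into dst with the given stride;
     * return the number of bytes consumed, or a negative error code. */
    int (*decode_field)(void *priv, uint8_t *dst, ptrdiff_t stride,
                        const uint8_t *buf, size_t size);
    void *priv;
} FieldDecodeCtx;

/* Decode a packet holding two separate fields into one interleaved frame:
 * field 0 fills the even lines, field 1 the odd lines, both with 2*stride. */
static int decode_separate_fields(FieldDecodeCtx *c, uint8_t *frame,
                                  ptrdiff_t stride, const uint8_t *pkt,
                                  size_t pkt_size)
{
    size_t off = 0;

    for (int field = 0; field < 2; field++) {
        int used = c->decode_field(c->priv, frame + field * stride,
                                   2 * stride, pkt + off, pkt_size - off);
        if (used < 0)
            return used;      /* propagate the decoder error */
        off += used;          /* the second codestream starts right here */
    }
    return (int)off;
}

The bookkeeping of 'off' is exactly the "number of bytes decoded so far" mentioned in 3b); the only open question in the thread is whether that loop lives in decode.c or inside each decoder.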
Re: [FFmpeg-devel] [RFC] fateserver
I'm willing to help. I have experience with Node.js and Express (i.e., the
rewrite).

Thank you,

On Sat, Feb 24, 2024 at 11:49 AM Michael Niedermayer wrote:
> Hi all
>
> Both fateserver and the fateserver rewrite lack a maintainer.
>
> The original:
> https://fate.ffmpeg.org/
> perl code here:
> https://git.ffmpeg.org/fateserver
>
> The rewrite (from Timothy Gu, unmaintained since 2016):
> https://fatebeta.ffmpeg.org/
> https://github.com/TimothyGu/fateserver-node
>
> Really only one of these needs to be maintained, but one needs to be.
>
> If you know someone who may be interested in this, please tell me.
>
> The situation at the moment, where when someone reports an issue there is
> no one responsible and no one taking care of it, even for simple issues,
> is really not good.
>
> thx
>
> --
> Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
>
> Complexity theory is the science of finding the exact solution to an
> approximation. Benchmarking OTOH is finding an approximation of the exact