[FFmpeg-devel] [PATCH] avformat/wavenc: allow WAVEFORMATEXTENSIBLE to be suppressed
While WAVEFORMATEX sort of mandates WAVEFORMATEXTENSIBLE for audio with a sample size other than 8 or 16, PCMWAVEFORMAT has no practical limitation in supporting a higher sample size that is a multiple of 8 (or a sample rate higher than 48000). In terms of sample size, the only real reason the extension exists is that it allows both an actual sample size and a multiple-of-8 container size to be stored. However, such a facility is of no use when the actual sample size is a multiple of 8, and the muxer never made use of it anyway (it never actually supported a sample size like 20; at least not in the way it is supposed to).

Although WAVEFORMATEXTENSIBLE has been around for more than two decades, programs and libraries that do not support the format apparently still exist. Therefore, add an option that allows the WAVEFORMATEXTENSIBLE "extensions" to be suppressed and the format tag field to be set to WAVE_FORMAT_PCM (0x0001) for audio with a sample size higher than 16 (or a sample rate higher than 48000).

Signed-off-by: Tom Yan
---
 libavformat/riff.h    | 5 +++++
 libavformat/riffenc.c | 4 ++--
 libavformat/wavenc.c  | 5 ++++-
 3 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/libavformat/riff.h b/libavformat/riff.h
index 85d6786663..5794857f53 100644
--- a/libavformat/riff.h
+++ b/libavformat/riff.h
@@ -57,6 +57,11 @@ void ff_put_bmp_header(AVIOContext *pb, AVCodecParameters *par, int for_asf, int
  */
 #define FF_PUT_WAV_HEADER_SKIP_CHANNELMASK 0x0002
 
+/**
+ * Tell ff_put_wav_header() not to write WAVEFORMATEXTENSIBLE extensions if possible.
+ */
+#define FF_PUT_WAV_HEADER_FORCE_PCMWAVEFORMAT 0x0004
+
 /**
  * Write WAVEFORMAT header structure.
  *
diff --git a/libavformat/riffenc.c b/libavformat/riffenc.c
index ffccfa3d48..4dc8ca6e0f 100644
--- a/libavformat/riffenc.c
+++ b/libavformat/riffenc.c
@@ -80,9 +80,9 @@ int ff_put_wav_header(AVFormatContext *s, AVIOContext *pb,
     waveformatextensible = (par->channels > 2 && par->channel_layout) ||
                            par->channels == 1 && par->channel_layout && par->channel_layout != AV_CH_LAYOUT_MONO ||
                            par->channels == 2 && par->channel_layout && par->channel_layout != AV_CH_LAYOUT_STEREO ||
-                           par->sample_rate > 48000 ||
                            par->codec_id == AV_CODEC_ID_EAC3 ||
-                           av_get_bits_per_sample(par->codec_id) > 16;
+                           ((par->sample_rate > 48000 || av_get_bits_per_sample(par->codec_id) > 16) &&
+                            !(flags & FF_PUT_WAV_HEADER_FORCE_PCMWAVEFORMAT));
 
     if (waveformatextensible)
         avio_wl16(pb, 0xfffe);
diff --git a/libavformat/wavenc.c b/libavformat/wavenc.c
index 2317700be1..bd41d6eeb3 100644
--- a/libavformat/wavenc.c
+++ b/libavformat/wavenc.c
@@ -83,6 +83,7 @@ typedef struct WAVMuxContext {
     int peak_block_pos;
     int peak_ppv;
     int peak_bps;
+    int extensible;
 } WAVMuxContext;
 
 #if CONFIG_WAV_MUXER
@@ -324,9 +325,10 @@ static int wav_write_header(AVFormatContext *s)
     }
 
     if (wav->write_peak != PEAK_ONLY) {
+        int flags = !wav->extensible ? FF_PUT_WAV_HEADER_FORCE_PCMWAVEFORMAT : 0;
         /* format header */
         fmt = ff_start_tag(pb, "fmt ");
-        if (ff_put_wav_header(s, pb, s->streams[0]->codecpar, 0) < 0) {
+        if (ff_put_wav_header(s, pb, s->streams[0]->codecpar, flags) < 0) {
             av_log(s, AV_LOG_ERROR, "Codec %s not supported in WAVE format\n",
                    avcodec_get_name(s->streams[0]->codecpar->codec_id));
             return AVERROR(ENOSYS);
@@ -494,6 +496,7 @@ static const AVOption options[] = {
     { "peak_block_size", "Number of audio samples used to generate each peak frame.", OFFSET(peak_block_size), AV_OPT_TYPE_INT, { .i64 = 256 }, 0, 65536, ENC },
     { "peak_format",     "The format of the peak envelope data (1: uint8, 2: uint16).", OFFSET(peak_format), AV_OPT_TYPE_INT, { .i64 = PEAK_FORMAT_UINT16 }, PEAK_FORMAT_UINT8, PEAK_FORMAT_UINT16, ENC },
     { "peak_ppv",        "Number of peak points per peak value (1 or 2).", OFFSET(peak_ppv), AV_OPT_TYPE_INT, { .i64 = 2 }, 1, 2, ENC },
+    { "extensible",      "Write WAVEFORMATEXTENSIBLE extensions.", OFFSET(extensible), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, ENC },
     { NULL },
 };
--
2.34.1
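A usage sketch (not part of the original patch; it assumes the option keeps the name "extensible", default 1, as defined in the diff above): a 24-bit PCM file that would normally be written with a WAVEFORMATEXTENSIBLE header can be forced to a plain WAVE_FORMAT_PCM fmt chunk with

    ffmpeg -i input.flac -c:a pcm_s24le -extensible 0 output.wav

Leaving the option at its default keeps the current WAVEFORMATEXTENSIBLE behaviour.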
Re: [FFmpeg-devel] [PATCH] avcodec/v4l2_m2m_dec: dequeue frame if input isn't ready
On 12/27/21 00:17, Andriy Gelman wrote:
> On Tue, 14. Dec 02:12, Cameron Gutman wrote:
>> The V4L2M2M API operates asynchronously, so multiple packets can
>> be enqueued before getting a batch of frames back. Since it was
>> only possible to receive a frame by submitting another packet,
>> there wasn't a way to drain those excess output frames from when
>> avcodec_receive_frame() returned AVERROR(EAGAIN).
>>
>> In my testing, this change reduced decode latency of a real-time
>> 60 FPS H.264 stream by approximately 10x (200ms -> 20ms) on a
>> Raspberry Pi 4.
>
> I was doing some more tests today, but didn't have any luck dequeuing a frame
> if ff_decode_get_packet() returned EAGAIN.

Hmm, maybe there is something different about your test harness? I'm receiving
720p 60 FPS H.264 ES in real-time from the network.

For each H.264 encoded frame I receive off the network, my basic approach is
like this (simplified for brevity):

    avcodec_send_packet(&pkt);
    do {
        frame = av_frame_alloc();
        if ((err = avcodec_receive_frame(frame)) == 0) {
            render_frame(frame);
        }
    } while (err == 0);

I'll usually get EAGAIN immediately for the first few frames I submit (so no
output frame yet), but then I'll get a batch of output frames back after the
first completed decode. That drains the excess latency from the pipeline to
avoid always being behind.

For cases where we want to prioritize latency over throughput, I've had success
with this approach too:

    avcodec_send_packet(&pkt);
    while (avcodec_receive_frame(frame) == AVERROR(EAGAIN)) {
        msleep(1);
    }
    render_frame(frame);

In this case, we can retry avcodec_receive_frame() until we get the frame back
that we just submitted for decoding.

The patch here enables both of these use-cases by allowing V4L2M2M to retry
getting a decoded frame without new input data. Both of these also work with
MMAL after the recent decoupled dataflow patch.

> Could you share the dataset?

It is 720p60.h264 from here:
https://onedrive.live.com/?authkey=%21ALoKfcPfFeKyhzs&id=C15BF9770619F56%21165617&cid=0C15BF9770619F56

> Thanks,

Thank you
[FFmpeg-devel] [PATCH 2/2] avformat/movenc: add missing timestamp check when peek from interleave queues
---
 libavformat/movenc.c | 80 ++++++++++++++++++++++++++--------------------
 1 file changed, 45 insertions(+), 35 deletions(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index dd92c0f26d..3f35d2939f 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -5403,6 +5403,45 @@ static int mov_write_squashed_packets(AVFormatContext *s)
     return 0;
 }
 
+static int check_pkt_time(AVFormatContext *s, int stream_index,
+                          int64_t *pkt_pts, int64_t *pkt_dts,
+                          int64_t *pkt_duration) {
+    MOVMuxContext *mov = s->priv_data;
+    MOVTrack *trk = &mov->tracks[stream_index];
+    int64_t ref;
+    uint64_t duration;
+
+    if (trk->entry) {
+        ref = trk->cluster[trk->entry - 1].dts;
+    } else if (   trk->start_dts != AV_NOPTS_VALUE
+               && !trk->frag_discont) {
+        ref = trk->start_dts + trk->track_duration;
+    } else
+        ref = *pkt_dts; // Skip tests for the first packet
+
+    if (trk->dts_shift != AV_NOPTS_VALUE) {
+        /* With negative CTS offsets we have set an offset to the DTS,
+         * reverse this for the check. */
+        ref -= trk->dts_shift;
+    }
+
+    duration = *pkt_dts - ref;
+    if (*pkt_dts < ref || duration >= INT_MAX) {
+        av_log(s, AV_LOG_ERROR, "Application provided duration: %"PRId64" / timestamp: %"PRId64" is out of range for mov/mp4 format\n",
+               duration, *pkt_dts
+        );
+
+        *pkt_dts = ref + 1;
+        *pkt_pts = AV_NOPTS_VALUE;
+    }
+
+    if (*pkt_duration < 0 || *pkt_duration > INT_MAX) {
+        av_log(s, AV_LOG_ERROR, "Application provided duration: %"PRId64" is invalid\n", *pkt_duration);
+        return AVERROR(EINVAL);
+    }
+    return 0;
+}
+
 static int mov_flush_fragment(AVFormatContext *s, int force)
 {
     MOVMuxContext *mov = s->priv_data;
@@ -5429,12 +5468,15 @@ static int mov_flush_fragment(AVFormatContext *s, int force)
         if (!track->end_reliable) {
             const AVPacket *pkt = ff_interleaved_peek(s, i);
             if (pkt) {
-                int64_t offset, dts, pts;
+                int64_t offset, dts, pts, duration;
                 ff_get_muxer_ts_offset(s, i, &offset);
                 pts = pkt->pts + offset;
                 dts = pkt->dts + offset;
+                duration = pkt->duration;
                 if (track->dts_shift != AV_NOPTS_VALUE)
                     dts += track->dts_shift;
+                if (check_pkt_time(s, pkt->stream_index, &pts, &dts, &duration))
+                    continue;
                 track->track_duration = dts - track->start_dts;
                 if (pts != AV_NOPTS_VALUE)
                     track->end_pts = pts;
@@ -5627,40 +5669,8 @@ static int mov_auto_flush_fragment(AVFormatContext *s, int force)
 
 static int check_pkt(AVFormatContext *s, AVPacket *pkt)
 {
-    MOVMuxContext *mov = s->priv_data;
-    MOVTrack *trk = &mov->tracks[pkt->stream_index];
-    int64_t ref;
-    uint64_t duration;
-
-    if (trk->entry) {
-        ref = trk->cluster[trk->entry - 1].dts;
-    } else if (   trk->start_dts != AV_NOPTS_VALUE
-               && !trk->frag_discont) {
-        ref = trk->start_dts + trk->track_duration;
-    } else
-        ref = pkt->dts; // Skip tests for the first packet
-
-    if (trk->dts_shift != AV_NOPTS_VALUE) {
-        /* With negative CTS offsets we have set an offset to the DTS,
-         * reverse this for the check. */
-        ref -= trk->dts_shift;
-    }
-
-    duration = pkt->dts - ref;
-    if (pkt->dts < ref || duration >= INT_MAX) {
-        av_log(s, AV_LOG_ERROR, "Application provided duration: %"PRId64" / timestamp: %"PRId64" is out of range for mov/mp4 format\n",
-               duration, pkt->dts
-        );
-
-        pkt->dts = ref + 1;
-        pkt->pts = AV_NOPTS_VALUE;
-    }
-
-    if (pkt->duration < 0 || pkt->duration > INT_MAX) {
-        av_log(s, AV_LOG_ERROR, "Application provided duration: %"PRId64" is invalid\n", pkt->duration);
-        return AVERROR(EINVAL);
-    }
-    return 0;
+    return check_pkt_time(s, pkt->stream_index, &pkt->pts, &pkt->dts,
+                          &pkt->duration);
 }
 
 int ff_mov_write_packet(AVFormatContext *s, AVPacket *pkt)
--
2.31.1
Re: [FFmpeg-devel] [PATCH] lavf/tls_mbedtls: fix handling of tls_verify=0
ping. I'll look at getting this pushed in a few days if there are no objections.

If ca_file was set, setting tls_verify=0 would not actually disable verification.
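For context, a minimal sketch of the behaviour the description above implies; this is an illustration written against the public mbedTLS API, not the patch itself, and the context/field names (tls_ctx, ssl_config, shr->verify) are placeholders for whatever tls_mbedtls.c actually uses:

    /* Choose the mbedTLS verification mode from the tls_verify option,
     * regardless of whether a ca_file was configured. */
    mbedtls_ssl_conf_authmode(&tls_ctx->ssl_config,
                              shr->verify ? MBEDTLS_SSL_VERIFY_REQUIRED
                                          : MBEDTLS_SSL_VERIFY_NONE);

mbedtls_ssl_conf_authmode() and the MBEDTLS_SSL_VERIFY_* constants are standard mbedTLS; the bug being fixed is that the mode previously ended up as REQUIRED whenever a CA file was present.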
Re: [FFmpeg-devel] GitHub Integration
On 27/12/21 11:41, lance.lmw...@gmail.com wrote:
> On Sun, Dec 26, 2021 at 04:37:54PM -0500, Ronald S. Bultje wrote:
>> Hi,
>>
>> On Sun, Dec 26, 2021 at 3:21 PM Soft Works wrote:
>>
>>> I'm not sure. My interpretation of Lance' and Steven's comments would
>>> be that they'd prefer to stick to the ML.
>>
>> No, it's not strictly related to that - they want something that is CLI
>> accessible. Gitlab has this here: https://glab.readthedocs.io/en/latest/
>> and github has this here: https://github.com/cli/cli - the next question is
>> whether the gitlab/hub hosts are blocked by a firewall (no idea) and/or
>> whether the instances are self-hosted (github: no, gitlab-videolan: yes).
>
> Yes, self-hosted is more preferable, I recall github has blocked developers
> in some country by US trade controls. Who knows how the rules will be
> changed someday, as it's controlled by a company.

Something that doesn't require another account would be nice, which is why
I like mailing lists. Although in this case a move to GitHub wouldn't affect me
personally (I use it a lot) - if I didn't, I would seriously reconsider if the
mental overhead of yet another account (and all the baggage that comes with it -
passwords, MFA, data breaches, etc.) is worth contributing to the project.

Ideally, something self-hosted like Gitea or GitLab with a federated approach
(i.e. ForgeFed) would be fantastic for a case such as this. I'm not sure about
GitLab, but I know Gitea is working towards this.
Re: [FFmpeg-devel] GitHub Integration
Not advocating for github (per this thread it's out of scope), but I think using
a self-hosted gitlab with an option to auth using a github account would help in
reducing the "account overhead", at least for those who use github.

Mon, Dec 27, 2021, 17:59 Zane van Iperen:

> On 27/12/21 11:41, lance.lmw...@gmail.com wrote:
> > On Sun, Dec 26, 2021 at 04:37:54PM -0500, Ronald S. Bultje wrote:
> >> Hi,
> >>
> >> On Sun, Dec 26, 2021 at 3:21 PM Soft Works wrote:
> >>
> >>> I'm not sure. My interpretation of Lance' and Steven's comments would
> >>> be that they'd prefer to stick to the ML.
> >>>
> >>
> >> No, it's not strictly related to that - they want something that is CLI
> >> accessible. Gitlab has this here: https://glab.readthedocs.io/en/latest/
> >> and github has this here: https://github.com/cli/cli - the next question is
> >> whether the gitlab/hub hosts are blocked by a firewall (no idea) and/or
> >> whether the instances are self-hosted (github: no, gitlab-videolan: yes).
> >
> > Yes, self-hosted is more preferable, I recall github has blocked developers
> > in some country by US trade controls. Who knows how the rules will be
> > changed someday, as it's controlled by a company.
> >
>
> Something that doesn't require another account would be nice, which is why
> I like mailing lists. Although in this case a move to GitHub wouldn't affect me
> personally (I use it a lot) - if I didn't, I would seriously reconsider if the
> mental overhead of yet another account (and all the baggage that comes with it -
> passwords, MFA, data breaches, etc.) is worth contributing to the project.
>
> Ideally, something self-hosted like Gitea or GitLab with a federated approach
> (i.e. ForgeFed) would be fantastic for a case such as this. I'm not sure about
> GitLab, but I know Gitea is working towards this.
Re: [FFmpeg-devel] [PATCH 1/3] libavcodec/vaapi_encode: Change the way to call async to increase performance
On 27/10/2021 09:57, Wenbin Chen wrote:
> Fix: #7706. After commit 5fdcf85bbffe7451c2, the vaapi encoder's performance
> decreased. The reason is that vaRenderPicture() and vaSyncSurface() are
> called at the same time (vaRenderPicture() is always followed by a
> vaSyncSurface()). When we encode a stream with B frames, we need a buffer to
> reorder frames, so we can send several frames to HW at once to increase
> performance. Now I changed them to be called in an asynchronous way, which
> will make better use of hardware.
> 1080p transcoding increases about 17% fps on my environment.
>
> Signed-off-by: Wenbin Chen
> ---
>  libavcodec/vaapi_encode.c | 41 ++++++++++++++++++++++++++++++-----------
>  libavcodec/vaapi_encode.h |  3 +++
>  2 files changed, 33 insertions(+), 11 deletions(-)

The API does not allow this behaviour.

For some bizarre reason (I think a badly-written example combined with the
Intel driver being synchronous in vaEndPicture() for a long time), the sync to
a surface is to the /input/ surface of an encode rather than the output
surface.

That means you can't have multiple encodes outstanding on the same surface
and expect to sync usefully, because the only argument to vaSyncSurface() is
the surface to sync to, without anything about the associated context.

Therefore trying to make it asynchronous like this falls down when input
surfaces might appear multiple times, or might be used in the input of
multiple encoders, because you can't tell whether your sync means the thing
you actually wanted to finish has finished.

(The commit you point to above as having decreased performance fixed this
bug, since it became much more visible with decoupled send/receive.)

So: put this change after the switch to syncing on output buffers (since that
operation does make sense for this), and leave the existing behaviour for
cases where you have to sync on the input surface.

- Mark
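To illustrate the two sync models being contrasted here (an illustration, not code from the patch; vaSyncBuffer() requires a libva implementing VA-API >= 1.9):

    /* Existing model: sync on the *input* surface of the encode.  This is
     * ambiguous when the same surface is the input of several outstanding
     * encodes, since no context is passed. */
    vaSyncSurface(dpy, input_surface);

    /* Output-buffer model: sync on the coded buffer produced by one specific
     * vaEndPicture() call, so overlapping submissions stay distinguishable. */
    vaSyncBuffer(dpy, coded_buf, UINT64_MAX /* effectively block until done */);

Here dpy, input_surface and coded_buf stand for the VADisplay, the source VASurfaceID and the VABufferID of the coded output buffer.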
Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug for mapping qsv frame to opencl
On 16/11/2021 08:16, Wenbin Chen wrote:
> From: nyanmisaka
>
> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> Now the following commandline works:
>
> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> -c:v h264_qsv output.264
>
> Signed-off-by: nyanmisaka
> Signed-off-by: Wenbin Chen
> ---
>  libavutil/hwcontext_opencl.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> index 26a3a24593..4b6e74ff6f 100644
> --- a/libavutil/hwcontext_opencl.c
> +++ b/libavutil/hwcontext_opencl.c
> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
>  #if CONFIG_LIBMFX
>      if (src->format == AV_PIX_FMT_QSV) {
>          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> +        va_surface = *(VASurfaceID*)pair->first;
>      } else
>  #endif
>      if (src->format == AV_PIX_FMT_VAAPI) {

Since these frames can be user-supplied, this implies that the user-facing
API/ABI for AV_PIX_FMT_QSV has changed.

It looks like this was broken by using HDLPairs when D3D11 was introduced,
which silently changed the existing API for DXVA2 and VAAPI as well.

Could someone related to that please document it properly (clearly not all
possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
the API change happened?

- Mark
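For readers following the MemId discussion, a sketch of the layout the patch assumes; this is illustrative, based on the mfxHDLPair definition in mfxdefs.h, and is not code from the patch:

    /* mfxHDLPair is just two opaque handles; with the FFmpeg QSV hwcontext,
     * Data.MemId now points at such a pair instead of directly at a handle. */
    mfxHDLPair  *pair       = (mfxHDLPair *)mfx_surface->Data.MemId;

    /* VAAPI child device: pair->first is a VASurfaceID pointer. */
    VASurfaceID  va_surface = *(VASurfaceID *)pair->first;

    /* D3D11 child device: pair->first is the ID3D11Texture2D, and
     * pair->second carries the texture array index. */

So any API user filling in AV_PIX_FMT_QSV frames by hand now has to provide an mfxHDLPair in MemId, which is exactly the silent API change being asked about.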
Re: [FFmpeg-devel] [FFmpeg-cvslog] libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
On 10/12/2021 16:05, Wenbin Chen wrote:
> ffmpeg | branch: master | Wenbin Chen | Tue Dec  7 17:05:50 2021 +0800| [f3c9847c2754b7a43eb721c95e356a53085c2491] | committer: Lynne
>
> libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
>
> Vulkan will map nv12 to R8 and GR88, so add this map to vaapi to support
> vulkan frame.
>
> Signed-off-by: Wenbin Chen
>
> http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=f3c9847c2754b7a43eb721c95e356a53085c2491
> ---
>  libavutil/hwcontext_vaapi.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
> index 75acc851d6..994b744e4d 100644
> --- a/libavutil/hwcontext_vaapi.c
> +++ b/libavutil/hwcontext_vaapi.c
> @@ -992,6 +992,7 @@ static const struct {
>  } vaapi_drm_format_map[] = {
>  #ifdef DRM_FORMAT_R8
>      DRM_MAP(NV12, 2, DRM_FORMAT_R8,  DRM_FORMAT_RG88),
> +    DRM_MAP(NV12, 2, DRM_FORMAT_R8,  DRM_FORMAT_GR88),
>  #endif
>      DRM_MAP(NV12, 1, DRM_FORMAT_NV12),
>  #if defined(VA_FOURCC_P010) && defined(DRM_FORMAT_R16)

This looks very shady. Shouldn't one or the other of these be NV21, with the
second plane VU rather than UV?

- Mark
Re: [FFmpeg-devel] [PATCH] http: make caching of redirect url optional
Hi,

On Mon, Dec 27, 2021 at 4:38 AM Eran Kornblau wrote:

> > -----Original Message-----
> > From: ffmpeg-devel On Behalf Of Ronald S. Bultje
> > Sent: Sunday, 26 December 2021 16:07
> > To: FFmpeg development discussions and patches
> > Subject: Re: [FFmpeg-devel] [PATCH] http: make caching of redirect url optional
> >
> > Hi,
> >
> > (I was asked to respond since I'm listed as HTTP maintainer, not sure I
> > should be since I'm mostly working on video codecs nowadays.)
> >
> > On Tue, Nov 2, 2021 at 9:00 AM Eran Kornblau wrote:
> >
> > > The motivation for this feature is S3 signatures – currently we have a
> > > problem where S3 signatures cannot be created with an expiration of
> > > more than 12H. In some cases, a transcoding task may execute for more
> > > than that.
> > > If we use a pre-signed S3 URL, and ffmpeg disconnects/seeks after the
> > > expiration of the URL, it will fail.
> > >
> > > The solution we are planning is to have some local server on the
> > > machine running ffmpeg that will generate an S3-signature, and
> > > redirect to the full pre-signed URL. For this to work, I need to
> > > disable the caching of redirects, and have ffmpeg always start from
> > > the initial URL.
> > > The nice thing about this solution is that the video data is pulled
> > > directly from S3 – in other words, the local server doesn’t hold any
> > > real load, it just builds the signature and returns a redirect.
> > >
> >
> > Uhm... This is a really weird solution, but it does look right.
> >
> > Generally speaking, we're typically concerned about the default being
> > the right behaviour. I would say that (maybe after some time, at the next
> > ABI break or so), 0 should be the default, not 1. This is the same as what
> > Marton/Hendrik said also, I think, so consider this consensus. I would just
> > do that with the appropriate ABI macros so the default behaviour changes at
> > the next bump.
> >
> Thank you, Ronald!
>
> I attached a new patch with the change you requested, I hope I understood
> your intention correctly...
>

Yes, LGTM. I'll give it a few days to let Marton/Hendrik respond before I push.

Ronald
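For readers unfamiliar with the convention Ronald refers to ("the appropriate ABI macros"), one common shape for flipping an option default at the next major bump is sketched below; the option name and flags here are purely illustrative and not taken from the patch:

    /* Keep the old default (1) until the libavformat major version is bumped,
     * then switch the default to 0. */
    { "cache_redirect_url", "reuse the redirect target URL on reconnect/seek",
      OFFSET(cache_redirect_url), AV_OPT_TYPE_BOOL,
      { .i64 = LIBAVFORMAT_VERSION_MAJOR < 60 ? 1 : 0 }, 0, 1, D | E },

Whether this is expressed as a direct version check or wrapped in an FF_API_* macro in version.h is up to the final patch.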
Re: [FFmpeg-devel] [PATCH] avformat/mov: correct 0 valued entries in stts
On Mon, Dec 27, 2021 at 11:27:10AM +0530, Gyan Doshi wrote:
> As per ISO 14496-12, sample duration of 0 is invalid except for
> the last entry.
>
> In addition, also catch 0 value for sample count.
> ---
>  libavformat/mov.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
>
> diff --git a/libavformat/mov.c b/libavformat/mov.c
> index 2aed6e80ef..fb7406cdd6 100644
> --- a/libavformat/mov.c
> +++ b/libavformat/mov.c
> @@ -2968,6 +2968,18 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
>          av_log(c->fc, AV_LOG_TRACE, "sample_count=%d, sample_duration=%d\n",
>                 sample_count, sample_duration);
>
> +        if (!sample_count) {
> +            av_log(c->fc, AV_LOG_WARNING, "invalid sample count of 0 in stts for st %d at entry %u; changing to 1.\n",
> +                   c->fc->nb_streams-1, i);
> +            sc->stts_data[i].count = sample_count = 1;
> +        }
> +
> +        if (!sample_duration && i != entries-1) {
> +            av_log(c->fc, AV_LOG_WARNING, "invalid sample delta of 0 in stts for st %d at entry %u; changing to 1.\n",
> +                   c->fc->nb_streams-1, i);
> +            sc->stts_data[i].duration = sample_duration = 1;
> +        }
> +
>          duration+=(int64_t)sample_duration*(uint64_t)sample_count;
>          total_sample_count+=sample_count;

This does not produce the same output for
    tickets/2096/m.f4v
    videos/stretch.mov (2344 matches for "invalid" after this patch)
    tickets/976/CodecCopyFailing.mp4

But there are many more, some maybe even generated by FFmpeg.

Taking a step back, the problem started with
203b0e3561dea1ec459be226d805abe73e7535e5
which broke a real world file which was outside the specification

you then suggested a fix which crashed with some fuzzed files which
where outside the specification

and now this fix on top which changes real world files which
are outside the specification

I think, maybe you should consider the "outside the specification"
more. The code above directly and intentionally changes values.
So as a reviewer i have to ask the obvious, is that change a
bugfix or a bug ?

The change refers to the specification, but the specification will not help me
when it comes to how to handle all the weird and wonderful files that exist
out there ...

thx

PS: also if you want to write fate tests for some of the odd files we find in
the process here, this may be a good idea and might simplify future work

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Complexity theory is the science of finding the exact solution to an
approximation. Benchmarking OTOH is finding an approximation of the exact.
Re: [FFmpeg-devel] [PATCH] avformat/mov: correct 0 valued entries in stts
On 2021-12-28 12:38 am, Michael Niedermayer wrote:
> On Mon, Dec 27, 2021 at 11:27:10AM +0530, Gyan Doshi wrote:
>> As per ISO 14496-12, sample duration of 0 is invalid except for
>> the last entry.
>>
>> In addition, also catch 0 value for sample count.
>> ---
>>  libavformat/mov.c | 12 ++++++++++++
>>  1 file changed, 12 insertions(+)
>>
>> diff --git a/libavformat/mov.c b/libavformat/mov.c
>> index 2aed6e80ef..fb7406cdd6 100644
>> --- a/libavformat/mov.c
>> +++ b/libavformat/mov.c
>> @@ -2968,6 +2968,18 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
>>          av_log(c->fc, AV_LOG_TRACE, "sample_count=%d, sample_duration=%d\n",
>>                 sample_count, sample_duration);
>>
>> +        if (!sample_count) {
>> +            av_log(c->fc, AV_LOG_WARNING, "invalid sample count of 0 in stts for st %d at entry %u; changing to 1.\n",
>> +                   c->fc->nb_streams-1, i);
>> +            sc->stts_data[i].count = sample_count = 1;
>> +        }
>> +
>> +        if (!sample_duration && i != entries-1) {
>> +            av_log(c->fc, AV_LOG_WARNING, "invalid sample delta of 0 in stts for st %d at entry %u; changing to 1.\n",
>> +                   c->fc->nb_streams-1, i);
>> +            sc->stts_data[i].duration = sample_duration = 1;
>> +        }
>> +
>>          duration+=(int64_t)sample_duration*(uint64_t)sample_count;
>>          total_sample_count+=sample_count;
>
> This does not produce the same output for
>     tickets/2096/m.f4v
>     videos/stretch.mov (2344 matches for "invalid" after this patch)
>     tickets/976/CodecCopyFailing.mp4
>
> But there are many more, some maybe even generated by FFmpeg.

Where do I find these files?

> Taking a step back, the problem started with
> 203b0e3561dea1ec459be226d805abe73e7535e5
> which broke a real world file which was outside the specification

Just to clarify, it did not break that file. That file uses stts in an unusual
way.
Before 2015, lavf exported packets with the same desync as the other demuxers
do so till today.
Andreas' patch added a hack to make it play in sync. My patch 203b0e356 broke
that hack.
The patch for max_stts_delta is a way to restore it back.

> you then suggested a fix which crashed with some fuzzed files which
> where outside the specification
>
> and now this fix on top which changes real world files which
> are outside the specification
>
> I think, maybe you should consider the "outside the specification"
> more. The code above directly and intentionally changes values.
> So as a reviewer i have to ask the obvious, is that change a
> bugfix or a bug ?

Not surprising that the output of out-of-spec files is different - that's
expected, intended and trivial.
It would be a bug if in-spec files were treated differently. FATE passes.

If there's a specific / "correct" playback for these files like sync issues,
I'll see if I can restore it.

Regards,
Gyan
Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug for mapping qsv frame to opencl
> -----Original Message-----
> From: ffmpeg-devel On Behalf Of Mark Thompson
> Sent: Monday, December 27, 2021 7:51 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
>
> On 16/11/2021 08:16, Wenbin Chen wrote:
> > From: nyanmisaka
> >
> > mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> > Now the following commandline works:
> >
> > ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> > -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
> > -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> > -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> > hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> > -c:v h264_qsv output.264
> >
> > Signed-off-by: nyanmisaka
> > Signed-off-by: Wenbin Chen
> > ---
> >  libavutil/hwcontext_opencl.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> > index 26a3a24593..4b6e74ff6f 100644
> > --- a/libavutil/hwcontext_opencl.c
> > +++ b/libavutil/hwcontext_opencl.c
> > @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
> >  #if CONFIG_LIBMFX
> >      if (src->format == AV_PIX_FMT_QSV) {
> >          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> > -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> > +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> > +        va_surface = *(VASurfaceID*)pair->first;
> >      } else
> >  #endif
> >      if (src->format == AV_PIX_FMT_VAAPI) {
>
> Since these frames can be user-supplied, this implies that the user-facing
> API/ABI for AV_PIX_FMT_QSV has changed.
>
> It looks like this was broken by using HDLPairs when D3D11 was introduced,
> which silently changed the existing API for DXVA2 and VAAPI as well.
>
> Could someone related to that please document it properly (clearly not all
> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
> the API change happened?

Hi Mark,

QSV contexts always need to be backed by a child context, which can be DXVA2,
D3D11VA or VAAPI. You can create a QSV context either by deriving from one of
those contexts or, when you create a new QSV context, it automatically creates
an appropriate child context - either implicitly (auto mode) or explicitly,
like the ffmpeg implementation does in most cases.

When working with "user-supplied" frames on Linux, you need to create a VAAPI
context with those frames and derive a QSV context from that context.

There is no way to create or supply QSV frames directly.

Looking at the code:

> *mfx_surface = (mfxFrameSurface1*)src->data[3];

A QSV frames context is using the mfxFrameSurface1 structure for describing the
individual frames, and mfxFrameSurface1 can only come from the MSDK runtime; it
cannot be user-supplied.

I don't think that there's something that needs to be documented because
whatever user-side manipulation an API consumer would want to perform, it would
always need to derive the context either from QSV to D3D/VAAPI or from D3D to
VAAPI in order to access and manipulate individual frames.

Kind regards,
softworkz
Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug for mapping qsv frame to opencl
On 27/12/2021 20:31, Soft Works wrote:
>> -----Original Message-----
>> From: ffmpeg-devel On Behalf Of Mark Thompson
>> Sent: Monday, December 27, 2021 7:51 PM
>> To: ffmpeg-devel@ffmpeg.org
>> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
>> for mapping qsv frame to opencl
>>
>> On 16/11/2021 08:16, Wenbin Chen wrote:
>>> From: nyanmisaka
>>>
>>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
>>> Now the following commandline works:
>>>
>>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
>>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
>>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
>>> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
>>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
>>> -c:v h264_qsv output.264
>>>
>>> Signed-off-by: nyanmisaka
>>> Signed-off-by: Wenbin Chen
>>> ---
>>>  libavutil/hwcontext_opencl.c | 3 ++-
>>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
>>> index 26a3a24593..4b6e74ff6f 100644
>>> --- a/libavutil/hwcontext_opencl.c
>>> +++ b/libavutil/hwcontext_opencl.c
>>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
>>>  #if CONFIG_LIBMFX
>>>      if (src->format == AV_PIX_FMT_QSV) {
>>>          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
>>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
>>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
>>> +        va_surface = *(VASurfaceID*)pair->first;
>>>      } else
>>>  #endif
>>>      if (src->format == AV_PIX_FMT_VAAPI) {
>>
>> Since these frames can be user-supplied, this implies that the user-facing
>> API/ABI for AV_PIX_FMT_QSV has changed.
>>
>> It looks like this was broken by using HDLPairs when D3D11 was introduced,
>> which silently changed the existing API for DXVA2 and VAAPI as well.
>>
>> Could someone related to that please document it properly (clearly not all
>> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
>> the API change happened?
>
> Hi Mark,
>
> QSV contexts always need to be backed by a child context, which can be DXVA2,
> D3D11VA or VAAPI. You can create a QSV context either by deriving from one of
> those contexts or, when you create a new QSV context, it automatically creates
> an appropriate child context - either implicitly (auto mode) or explicitly,
> like the ffmpeg implementation does in most cases.

... or by using the one the user supplies when they create it.

> When working with "user-supplied" frames on Linux, you need to create a VAAPI
> context with those frames and derive a QSV context from that context.
>
> There is no way to create or supply QSV frames directly.

???  The ability for the user to set up their own version of these things is
literally the whole point of the split alloc/init API.


// Some user stuff involving libmfx - has a D3D or VAAPI backing, but this code doesn't need to care about it.
// It has a session and creates some surfaces to use with MemId filled compatible with ffmpeg.

user_session = ...;
user_surfaces = ...;

// No ffmpeg involved before this, now we want to pass these surfaces we've got into ffmpeg.

// Create a device context using the existing session.

mfx_ctx = av_hwdevice_ctx_alloc(MFX);

dc = mfx_ctx->data;
mfx_dc = dc->hwctx;
mfx_dc->session = user_session;

av_hwdevice_ctx_init(mfx_ctx);

// Create a frames context out of the surfaces we've got.

mfx_frames = av_hwframes_ctx_alloc(mfx_ctx);

fc = mfx_frames->data;
fc.pool = user_surfaces.allocator;
fc.width = user_surfaces.width;
// etc.

mfx_fc = fc->hwctx;
mfx_fc.surfaces = user_surfaces.array;
mfx_fc.nb_surfaces = user_surfaces.count;
mfx_fc.frame_type = user_surfaces.memtype;

av_hwframe_ctx_init(frames);

// Do stuff with frames.


> Looking at the code:
>
>> *mfx_surface = (mfxFrameSurface1*)src->data[3];
>
> A QSV frames context is using the mfxFrameSurface1 structure for describing the
> individual frames, and mfxFrameSurface1 can only come from the MSDK runtime; it
> cannot be user-supplied.
>
> I don't think that there's something that needs to be documented because
> whatever user-side manipulation an API consumer would want to perform, it would
> always need to derive the context either from QSV to D3D/VAAPI or from D3D to
> VAAPI in order to access and manipulate individual frames.

- Mark
Re: [FFmpeg-devel] [PATCH] avformat/mov: correct 0 valued entries in stts
On Tue, Dec 28, 2021 at 01:33:54AM +0530, Gyan Doshi wrote:
>
> On 2021-12-28 12:38 am, Michael Niedermayer wrote:
> > On Mon, Dec 27, 2021 at 11:27:10AM +0530, Gyan Doshi wrote:
> > > As per ISO 14496-12, sample duration of 0 is invalid except for
> > > the last entry.
> > >
> > > In addition, also catch 0 value for sample count.
> > > ---
> > >  libavformat/mov.c | 12 ++++++++++++
> > >  1 file changed, 12 insertions(+)
> > >
> > > diff --git a/libavformat/mov.c b/libavformat/mov.c
> > > index 2aed6e80ef..fb7406cdd6 100644
> > > --- a/libavformat/mov.c
> > > +++ b/libavformat/mov.c
> > > @@ -2968,6 +2968,18 @@ static int mov_read_stts(MOVContext *c, AVIOContext *pb, MOVAtom atom)
> > >          av_log(c->fc, AV_LOG_TRACE, "sample_count=%d, sample_duration=%d\n",
> > >                 sample_count, sample_duration);
> > >
> > > +        if (!sample_count) {
> > > +            av_log(c->fc, AV_LOG_WARNING, "invalid sample count of 0 in stts for st %d at entry %u; changing to 1.\n",
> > > +                   c->fc->nb_streams-1, i);
> > > +            sc->stts_data[i].count = sample_count = 1;
> > > +        }
> > > +
> > > +        if (!sample_duration && i != entries-1) {
> > > +            av_log(c->fc, AV_LOG_WARNING, "invalid sample delta of 0 in stts for st %d at entry %u; changing to 1.\n",
> > > +                   c->fc->nb_streams-1, i);
> > > +            sc->stts_data[i].duration = sample_duration = 1;
> > > +        }
> > > +
> > >          duration+=(int64_t)sample_duration*(uint64_t)sample_count;
> > >          total_sample_count+=sample_count;
> >
> > This does not produce the same output for
> >     tickets/2096/m.f4v
> >     videos/stretch.mov (2344 matches for "invalid" after this patch)
> >     tickets/976/CodecCopyFailing.mp4
> >
> > But there are many more, some maybe even generated by FFmpeg.
>
> Where do I find these files?

https://samples.ffmpeg.org/ffmpeg-bugs/trac/ticket976/CodecCopyFailing.mp4
https://samples.ffmpeg.org/ffmpeg-bugs/trac/ticket2096/m.f4v

i failed to find the 3rd online

> > Taking a step back, the problem started with
> > 203b0e3561dea1ec459be226d805abe73e7535e5
> > which broke a real world file which was outside the specification
>
> Just to clarify, it did not break that file. That file uses stts in an
> unusual way.
> Before 2015, lavf exported packets with the same desync as the other
> demuxers do so till today.
> Andreas' patch added a hack to make it play in sync. My patch 203b0e356
> broke that hack.
> The patch for max_stts_delta is a way to restore it back.
>
> > you then suggested a fix which crashed with some fuzzed files which
> > where outside the specification
> >
> > and now this fix on top which changes real world files which
> > are outside the specification
> >
> > I think, maybe you should consider the "outside the specification"
> > more. The code above directly and intentionally changes values.
> > So as a reviewer i have to ask the obvious, is that change a
> > bugfix or a bug ?
>
> Not surprising that the output of out-of-spec files is different - that's
> expected, intended and trivial.
> It would be a bug if in-spec files were treated differently. FATE passes.
>
> If there's a specific / "correct" playback for these files like sync issues,
> I'll see if I can restore it.

First we need to find cases that broke. I certainly will not find every one.
If a patch is written with the goal "dont break any file" it would be easy.
But you said that changes are "expected, intended", so then my question as
reviewer would be: what about these expected changes? Did it change any real
files' output? Did it fix a bug? What is the idea behind the change?

please correct me if i am wrong, but here it seems you dont care what happens
with changed files unless someone else finds such a file and reports it
(if its not in spec).
And the idea seems to be that 0 is inconvenient so you change it to 1.
Its not that 0 could fundamentally not be intended to mean 0.

We are before a release and id like to fix the regression. ATM objectively the
only option i have is reverting 203b0e3561dea1ec459be226d805abe73e7535e5.
Can you provide another option? Something that fixes the regression without
breaking something else?

PS: if you need random testfiles for testing arbitrary changes,
samples.ffmpeg.org should have a lot and is also accessible with rsync, so you
dont need to wait and hope someone will spot an issue.

thanks

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

You can kill me, but you cannot change the truth.
Re: [FFmpeg-devel] 5.0 release
On Wed, Dec 22, 2021 at 05:44:42PM +0100, Jean-Baptiste Kempf wrote:
> On Wed, 22 Dec 2021, at 15:05, James Almer wrote:
> > Is the December target to get into the feature freeze schedule from
> > distros?
>
> No, it was set by me, in order to get the distro freezes from January.
>
> We can miss the target a bit this year, and then make it better for 2022.

as you seem to know the distro freeze schedules, can you clarify "a bit"?
iam asking just in case the channel patch doesnt make it before, so i know
when its time to stop waiting for it

thx

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

The smallest minority on earth is the individual. Those who deny
individual rights cannot claim to be defenders of minorities. - Ayn Rand
Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug for mapping qsv frame to opencl
> -----Original Message-----
> From: ffmpeg-devel On Behalf Of Mark Thompson
> Sent: Tuesday, December 28, 2021 12:46 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> for mapping qsv frame to opencl
>
> On 27/12/2021 20:31, Soft Works wrote:
> >> -----Original Message-----
> >> From: ffmpeg-devel On Behalf Of Mark Thompson
> >> Sent: Monday, December 27, 2021 7:51 PM
> >> To: ffmpeg-devel@ffmpeg.org
> >> Subject: Re: [FFmpeg-devel] [PATCH 3/3] libavutil/hwcontext_opencl: fix a bug
> >> for mapping qsv frame to opencl
> >>
> >> On 16/11/2021 08:16, Wenbin Chen wrote:
> >>> From: nyanmisaka
> >>>
> >>> mfxHDLPair was added to qsv, so modify qsv->opencl map function as well.
> >>> Now the following commandline works:
> >>>
> >>> ffmpeg -v verbose -init_hw_device vaapi=va:/dev/dri/renderD128 \
> >>> -init_hw_device qsv=qs@va -init_hw_device opencl=ocl@va -filter_hw_device ocl \
> >>> -hwaccel qsv -hwaccel_output_format qsv -hwaccel_device qs -c:v h264_qsv \
> >>> -i input.264 -vf "hwmap=derive_device=opencl,format=opencl,avgblur_opencl, \
> >>> hwmap=derive_device=qsv:reverse=1:extra_hw_frames=32,format=qsv" \
> >>> -c:v h264_qsv output.264
> >>>
> >>> Signed-off-by: nyanmisaka
> >>> Signed-off-by: Wenbin Chen
> >>> ---
> >>>  libavutil/hwcontext_opencl.c | 3 ++-
> >>>  1 file changed, 2 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/libavutil/hwcontext_opencl.c b/libavutil/hwcontext_opencl.c
> >>> index 26a3a24593..4b6e74ff6f 100644
> >>> --- a/libavutil/hwcontext_opencl.c
> >>> +++ b/libavutil/hwcontext_opencl.c
> >>> @@ -2249,7 +2249,8 @@ static int opencl_map_from_qsv(AVHWFramesContext *dst_fc, AVFrame *dst,
> >>>  #if CONFIG_LIBMFX
> >>>      if (src->format == AV_PIX_FMT_QSV) {
> >>>          mfxFrameSurface1 *mfx_surface = (mfxFrameSurface1*)src->data[3];
> >>> -        va_surface = *(VASurfaceID*)mfx_surface->Data.MemId;
> >>> +        mfxHDLPair *pair = (mfxHDLPair*)mfx_surface->Data.MemId;
> >>> +        va_surface = *(VASurfaceID*)pair->first;
> >>>      } else
> >>>  #endif
> >>>      if (src->format == AV_PIX_FMT_VAAPI) {
> >>
> >> Since these frames can be user-supplied, this implies that the user-facing
> >> API/ABI for AV_PIX_FMT_QSV has changed.
> >>
> >> It looks like this was broken by using HDLPairs when D3D11 was introduced,
> >> which silently changed the existing API for DXVA2 and VAAPI as well.
> >>
> >> Could someone related to that please document it properly (clearly not all
> >> possible valid mfxFrameSurface1s are allowed), and note in APIchanges when
> >> the API change happened?
> >
> > Hi Mark,
> >
> > QSV contexts always need to be backed by a child context, which can be DXVA2,
> > D3D11VA or VAAPI. You can create a QSV context either by deriving from one of
> > those contexts or, when you create a new QSV context, it automatically creates
> > an appropriate child context - either implicitly (auto mode) or explicitly,
> > like the ffmpeg implementation does in most cases.
>
> ... or by using the one the user supplies when they create it.
>
> > When working with "user-supplied" frames on Linux, you need to create a VAAPI
> > context with those frames and derive a QSV context from that context.
> >
> > There is no way to create or supply QSV frames directly.
>
> ???  The ability for the user to set up their own version of these things is
> literally the whole point of the split alloc/init API.
>
>
> // Some user stuff involving libmfx - has a D3D or VAAPI backing, but this code doesn't need to care about it.
> // It has a session and creates some surfaces to use with MemId filled compatible with ffmpeg.
>
> user_session = ...;
> user_surfaces = ...;
>
> // No ffmpeg involved before this, now we want to pass these surfaces we've got into ffmpeg.
>
> // Create a device context using the existing session.
>
> mfx_ctx = av_hwdevice_ctx_alloc(MFX);
>
> dc = mfx_ctx->data;
> mfx_dc = dc->hwctx;
> mfx_dc->session = user_session;
>
> av_hwdevice_ctx_init(mfx_ctx);
>
> // Create a frames context out of the surfaces we've got.
>
> mfx_frames = av_hwframes_ctx_alloc(mfx_ctx);
>
> fc = mfx_frames->data;
> fc.pool = user_surfaces.allocator;
> fc.width = user_surfaces.width;
> // etc.
>
> mfx_fc = fc->hwctx;
> mfx_fc.surfaces = user_surfaces.array;
> mfx_fc.nb_surfaces = user_surfaces.count;
> mfx_fc.frame_type = user_surfaces.memtype;
>
> av_hwframe_ctx_init(frames);
>
> // Do stuff with frames.

I wouldn't consider an mfxSession as an entity that could or should be shared
between implementations. IMO, this is not a valid use case.

A consumer of the mfx API needs to make certain choices regarding the usage of
the API, one of which is the way frames are allocated and managed. This is not
something that is meant to be shared between implementations.

Even inside ffmpeg, we don't use a single mfx session. We use separate sessions
for d
Re: [FFmpeg-devel] [PATCH v2 1/2] avcodec/libx26[45]: add udu_sei option to import user data unregistered SEIs
On Sat, Dec 25, 2021 at 10:46:52PM +0800, lance.lmw...@gmail.com wrote:
> From: Limin Wang
>
> Most of the user data unregistered SEIs are private data which are defined by
> the user/encoder. Currently, the user data unregistered SEIs found in the input
> are forwarded as side-data to encoders directly, which causes the reencoded
> output to include some useless UDU SEIs.
>
> I prefer to add one option to enable/disable it and default is off after I saw
> the patch by Andreas Rheinhardt:
>
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/am7pr03mb66607c2db65e1ad49d975cf18f...@am7pr03mb6660.eurprd03.prod.outlook.com/
>
> How to test by cli:
> ffmpeg -y -f lavfi -i testsrc -c:v libx264 -frames:v 1 a.ts
> ffmpeg -y -i a.ts -c:v libx264 -udu_sei 1 b.ts
> ffmpeg -y -i a.ts -c:v libx264 -udu_sei 0 c.ts
>
> # check the user data unregistered SEIs, you'll see two UDU SEIs for b.ts.
> # and mediainfo will show with wrong encoding setting info
> ffmpeg -i b.ts -vf showinfo -f null -
> ffmpeg -i c.ts -vf showinfo -f null -
>
> This fixes tickets #9500 and #9557.
> ---
>  doc/encoders.texi    | 6 ++++++
>  libavcodec/libx264.c | 5 ++++-
>  libavcodec/libx265.c | 4 ++++
>  libavcodec/version.h | 2 +-
>  4 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/doc/encoders.texi b/doc/encoders.texi
> index 8a7589c..e3b61de 100644
> --- a/doc/encoders.texi
> +++ b/doc/encoders.texi
> @@ -2660,6 +2660,9 @@ ffmpeg -i foo.mpg -c:v libx264 -x264opts keyint=123:min-keyint=20 -an out.mkv
>  Import closed captions (which must be ATSC compatible format) into output.
>  Only the mpeg2 and h264 decoders provide these. Default is 1 (on).
>
> +@item udu_sei @var{boolean}
> +Import user data unregistered SEI if available into output. Default is 0 (off).
> +
>  @item x264-params (N.A.)
>  Override the x264 configuration using a :-separated list of key=value
>  parameters.
> @@ -2741,6 +2744,9 @@ Quantizer curve compression factor
>  Normally, when forcing a I-frame type, the encoder can select any type
>  of I-frame. This option forces it to choose an IDR-frame.
>
> +@item udu_sei @var{boolean}
> +Import user data unregistered SEI if available into output. Default is 0 (off).
> +
>  @item x265-params
>  Set x265 options using a list of @var{key}=@var{value} couples separated
>  by ":". See @command{x265 --help} for a list of options.
> diff --git a/libavcodec/libx264.c b/libavcodec/libx264.c
> index 2b680ab..9836818 100644
> --- a/libavcodec/libx264.c
> +++ b/libavcodec/libx264.c
> @@ -104,6 +104,7 @@ typedef struct X264Context {
>      int chroma_offset;
>      int scenechange_threshold;
>      int noise_reduction;
> +    int udu_sei;
>
>      AVDictionary *x264_params;
>
> @@ -464,6 +465,7 @@ static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, const AVFrame *frame,
>          }
>      }
>
> +    if (x4->udu_sei) {
>      for (int j = 0; j < frame->nb_side_data; j++) {
>          AVFrameSideData *side_data = frame->side_data[j];
>          void *tmp;
> @@ -487,6 +489,7 @@ static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, const AVFrame *frame,
>          sei_payload->payload_type = SEI_TYPE_USER_DATA_UNREGISTERED;
>          sei->num_payloads++;
>      }
> +    }
>      }
>
>      do {
> @@ -1168,7 +1171,7 @@ static const AVOption options[] = {
>      { "chromaoffset", "QP difference between chroma and luma", OFFSET(chroma_offset), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, VE },
>      { "sc_threshold", "Scene change threshold", OFFSET(scenechange_threshold), AV_OPT_TYPE_INT, { .i64 = -1 }, INT_MIN, INT_MAX, VE },
>      { "noise_reduction", "Noise reduction", OFFSET(noise_reduction), AV_OPT_TYPE_INT, { .i64 = -1 }, INT_MIN, INT_MAX, VE },
> -
> +    { "udu_sei", "Use user data unregistered SEI if available", OFFSET(udu_sei), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
>      { "x264-params", "Override the x264 configuration using a :-separated list of key=value parameters", OFFSET(x264_params), AV_OPT_TYPE_DICT, { 0 }, 0, 0, VE },
>      { NULL },
>  };
> diff --git a/libavcodec/libx265.c b/libavcodec/libx265.c
> index 7dd70a3..47d0103 100644
> --- a/libavcodec/libx265.c
> +++ b/libavcodec/libx265.c
> @@ -54,6 +54,7 @@ typedef struct libx265Context {
>
>      void *sei_data;
>      int sei_data_size;
> +    int udu_sei;
>
>      /**
>       * If the encoder does not support ROI then warn the first time we
> @@ -543,6 +544,7 @@ static int libx265_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
>          memcpy(x265pic.userData, &pic->reordered_opaque,
>                 sizeof(pic->reordered_opaque));
>      }
>
> +    if (ctx->udu_sei) {
>      for (i = 0; i < pic->nb_side_data; i++) {
>          AVFrameSideData *side_data = pic->side_data[i];
>          void *tmp;
> @@ -568,6 +570,7 @@ static int libx265_encode_
Re: [FFmpeg-devel] [PATCH 1/3] libavcodec/vaapi_encode: Change the way to call async to increase performance
> On 27/10/2021 09:57, Wenbin Chen wrote:
> > Fix: #7706. After commit 5fdcf85bbffe7451c2, the vaapi encoder's performance
> > decreased. The reason is that vaRenderPicture() and vaSyncSurface() are
> > called at the same time (vaRenderPicture() is always followed by a
> > vaSyncSurface()). When we encode a stream with B frames, we need a buffer to
> > reorder frames, so we can send several frames to HW at once to increase
> > performance. Now I changed them to be called in an asynchronous way, which
> > will make better use of hardware.
> > 1080p transcoding increases about 17% fps on my environment.
> >
> > Signed-off-by: Wenbin Chen
> > ---
> >  libavcodec/vaapi_encode.c | 41 ++++++++++++++++++++++++++++++-----------
> >  libavcodec/vaapi_encode.h |  3 +++
> >  2 files changed, 33 insertions(+), 11 deletions(-)
>
> The API does not allow this behaviour.
>
> For some bizarre reason (I think a badly-written example combined with the
> Intel driver being synchronous in vaEndPicture() for a long time), the sync to
> a surface is to the /input/ surface of an encode rather than the output
> surface.
>
> That means you can't have multiple encodes outstanding on the same surface
> and expect to sync usefully, because the only argument to vaSyncSurface() is
> the surface to sync to, without anything about the associated context.
>
> Therefore trying to make it asynchronous like this falls down when input
> surfaces might appear multiple times, or might be used in the input of
> multiple encoders, because you can't tell whether your sync means the thing
> you actually wanted to finish has finished.
>
> (The commit you point to above as having decreased performance fixed this
> bug, since it became much more visible with decoupled send/receive.)
>
> So: put this change after the switch to syncing on output buffers (since that
> operation does make sense for this), and leave the existing behaviour for
> cases where you have to sync on the input surface.
>
> - Mark

Thanks for your advice. It makes sense to me. I will update the patches.

Best Regards,
Wenbin
Re: [FFmpeg-devel] [PATCH v2 1/2] avcodec/libx26[45]: add udu_sei option to import user data unregistered SEIs
> On Dec 28, 2021, at 9:29 AM, lance.lmw...@gmail.com wrote:
>
> On Sat, Dec 25, 2021 at 10:46:52PM +0800, lance.lmw...@gmail.com wrote:
>> From: Limin Wang
>>
>> Most user data unregistered SEIs are private data defined by the
>> user/encoder. Currently, the user data unregistered SEIs found in the
>> input are forwarded as side data to the encoders directly, which causes
>> the re-encoded output to include some useless UDU SEIs.
>>
>> I prefer to add an option to enable/disable it, with the default off,
>> after seeing the patch by Andreas Rheinhardt:
>>
>> https://patchwork.ffmpeg.org/project/ffmpeg/patch/am7pr03mb66607c2db65e1ad49d975cf18f...@am7pr03mb6660.eurprd03.prod.outlook.com/
>>
>> How to test by cli:
>> ffmpeg -y -f lavfi -i testsrc -c:v libx264 -frames:v 1 a.ts
>> ffmpeg -y -i a.ts -c:v libx264 -udu_sei 1 b.ts
>> ffmpeg -y -i a.ts -c:v libx264 -udu_sei 0 c.ts
>>
>> # check the user data unregistered SEIs; you'll see two UDU SEIs for b.ts,
>> # and mediainfo will show the wrong encoding settings info
>> ffmpeg -i b.ts -vf showinfo -f null -
>> ffmpeg -i c.ts -vf showinfo -f null -
>>
>> This fixes tickets #9500 and #9557.
>> ---
>> doc/encoders.texi    | 6 ++
>> libavcodec/libx264.c | 5 -
>> libavcodec/libx265.c | 4
>> libavcodec/version.h | 2 +-
>> 4 files changed, 15 insertions(+), 2 deletions(-)
>>
>> diff --git a/doc/encoders.texi b/doc/encoders.texi
>> index 8a7589c..e3b61de 100644
>> --- a/doc/encoders.texi
>> +++ b/doc/encoders.texi
>> @@ -2660,6 +2660,9 @@ ffmpeg -i foo.mpg -c:v libx264 -x264opts keyint=123:min-keyint=20 -an out.mkv
>> Import closed captions (which must be ATSC compatible format) into output.
>> Only the mpeg2 and h264 decoders provide these. Default is 1 (on).
>>
>> +@item udu_sei @var{boolean}
>> +Import user data unregistered SEI if available into output. Default is 0 (off).
>> +
>> @item x264-params (N.A.)
>> Override the x264 configuration using a :-separated list of key=value
>> parameters.
>> @@ -2741,6 +2744,9 @@ Quantizer curve compression factor
>> Normally, when forcing a I-frame type, the encoder can select any type
>> of I-frame. This option forces it to choose an IDR-frame.
>>
>> +@item udu_sei @var{boolean}
>> +Import user data unregistered SEI if available into output. Default is 0 (off).
>> +
>> @item x265-params
>> Set x265 options using a list of @var{key}=@var{value} couples separated
>> by ":". See @command{x265 --help} for a list of options.
>> diff --git a/libavcodec/libx264.c b/libavcodec/libx264.c
>> index 2b680ab..9836818 100644
>> --- a/libavcodec/libx264.c
>> +++ b/libavcodec/libx264.c
>> @@ -104,6 +104,7 @@ typedef struct X264Context {
>> int chroma_offset;
>> int scenechange_threshold;
>> int noise_reduction;
>> +int udu_sei;
>>
>> AVDictionary *x264_params;
>>
>> @@ -464,6 +465,7 @@ static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, const AVFrame *frame,
>> }
>> }
>>
>> +if (x4->udu_sei) {
>> for (int j = 0; j < frame->nb_side_data; j++) {
>> AVFrameSideData *side_data = frame->side_data[j];
>> void *tmp;
>> @@ -487,6 +489,7 @@ static int X264_frame(AVCodecContext *ctx, AVPacket *pkt, const AVFrame *frame,
>> sei_payload->payload_type = SEI_TYPE_USER_DATA_UNREGISTERED;
>> sei->num_payloads++;
>> }
>> +}
>> }
>>
>> do {
>> @@ -1168,7 +1171,7 @@ static const AVOption options[] = {
>> { "chromaoffset", "QP difference between chroma and luma", OFFSET(chroma_offset), AV_OPT_TYPE_INT, { .i64 = 0 }, INT_MIN, INT_MAX, VE },
>> { "sc_threshold", "Scene change threshold", OFFSET(scenechange_threshold), AV_OPT_TYPE_INT, { .i64 = -1 }, INT_MIN, INT_MAX, VE },
>> { "noise_reduction", "Noise reduction", OFFSET(noise_reduction), AV_OPT_TYPE_INT, { .i64 = -1 }, INT_MIN, INT_MAX, VE },
>> -
>> +{ "udu_sei", "Use user data unregistered SEI if available", OFFSET(udu_sei), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
>> { "x264-params", "Override the x264 configuration using a :-separated list of key=value parameters", OFFSET(x264_params), AV_OPT_TYPE_DICT, { 0 }, 0, 0, VE },
>> { NULL },
>> };
>> diff --git a/libavcodec/libx265.c b/libavcodec/libx265.c
>> index 7dd70a3..47d0103 100644
>> --- a/libavcodec/libx265.c
>> +++ b/libavcodec/libx265.c
>> @@ -54,6 +54,7 @@ typedef struct libx265Context {
>>
>> void *sei_data;
>> int sei_data_size;
>> +int udu_sei;
>>
>> /**
>> * If the encoder does not support ROI then warn the first time we
>> @@ -543,6 +544,7 @@ static int libx265_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
>> memcpy(x265pic.userData, &pic->reordered_opaque,
>> sizeof(pic->reordered_opaque));
>> }
>>
>> +if (ctx->udu_sei) {
>> for (i = 0; i < pic->nb_side_da
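For illustration, a minimal sketch of how an application could attach a user data unregistered SEI as frame side data so that an encoder opened with udu_sei=1 will emit it. The UUID and payload text are made up, and error paths are trimmed:

```c
#include <string.h>
#include <stdint.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>

/* Payload layout for AV_FRAME_DATA_SEI_UNREGISTERED: 16-byte UUID
 * followed by the user data. */
static int attach_udu_sei(AVFrame *frame)
{
    static const uint8_t my_uuid[16] = {
        0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
        0xde, 0xad, 0xbe, 0xef, 0xde, 0xad, 0xbe, 0xef,
    };
    const char *my_text = "example user data";
    AVFrameSideData *sd = av_frame_new_side_data(frame,
                                                 AV_FRAME_DATA_SEI_UNREGISTERED,
                                                 sizeof(my_uuid) + strlen(my_text));
    if (!sd)
        return AVERROR(ENOMEM);
    memcpy(sd->data, my_uuid, sizeof(my_uuid));
    memcpy(sd->data + sizeof(my_uuid), my_text, strlen(my_text));
    return 0;
}
```

The encoder would then be opened with the option enabled, e.g. av_opt_set(enc_ctx->priv_data, "udu_sei", "1", 0) in code or -udu_sei 1 on the command line, matching the test commands in the commit message.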
Re: [FFmpeg-devel] [FFmpeg-cvslog] libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
> On 10/12/2021 16:05, Wenbin Chen wrote:
> > ffmpeg | branch: master | Wenbin Chen | Tue Dec 7 17:05:50 2021 +0800 | [f3c9847c2754b7a43eb721c95e356a53085c2491] | committer: Lynne
> >
> > libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
> >
> > Vulkan will map nv12 to R8 and GR88, so add this map to vaapi to support
> > vulkan frames.
> >
> > Signed-off-by: Wenbin Chen
> >
> > http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=f3c9847c2754b7a43eb721c95e356a53085c2491
> > ---
> > libavutil/hwcontext_vaapi.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
> > index 75acc851d6..994b744e4d 100644
> > --- a/libavutil/hwcontext_vaapi.c
> > +++ b/libavutil/hwcontext_vaapi.c
> > @@ -992,6 +992,7 @@ static const struct {
> > } vaapi_drm_format_map[] = {
> > #ifdef DRM_FORMAT_R8
> > DRM_MAP(NV12, 2, DRM_FORMAT_R8, DRM_FORMAT_RG88),
> > +DRM_MAP(NV12, 2, DRM_FORMAT_R8, DRM_FORMAT_GR88),
> > #endif
> > DRM_MAP(NV12, 1, DRM_FORMAT_NV12),
> > #if defined(VA_FOURCC_P010) && defined(DRM_FORMAT_R16)
>
> This looks very shady. Shouldn't one or the other of these be NV21, with the
> second plane VU rather than UV?
>
> - Mark

I added this because I saw that Vulkan maps RG88 and GR88 to the same format:
```
{ DRM_FORMAT_GR88, VK_FORMAT_R8G8_UNORM },
{ DRM_FORMAT_RG88, VK_FORMAT_R8G8_UNORM },
```
I think you are right. One of them should be NV21. I should switch the
positions of GR88 and RG88 in this map table so that VK_FORMAT_R8G8_UNORM
is mapped to DRM_FORMAT_RG88 rather than DRM_FORMAT_GR88.
Re: [FFmpeg-devel] [FFmpeg-cvslog] libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
On Tue, 2021-12-28 at 03:57 +0000, Chen, Wenbin wrote:
> > On 10/12/2021 16:05, Wenbin Chen wrote:
> > > ffmpeg | branch: master | Wenbin Chen | Tue Dec 7 17:05:50 2021 +0800 | [f3c9847c2754b7a43eb721c95e356a53085c2491] | committer: Lynne
> > >
> > > libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
> > >
> > > Vulkan will map nv12 to R8 and GR88, so add this map to vaapi to support
> > > vulkan frames.
> > >
> > > Signed-off-by: Wenbin Chen
> > >
> > > http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=f3c9847c2754b7a43eb721c95e356a53085c2491
> > > ---
> > > libavutil/hwcontext_vaapi.c | 1 +
> > > 1 file changed, 1 insertion(+)
> > >
> > > diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c
> > > index 75acc851d6..994b744e4d 100644
> > > --- a/libavutil/hwcontext_vaapi.c
> > > +++ b/libavutil/hwcontext_vaapi.c
> > > @@ -992,6 +992,7 @@ static const struct {
> > > } vaapi_drm_format_map[] = {
> > > #ifdef DRM_FORMAT_R8
> > > DRM_MAP(NV12, 2, DRM_FORMAT_R8, DRM_FORMAT_RG88),
> > > +DRM_MAP(NV12, 2, DRM_FORMAT_R8, DRM_FORMAT_GR88),
> > > #endif
> > > DRM_MAP(NV12, 1, DRM_FORMAT_NV12),
> > > #if defined(VA_FOURCC_P010) && defined(DRM_FORMAT_R16)
> >
> > This looks very shady. Shouldn't one or the other of these be NV21, with
> > the second plane VU rather than UV?
> >
> > - Mark
>
> I added this because I saw that Vulkan maps RG88 and GR88 to the same format:
> ```
> { DRM_FORMAT_GR88, VK_FORMAT_R8G8_UNORM },
> { DRM_FORMAT_RG88, VK_FORMAT_R8G8_UNORM },
> ```
> I think you are right. One of them should be NV21. I should switch the
> positions of GR88 and RG88 in this map table so that VK_FORMAT_R8G8_UNORM
> is mapped to DRM_FORMAT_RG88 rather than DRM_FORMAT_GR88.

Changing the mapping will cause other issues, e.g. if another hw context
supports nv21. I think the root cause is that nv12 and nv21 have the same
VkFormat in vk_pixfmt_map[]:

{ AV_PIX_FMT_NV12, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM }},
{ AV_PIX_FMT_NV21, { VK_FORMAT_R8_UNORM, VK_FORMAT_R8G8_UNORM }},

So the pixel format should be taken into account when mapping between DRM
formats and Vulkan formats.

Thanks
Haihao
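To illustrate that last point, a rough sketch of a pixel-format-aware plane mapping. The struct, table and lookup function are hypothetical, not the actual hwcontext code, and the specific DRM fourccs listed per format are placeholders for whichever byte order turns out to be correct:

```c
#include <stddef.h>
#include <stdint.h>
#include <drm_fourcc.h>
#include <vulkan/vulkan.h>
#include <libavutil/pixfmt.h>

/* Hypothetical: key the mapping on the FFmpeg pixel format as well as the
 * Vulkan format, so NV12 and NV21 can resolve to different DRM fourccs even
 * though both use VK_FORMAT_R8G8_UNORM for the chroma plane. */
typedef struct PlaneMap {
    enum AVPixelFormat pixfmt;
    int                plane;
    VkFormat           vk_format;
    uint32_t           drm_format;
} PlaneMap;

static const PlaneMap plane_map[] = {
    { AV_PIX_FMT_NV12, 0, VK_FORMAT_R8_UNORM,   DRM_FORMAT_R8   },
    { AV_PIX_FMT_NV12, 1, VK_FORMAT_R8G8_UNORM, DRM_FORMAT_RG88 }, /* placeholder */
    { AV_PIX_FMT_NV21, 0, VK_FORMAT_R8_UNORM,   DRM_FORMAT_R8   },
    { AV_PIX_FMT_NV21, 1, VK_FORMAT_R8G8_UNORM, DRM_FORMAT_GR88 }, /* placeholder */
};

static uint32_t drm_format_for(enum AVPixelFormat pixfmt, int plane, VkFormat vkf)
{
    for (size_t i = 0; i < sizeof(plane_map) / sizeof(plane_map[0]); i++)
        if (plane_map[i].pixfmt == pixfmt && plane_map[i].plane == plane &&
            plane_map[i].vk_format == vkf)
            return plane_map[i].drm_format;
    return DRM_FORMAT_INVALID;
}
```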
Re: [FFmpeg-devel] [PATCH v15 1/2] avformat/imf: Demuxer
On 27/12/21 10:47, p...@sandflow.com wrote:

From: Pierre-Anthony Lemieux

Signed-off-by: Pierre-Anthony Lemieux
---

Notes:
The IMF demuxer accepts as input an IMF CPL. The assets referenced by the
CPL can be contained in multiple deliveries, each defined by an ASSETMAP
file:

ffmpeg -assetmaps ,,... -i

If -assetmaps is not specified, FFMPEG looks for a file called ASSETMAP.xml
in the same directory as the CPL.

EXAMPLE:
ffmpeg -i http://ffmpeg-imf-samples-public.s3-website-us-west-1.amazonaws.com/countdown/CPL_f5095caa-f204-4e1c-8a84-7af48c7ae16b.xml out.mp4

The Interoperable Master Format (IMF) is a file-based media format for the
delivery and storage of professional audio-visual masters. An IMF
Composition consists of an XML playlist (the Composition Playlist) and a
collection of MXF files (the Track Files). The Composition Playlist (CPL)
assembles the Track Files onto a timeline, which consists of multiple
tracks. The location of the Track Files referenced by the Composition
Playlist is stored in one or more XML documents called Asset Maps. More
details at https://www.imfug.com/explainer. The IMF standard was first
introduced in 2013 and is managed by the SMPTE.

CHANGE NOTES:

- improve code style

 MAINTAINERS              | 1 +
 configure                | 3 +-
 doc/demuxers.texi        | 6 +
 libavformat/Makefile     | 1 +
 libavformat/allformats.c | 1 +
 libavformat/imf.h        | 207 +
 libavformat/imf_cpl.c    | 841
 libavformat/imfdec.c     | 899 +++
 8 files changed, 1958 insertions(+), 1 deletion(-)
 create mode 100644 libavformat/imf.h
 create mode 100644 libavformat/imf_cpl.c
 create mode 100644 libavformat/imfdec.c

Both patches lgtm, I'll apply in a few days if no objections.
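For illustration, a hypothetical invocation of the -assetmaps option described above (the file names are made up; the option takes a comma-separated list of asset map paths and the CPL is given as the regular input):

ffmpeg -assetmaps delivery1/ASSETMAP.xml,delivery2/ASSETMAP.xml -i CPL_example.xml out.mp4

Each ASSETMAP.xml locates the Track Files of one delivery, and the demuxer assembles them according to the CPL timeline.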
[FFmpeg-devel] [PATCH] avcodec: add ADPCM IMA Ubisoft decoder
A simple, interleaved variant, but with initial state and extra, uncompressed samples. Found in Ubisoft soundbanks from early-2000's games (Splinter Cell, RS3, etc.) Signed-off-by: Zane van Iperen --- Changelog | 1 + doc/general_contents.texi | 1 + libavcodec/Makefile | 1 + libavcodec/adpcm.c| 69 +++ libavcodec/allcodecs.c| 1 + libavcodec/codec_desc.c | 7 libavcodec/codec_id.h | 1 + 7 files changed, 81 insertions(+) diff --git a/Changelog b/Changelog index edb4152d0f..58be0b9da5 100644 --- a/Changelog +++ b/Changelog @@ -44,6 +44,7 @@ version : - yadif_videotoolbox filter - VideoToolbox ProRes encoder - anlmf audio filter +- ADPCM IMA Ubisoft decoder version 4.4: diff --git a/doc/general_contents.texi b/doc/general_contents.texi index df1692c8df..80506e8ab4 100644 --- a/doc/general_contents.texi +++ b/doc/general_contents.texi @@ -1139,6 +1139,7 @@ following image formats are supported: @item ADPCM IMA High Voltage Software ALP @tab X @tab X @item ADPCM IMA QuickTime@tab X @tab X @item ADPCM IMA Simon & Schuster Interactive @tab X @tab X +@item ADPCM IMA Ubisoft @tab @tab X @item ADPCM IMA Ubisoft APM @tab X @tab X @item ADPCM IMA Loki SDL MJPEG @tab @tab X @item ADPCM IMA WAV @tab X @tab X diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 9577062eec..52839e1994 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -899,6 +899,7 @@ OBJS-$(CONFIG_ADPCM_IMA_RAD_DECODER) += adpcm.o adpcm_data.o OBJS-$(CONFIG_ADPCM_IMA_SSI_DECODER) += adpcm.o adpcm_data.o OBJS-$(CONFIG_ADPCM_IMA_SSI_ENCODER) += adpcmenc.o adpcm_data.o OBJS-$(CONFIG_ADPCM_IMA_SMJPEG_DECODER) += adpcm.o adpcm_data.o +OBJS-$(CONFIG_ADPCM_IMA_UBISOFT_DECODER) += adpcm.o adpcm_data.o OBJS-$(CONFIG_ADPCM_IMA_WAV_DECODER) += adpcm.o adpcm_data.o OBJS-$(CONFIG_ADPCM_IMA_WAV_ENCODER) += adpcmenc.o adpcm_data.o OBJS-$(CONFIG_ADPCM_IMA_WS_DECODER) += adpcm.o adpcm_data.o diff --git a/libavcodec/adpcm.c b/libavcodec/adpcm.c index cfde5f58b9..410fea8e21 100644 --- a/libavcodec/adpcm.c +++ b/libavcodec/adpcm.c @@ -239,6 +239,7 @@ static const int8_t mtf_index_table[16] = { typedef struct ADPCMDecodeContext { ADPCMChannelStatus status[14]; int vqa_version;/**< VQA version. Used for ADPCM_IMA_WS */ +int extra_count;/**< Number of raw PCM samples to send */ int has_status; /**< Status flag. Reset to 0 after a flush. */ } ADPCMDecodeContext; @@ -301,6 +302,13 @@ static av_cold int adpcm_decode_init(AVCodecContext * avctx) if (avctx->bits_per_coded_sample != 4 || avctx->block_align != 17 * avctx->channels) return AVERROR_INVALIDDATA; break; +case AV_CODEC_ID_ADPCM_IMA_UBISOFT: +if (c->extra_count < 0) +return AVERROR_INVALIDDATA; + +if (c->extra_count > 0 && c->extra_count % avctx->channels != 0) +return AVERROR_INVALIDDATA; +break; case AV_CODEC_ID_ADPCM_ZORK: if (avctx->bits_per_coded_sample != 8) return AVERROR_INVALIDDATA; @@ -877,6 +885,10 @@ static int get_nb_samples(AVCodecContext *avctx, GetByteContext *gb, case AV_CODEC_ID_ADPCM_IMA_MTF: nb_samples = buf_size * 2 / ch; break; +/* simple 4-bit adpcm, with extra uncompressed samples */ +case AV_CODEC_ID_ADPCM_IMA_UBISOFT: +nb_samples = (buf_size * 2 + s->extra_count) / ch; +break; } if (nb_samples) return nb_samples; @@ -1460,6 +1472,35 @@ static int adpcm_decode_frame(AVCodecContext *avctx, void *data, *samples++ = adpcm_ima_qt_expand_nibble(&c->status[st], v & 0x0F); } ) /* End of CASE */ +CASE(ADPCM_IMA_UBISOFT, +if (c->extra_count) { +int offset = avctx->extradata[0] == 6 ? 
36 : 28; +nb_samples -= c->extra_count / avctx->channels; + +for (uint8_t *extra = avctx->extradata + offset; c->extra_count--; extra += 2) { +if (avctx->extradata[0] == 3) +*samples++ = AV_RB16(extra); +else +*samples++ = AV_RL16(extra); +} + +/* NB: This is enforced above. */ +if (avctx->channels == 1) { +c->status[0].predictor = samples[-1]; +} else { +c->status[0].predictor = samples[-2]; +c->status[1].predictor = samples[-1]; +} + +c->extra_count = 0; +} + +for (int n = nb_samples >> (1 - st); n > 0; n--) { +int v = bytestream2_get_byteu(&gb); +*samples++ = adpcm_ima_expand_nibble(&c->status[0], v >> 4, 3); +*samples++ = adpcm_ima_expand_nibble(&c->status[st], v & 0x0F, 3); +} +) /* End of CASE */ CASE(ADPCM_IMA_APM,
Re: [FFmpeg-devel] [PATCH v1] lavc/av1dec: use frame split bsf
On Wed, 2021-12-15 at 16:06 +0800, Fei Wang wrote:
> Split packed data in case it contains multiple show frames, as in some
> non-standard bitstreams. This benefits the decoder, which can then decode
> continuously instead of being interrupted by an unexpected error.
>
> Signed-off-by: Fei Wang
> ---
> This is an improvement fix for my previous patch:
> https://patchwork.ffmpeg.org/project/ffmpeg/patch/20211203080920.1948453-1-fei.w.w...@intel.com/
>
> libavcodec/av1dec.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/libavcodec/av1dec.c b/libavcodec/av1dec.c
> index db110c50c7..09df2bf421 100644
> --- a/libavcodec/av1dec.c
> +++ b/libavcodec/av1dec.c
> @@ -1240,6 +1240,7 @@ const AVCodec ff_av1_decoder = {
> .flush = av1_decode_flush,
> .profiles = NULL_IF_CONFIG_SMALL(ff_av1_profiles),
> .priv_class = &av1_class,
> +.bsfs = "av1_frame_split",
> .hw_configs = (const AVCodecHWConfigInternal *const []) {
> #if CONFIG_AV1_DXVA2_HWACCEL
> HWACCEL_DXVA2(av1),

LGTM

-Haihao
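With the patch applied, the decoder inserts the bitstream filter itself. For context, a rough sketch of what applying av1_frame_split by hand through the public BSF API looks like; error handling is trimmed, the per-packet BSF setup is only for brevity, and the surrounding demux/decode loop is assumed to exist:

```c
#include <libavcodec/avcodec.h>
#include <libavcodec/bsf.h>
#include <libavformat/avformat.h>

/* Split an input packet (possibly a temporal unit holding several show
 * frames) into one packet per frame before handing it to the decoder. */
static int split_and_decode(AVCodecContext *dec, AVStream *st, AVPacket *in)
{
    const AVBitStreamFilter *f = av_bsf_get_by_name("av1_frame_split");
    AVBSFContext *bsf = NULL;
    AVPacket *out = av_packet_alloc();
    int ret;

    av_bsf_alloc(f, &bsf);
    avcodec_parameters_copy(bsf->par_in, st->codecpar);
    av_bsf_init(bsf);

    av_bsf_send_packet(bsf, in);
    while ((ret = av_bsf_receive_packet(bsf, out)) == 0) {
        avcodec_send_packet(dec, out);   /* then drain frames as usual */
        av_packet_unref(out);
    }

    av_packet_free(&out);
    av_bsf_free(&bsf);
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}
```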