Re: [FFmpeg-devel] [PATCH v5 00/20] clean-up QSV filters
On Mon, 2021-08-30 at 05:52 +, Soft Works wrote: > > -Original Message- > > From: ffmpeg-devel On Behalf Of > > Xiang, Haihao > > Sent: Monday, 30 August 2021 06:20 > > To: ffmpeg-devel@ffmpeg.org > > Subject: Re: [FFmpeg-devel] [PATCH v5 00/20] clean-up QSV filters > > > > On Thu, 2021-08-05 at 16:21 +, Soft Works wrote: > > > > -Original Message- > > > > From: ffmpeg-devel On Behalf Of > > > > Haihao Xiang > > > > Sent: Thursday, 5 August 2021 10:19 > > > > To: ffmpeg-devel@ffmpeg.org > > > > Cc: Haihao Xiang > > > > Subject: [FFmpeg-devel] [PATCH v5 00/20] clean-up QSV filters > > > > > > > > This patchset clean up scale_qsv and deinterlace_qsv filters, and > > > > take the > > > > two filters as the special cases of vpp_qsv, so vf_scale_qsv.c > > > > and > > > > vf_deinterlace_qsv.c can be deleted from FFmpeg. In addition, a > > > > few small > > > > features are added in this patchset. > > > > --- > > > > v5: > > > > * Rebased this patchset against the latest master branch and > > > > fixed conflicts > > > > > > > > Haihao Xiang (20): > > > > lavfi/qsv: use QSVVPPContext as base context in > > > > vf_vpp_qsv/vf_overlay_qsv > > > > lavfi/scale_qsv: simplify scale_qsv filter > > > > lavfi/scale_qsv: don't need variables for constants in FFmpeg > > > > lavfi/vpp_qsv: add "a", "dar" and "sar" variables > > > > lavfi/vpp_qsv: handle NULL pointer when evaluating an > > > > expression > > > > lavfi/vpp_qsv: allow special values for the output dimensions > > > > lavfi/vpp_qsv: factorize extra MFX configuration > > > > lavfi/vpp_qsv: allow user to set scale_mode with constant > > > > lavfi/vpp_qsv: add vpp_preinit callback > > > > lavfi/scale_qsv: re-use VPPContext for scale_qsv filter > > > > lavfi/vpp_qsv: factor common QSV filter definition > > > > lavfi/scale_qsv: add new options for scale_qsv filter > > > > lavfi/scale_qsv: add more input / output pixel formats > > > > lavfi/qsvvpp: avoid overriding the returned value > > > > lavfi/qsvvpp: set PTS for output frame > > > > lavfi/vpp_qsv: check output format string against NULL pointer > > > > lavfi/deinterlace_qsv: simplify deinterlace_qsv filter > > > > lavfi/deinterlace_qsv: re-use VPPContext for deinterlace_qsv > > > > filter > > > > lavfi/deinterlace_qsv: add async_depth option > > > > lavfi/deinterlace_qsv: add more input / output pixel formats > > > > > > > > libavfilter/Makefile | 4 +- > > > > libavfilter/qsvvpp.c | 57 ++- > > > > libavfilter/qsvvpp.h | 11 +- > > > > libavfilter/vf_deinterlace_qsv.c | 611 - > > > > -- > > > > libavfilter/vf_overlay_qsv.c | 11 +- > > > > libavfilter/vf_scale_qsv.c | 685 - > > > > -- > > > > libavfilter/vf_vpp_qsv.c | 473 + > > > > 7 files changed, 347 insertions(+), 1505 deletions(-) delete > > > > mode 100644 > > > > libavfilter/vf_deinterlace_qsv.c delete mode 100644 > > > > libavfilter/vf_scale_qsv.c > > > > > > > > -- > > > > 2.17.1 > > > > > > Hi Hihao, > > > > > > The general idea of this patch makes sense to me. > > > > > > Currently there are implementation differences between these > > > > filters, > > > and there are cases where vpp_qsv doesn't work and I need to use > > > scale_qsv instead. > > > > > > I have never analyzed the actual reason, but this should be done > > > before replacing scale_qsv with an aliased vpp_qsv. > > > > > > I'll try to dig out an example.. > > > > > > Hi Softworkz, > > > > Could you provide the cases when you have time ? I may look into the > > issues. 
> > Thanks
> > Haihao
>
> Hi Haihao,
>
> IIRC I think it can be reproduced easily by using the filters without
> parameters, where scale_qsv works and vpp_qsv doesn't work.

scale_qsv only supports video memory for input and output, whereas vpp_qsv
may support both video memory and system memory, so there is also a case
where scale_qsv doesn't work but vpp_qsv does:

$> ffmpeg -c:v h264_qsv -i input.mp4 -vf scale_qsv -f null -

Impossible to convert between the formats supported by the filter
'graph 0 input from stream 0:0' and the filter 'auto_scale_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!

After applying this patchset, scale_qsv can work with both video memory and
system memory. In addition, vpp_qsv supports crop while scale_qsv doesn't;
after applying this patchset, scale_qsv can support crop too.

> If it doesn't reproduce, try this in combination with hwupload, e.g.
>
> - "hwupload@f1=extra_hw_frames=32,vpp_qsv"
> vs.
> - "hwupload@f1=extra_hw_frames=32,scale_qsv"

The above two combinations work for me.

Thanks
Haihao
[FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal Sub2Video Code
v2 Update:

- Implemented Andreas' suggestions
- overlay_subs filter:
  - removed duplicated code
  - implemented direct (no pre-conversion) blending of graphical
    subtitle rects
  - Supported input formats:
    - all packed RGB formats (with and without alpha)
    - yuv420p, yuv422p, yuv444p

This patchset is about introducing filtering support for subtitles.

The current sub2video "hack" implementation is slow and inefficient: it
creates a full-size video frame from the source subtitles, and that
full-size frame needs to be overlaid/blended onto every single video frame.
That is a fairly expensive operation, and it has to be performed for every
single video frame, even when there is nothing to overlay at all (= no subs
to show in the current scene). And when there are subtitles to show, they
are usually limited to smaller areas of the screen; again, there is no need
to overlay a full-size image onto each frame. From a performance
perspective, it would be much more efficient to overlay only the individual
rects onto each frame instead of doing a full-frame blend.

From a general perspective - even though it has always been considered a
'hack' - the sub2video implementation has become a kind of standard
behavior, despite its shortcomings. Users have become accustomed to the
respective command lines and expect them to keep working. I'd expect any
deviation from the current behavior to be filed as a regression bug.

From there, I decided to go for a small-step improvement of the situation
while staying inside these corner points, and the plan was simple:

- keep the current sub2video behavior (specifically the heartbeat) working
  to avoid regressions
- but stop creating full video frames (with alpha) for feeding into a
  filtergraph (and later overlay)
- instead, use refcounting for AVSubtitle and attach it to frames of (new)
  type AVMEDIA_TYPE_SUBTITLE (a short sketch of this follows below)
- those subtitle frames can travel through a filtergraph without needing to
  materialize them into images first

For more efficient overlay (processing only the relevant rects), I have
created a new filter:

overlay_subs:
- Input0:  video
- Input1:  subtitles (format: SUBTITLE_BITMAP)
- Output0: video

In order to keep compatibility with existing command lines, I have added
another filter:

sub2video (reminiscent naming)

sub2video creates video frames (with alpha) from an existing subtitle
stream:
- Input0:  subtitles (format: SUBTITLE_BITMAP)
- Output0: video

This filter gets auto-inserted to retain compatibility with current
sub2video command lines.

As AVSubtitle can carry both textual and graphical subtitles, the ability
to send textual subtitles through a filtergraph came more or less for free.
For testing purposes, I have added another filter:

sleet (translates subtitles to 'L337 speak')
- Input0:  subtitles (format: SUBTITLE_ASS or SUBTITLE_TEXT)
- Output0: subtitles (format: same as input)

Why leet? Real-world use is surely questionable, but it has two advantages
that make it a great choice for testing: you can see from almost every
single line whether it has been filtered, and the text is still readable,
which allows verifying that the timings are still valid.
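For illustration, attaching a decoded AVSubtitle to a frame then looks
roughly like this (simplified sketch, error handling and time base
rescaling omitted, 'decoded_sub' is a hypothetical local; the actual code
is in patch 8/8):

    /* wrap a decoded AVSubtitle into a refcounted AVFrame of type
     * AVMEDIA_TYPE_SUBTITLE, so it can travel through the filtergraph
     * without being rendered to a full-size image first */
    AVSubtitle *sub   = av_memdup(&decoded_sub, sizeof(decoded_sub));
    AVFrame    *frame = av_frame_alloc();

    frame->type    = AVMEDIA_TYPE_SUBTITLE;
    frame->format  = SUBTITLE_BITMAP;
    frame->buf[0]  = av_buffer_create((uint8_t *)sub, sizeof(*sub),
                                      avsubtitle_free_ref, NULL,
                                      AV_BUFFER_FLAG_READONLY);
    frame->data[0] = (uint8_t *)sub;
    frame->pts     = sub->pts;  /* rescaled to the link time base in real code */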
Working Command Lines

Using "overlay_subs" (better performance):

-y -loglevel verbose -ss 00:02:30 -i INPUT -filter_complex
"[0:0][0:3]overlay_subs=x=0:y=-220" output.mp4

Using regular overlay and the new sub2video filter:

-y -loglevel verbose -ss 00:02:30 -filter_threads 1 -i INPUT -filter_complex
"[0:0]format=bgra[main];[0:3]sub2video[subs];[main][subs]overlay=x=0:y=-220:format=auto,format=yuv420p"
output.mp4

Legacy command line compatibility (using regular overlay; the sub2video
filter is inserted automatically):

-y -loglevel verbose -ss 00:04:30 -filter_threads 1 -i INPUT -filter_complex
"[0:0][0:3]overlay=x=0:y=-220:format=auto,format=yuv420p" output.mp4

To make this work, I roughly did the following:

- Added a new property 'type' to AVFrame of type enum AVMediaType
- Changed code that uses width/height for checking whether a frame is video
  or audio to check the 'type' property instead
- Added ff_ssrc_sbuffer and ff_ssink_sbuffer filters
- Changed the filtergraph code to support frames with AVMediaType ==
  AVMEDIA_TYPE_SUBTITLE
- Transformed the existing sub2video code accordingly, while still keeping
  the heartbeat mechanism in place

Next Steps

- Create a modified version of the ASS filter that works based on (text)
  subtitle filter input
- Create a modified EIA-608/708 filter (e.g. split_eia608) which has one
  video input and one video output, plus a second output of type 'subtitle'
  (format: text or ass)

This will allow burning in EIA subs or saving them in any other subtitle
format without needing the weird "movie:xxx" input filter construct.

Regards,
softworkz

---

softworkz (8):
  lavu/frame: avframe add type property
  avfilter/subtitles: Add subtitles.c
  avfilter/avfilter: Handle subtitle frames
  avfilter/overlay_subs: Add
[FFmpeg-devel] [PATCH v2 1/8] lavu/frame: avframe add type property
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p libavutil/frame.c | 74 - libavutil/frame.h | 39 ++-- libavutil/version.h | 2 +- 3 files changed, 97 insertions(+), 18 deletions(-) diff --git a/libavutil/frame.c b/libavutil/frame.c index b0ceaf7145..7d95849cef 100644 --- a/libavutil/frame.c +++ b/libavutil/frame.c @@ -244,22 +244,39 @@ static int get_audio_buffer(AVFrame *frame, int align) } int av_frame_get_buffer(AVFrame *frame, int align) +{ +if (frame->width > 0 && frame->height > 0) +return av_frame_get_buffer2(frame, AVMEDIA_TYPE_VIDEO, align); +else if (frame->nb_samples > 0 && (frame->channel_layout || frame->channels > 0)) +return av_frame_get_buffer2(frame, AVMEDIA_TYPE_AUDIO, align); + +return AVERROR(EINVAL); +} + +int av_frame_get_buffer2(AVFrame *frame, enum AVMediaType type, int align) { if (frame->format < 0) return AVERROR(EINVAL); -if (frame->width > 0 && frame->height > 0) +frame->type = type; + +switch(frame->type) { +case AVMEDIA_TYPE_VIDEO: return get_video_buffer(frame, align); -else if (frame->nb_samples > 0 && (frame->channel_layout || frame->channels > 0)) +case AVMEDIA_TYPE_AUDIO: return get_audio_buffer(frame, align); - -return AVERROR(EINVAL); +case AVMEDIA_TYPE_SUBTITLE: +return 0; +default: +return AVERROR(EINVAL); +} } static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) { int ret, i; +dst->type = src->type; dst->key_frame = src->key_frame; dst->pict_type = src->pict_type; dst->sample_aspect_ratio= src->sample_aspect_ratio; @@ -331,6 +348,7 @@ int av_frame_ref(AVFrame *dst, const AVFrame *src) av_assert1(dst->width == 0 && dst->height == 0); av_assert1(dst->channels == 0); +dst->type = src->type; dst->format = src->format; dst->width = src->width; dst->height = src->height; @@ -499,6 +517,7 @@ int av_frame_make_writable(AVFrame *frame) return 0; memset(&tmp, 0, sizeof(tmp)); +tmp.type = frame->type; tmp.format = frame->format; tmp.width = frame->width; tmp.height = frame->height; @@ -544,14 +563,22 @@ AVBufferRef *av_frame_get_plane_buffer(AVFrame *frame, int plane) uint8_t *data; int planes, i; -if (frame->nb_samples) { -int channels = frame->channels; -if (!channels) -return NULL; -CHECK_CHANNELS_CONSISTENCY(frame); -planes = av_sample_fmt_is_planar(frame->format) ? channels : 1; -} else +switch(frame->type) { +case AVMEDIA_TYPE_VIDEO: planes = 4; +break; +case AVMEDIA_TYPE_AUDIO: +{ +int channels = frame->channels; +if (!channels) +return NULL; +CHECK_CHANNELS_CONSISTENCY(frame); +planes = av_sample_fmt_is_planar(frame->format) ? 
channels : 1; +break; +} +default: +return NULL; +} if (plane < 0 || plane >= planes || !frame->extended_data[plane]) return NULL; @@ -675,17 +702,34 @@ static int frame_copy_audio(AVFrame *dst, const AVFrame *src) return 0; } +static int frame_copy_subtitles(AVFrame *dst, const AVFrame *src) +{ +dst->type = AVMEDIA_TYPE_SUBTITLE; +dst->format = src->format; + +if (src->buf[0]) { +dst->buf[0] = av_buffer_ref(src->buf[0]); +dst->data[0] = src->data[0]; +} + +return 0; +} + int av_frame_copy(AVFrame *dst, const AVFrame *src) { if (dst->format != src->format || dst->format < 0) return AVERROR(EINVAL); -if (dst->width > 0 && dst->height > 0) +switch(dst->type) { +case AVMEDIA_TYPE_VIDEO: return frame_copy_video(dst, src); -else if (dst->nb_samples > 0 && dst->channels > 0) +case AVMEDIA_TYPE_AUDIO: return frame_copy_audio(dst, src); - -return AVERROR(EINVAL); +case AVMEDIA_TYPE_SUBTITLE: +return frame_copy_subtitles(dst, src); +default: +return AVERROR(EINVAL); +} } void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type) diff --git a/libavutil/frame.h b/libavutil/frame.h index ff2540a20f..c104815df9 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -271,7 +271,7 @@ typedef struct AVRegionOfInterest { } AVRegionOfInterest; /** - * This structure describes decoded (raw) audio or video data. + * This structure describes decoded (raw) audio, video or subtitle data.
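For illustration, a caller-side use of the new entry point looks roughly
like this (sketch only, not part of the patch; for AVMEDIA_TYPE_SUBTITLE no
buffers are allocated, the refcounted AVSubtitle payload is attached by the
caller afterwards):

    AVFrame *frame = av_frame_alloc();
    int ret;

    if (!frame)
        return AVERROR(ENOMEM);

    frame->format = SUBTITLE_BITMAP;   /* enum AVSubtitleType */
    ret = av_frame_get_buffer2(frame, AVMEDIA_TYPE_SUBTITLE, 0);
    if (ret < 0) {
        av_frame_free(&frame);
        return ret;
    }
    /* frame->type is now AVMEDIA_TYPE_SUBTITLE; frame->buf[0]/data[0]
     * are filled with the AVSubtitle reference later on */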
[FFmpeg-devel] [PATCH v2 2/8] avfilter/subtitles: Add subtitles.c
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p libavfilter/Makefile | 1 + libavfilter/f_interleave.c | 3 ++ libavfilter/internal.h | 1 + libavfilter/subtitles.c| 61 ++ libavfilter/subtitles.h| 44 +++ 5 files changed, 110 insertions(+) create mode 100644 libavfilter/subtitles.c create mode 100644 libavfilter/subtitles.h diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 102ce7beff..82a7394adb 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -19,6 +19,7 @@ OBJS = allfilters.o \ framequeue.o \ graphdump.o \ graphparser.o\ + subtitles.o \ video.o \ OBJS-$(HAVE_THREADS) += pthread.o diff --git a/libavfilter/f_interleave.c b/libavfilter/f_interleave.c index 2845d72b79..aefa9a11cb 100644 --- a/libavfilter/f_interleave.c +++ b/libavfilter/f_interleave.c @@ -32,6 +32,7 @@ #include "filters.h" #include "internal.h" #include "audio.h" +#include "subtitles.h" #include "video.h" typedef struct InterleaveContext { @@ -170,6 +171,8 @@ static av_cold int init(AVFilterContext *ctx) inpad.get_buffer.video = ff_null_get_video_buffer; break; case AVMEDIA_TYPE_AUDIO: inpad.get_buffer.audio = ff_null_get_audio_buffer; break; +case AVMEDIA_TYPE_SUBTITLE: +inpad.get_buffer.subtitle = ff_null_get_subtitles_buffer; break; default: av_assert0(0); } diff --git a/libavfilter/internal.h b/libavfilter/internal.h index a0aa32af4d..85519a1076 100644 --- a/libavfilter/internal.h +++ b/libavfilter/internal.h @@ -85,6 +85,7 @@ struct AVFilterPad { union { AVFrame *(*video)(AVFilterLink *link, int w, int h); AVFrame *(*audio)(AVFilterLink *link, int nb_samples); +AVFrame *(*subtitle)(AVFilterLink *link, int format); } get_buffer; /** diff --git a/libavfilter/subtitles.c b/libavfilter/subtitles.c new file mode 100644 index 00..90ec479e51 --- /dev/null +++ b/libavfilter/subtitles.c @@ -0,0 +1,61 @@ +/* + * Copyright (c) 2021 softworkz + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/common.h" + +#include "subtitles.h" +#include "avfilter.h" +#include "internal.h" + + +AVFrame *ff_null_get_subtitles_buffer(AVFilterLink *link, int format) +{ +return ff_get_subtitles_buffer(link->dst->outputs[0], format); +} + +AVFrame *ff_default_get_subtitles_buffer(AVFilterLink *link, int format) +{ +AVFrame *frame = NULL; + +// TODO: +//frame = ff_frame_pool_get(link->frame_pool); + +frame = av_frame_alloc(); +if (!frame) +return NULL; + +frame->format = format; +frame->type = AVMEDIA_TYPE_SUBTITLE; + +return frame; +} + +AVFrame *ff_get_subtitles_buffer(AVFilterLink *link, int format) +{ +AVFrame *ret = NULL; + +if (link->dstpad->get_buffer.subtitle) +ret = link->dstpad->get_buffer.subtitle(link, format); + +if (!ret) +ret = ff_default_get_subtitles_buffer(link, format); + +return ret; +} diff --git a/libavfilter/subtitles.h b/libavfilter/subtitles.h new file mode 100644 index 00..d3d5491652 --- /dev/null +++ b/libavfilter/subtitles.h @@ -0,0 +1,44 @@ +/* + * Copyright (c) 2021 softworkz + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope t
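For illustration, a subtitle filter would request an output frame through
these helpers roughly like this (sketch only, not part of the patch;
"outlink" follows the usual libavfilter naming):

    /* ask the downstream pad for a subtitle frame in the negotiated
     * format; falls back to ff_default_get_subtitles_buffer() internally */
    AVFrame *out = ff_get_subtitles_buffer(outlink, outlink->format);
    if (!out)
        return AVERROR(ENOMEM);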
[FFmpeg-devel] [PATCH v2 3/8] avfilter/avfilter: Handle subtitle frames
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p libavfilter/avfilter.c | 8 +--- libavfilter/avfiltergraph.c | 5 + libavfilter/formats.c | 11 +++ 3 files changed, 21 insertions(+), 3 deletions(-) diff --git a/libavfilter/avfilter.c b/libavfilter/avfilter.c index ea22b247de..5505c34678 100644 --- a/libavfilter/avfilter.c +++ b/libavfilter/avfilter.c @@ -56,7 +56,8 @@ void ff_tlog_ref(void *ctx, AVFrame *ref, int end) ref->linesize[0], ref->linesize[1], ref->linesize[2], ref->linesize[3], ref->pts, ref->pkt_pos); -if (ref->width) { +switch(ref->type) { +case AVMEDIA_TYPE_VIDEO: ff_tlog(ctx, " a:%d/%d s:%dx%d i:%c iskey:%d type:%c", ref->sample_aspect_ratio.num, ref->sample_aspect_ratio.den, ref->width, ref->height, @@ -64,12 +65,13 @@ void ff_tlog_ref(void *ctx, AVFrame *ref, int end) ref->top_field_first ? 'T' : 'B',/* Top / Bottom */ ref->key_frame, av_get_picture_type_char(ref->pict_type)); -} -if (ref->nb_samples) { +break; +case AVMEDIA_TYPE_AUDIO: ff_tlog(ctx, " cl:%"PRId64"d n:%d r:%d", ref->channel_layout, ref->nb_samples, ref->sample_rate); +break; } ff_tlog(ctx, "]%s", end ? "\n" : ""); diff --git a/libavfilter/avfiltergraph.c b/libavfilter/avfiltergraph.c index 41a91a9bda..4f581bc7a6 100644 --- a/libavfilter/avfiltergraph.c +++ b/libavfilter/avfiltergraph.c @@ -328,6 +328,8 @@ static int filter_link_check_formats(void *log, AVFilterLink *link, AVFilterForm return ret; break; +case AVMEDIA_TYPE_SUBTITLE: +return 0; default: av_assert0(!"reached"); } @@ -463,6 +465,9 @@ static int query_formats(AVFilterGraph *graph, AVClass *log_ctx) if (!link) continue; +if (link->type == AVMEDIA_TYPE_SUBTITLE) +continue; + neg = ff_filter_get_negotiation(link); av_assert0(neg); for (neg_step = 1; neg_step < neg->nb; neg_step++) { diff --git a/libavfilter/formats.c b/libavfilter/formats.c index 9e39d65a3c..26d8e45263 100644 --- a/libavfilter/formats.c +++ b/libavfilter/formats.c @@ -19,6 +19,7 @@ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA */ +#include "libavcodec/avcodec.h" #include "libavutil/avassert.h" #include "libavutil/channel_layout.h" #include "libavutil/common.h" @@ -430,6 +431,12 @@ int ff_add_channel_layout(AVFilterChannelLayouts **l, uint64_t channel_layout) return 0; } +static int ff_add_subtitle_type(AVFilterFormats **avff, int64_t fmt) +{ +ADD_FORMAT(avff, fmt, ff_formats_unref, int, formats, nb_formats); +return 0; +} + AVFilterFormats *ff_all_formats(enum AVMediaType type) { AVFilterFormats *ret = NULL; @@ -447,6 +454,10 @@ AVFilterFormats *ff_all_formats(enum AVMediaType type) return NULL; fmt++; } +} else if (type == AVMEDIA_TYPE_SUBTITLE) { +ff_add_subtitle_type(&ret, SUBTITLE_BITMAP); +ff_add_subtitle_type(&ret, SUBTITLE_ASS); +ff_add_subtitle_type(&ret, SUBTITLE_TEXT); } return ret; -- 2.30.2.windows.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
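For illustration, with the ff_all_formats() hunk above, a subtitle filter
that accepts any of the three subtitle formats could declare its formats
roughly like this (sketch only, not part of the patch):

    static int query_formats(AVFilterContext *ctx)
    {
        /* SUBTITLE_BITMAP, SUBTITLE_ASS and SUBTITLE_TEXT, as registered
         * in ff_all_formats() for AVMEDIA_TYPE_SUBTITLE */
        AVFilterFormats *formats = ff_all_formats(AVMEDIA_TYPE_SUBTITLE);
        if (!formats)
            return AVERROR(ENOMEM);
        return ff_set_common_formats(ctx, formats);
    }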
[FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p libavfilter/Makefile | 1 + libavfilter/allfilters.c | 1 + libavfilter/vf_overlay_subs.c | 549 ++ libavfilter/vf_overlay_subs.h | 65 4 files changed, 616 insertions(+) create mode 100644 libavfilter/vf_overlay_subs.c create mode 100644 libavfilter/vf_overlay_subs.h diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 82a7394adb..8130f7b46c 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -357,6 +357,7 @@ OBJS-$(CONFIG_OVERLAY_CUDA_FILTER) += vf_overlay_cuda.o framesync.o vf OBJS-$(CONFIG_OVERLAY_OPENCL_FILTER) += vf_overlay_opencl.o opencl.o \ opencl/overlay.o framesync.o OBJS-$(CONFIG_OVERLAY_QSV_FILTER)+= vf_overlay_qsv.o framesync.o +OBJS-$(CONFIG_OVERLAY_SUBS_FILTER) += vf_overlay_subs.o framesync.o OBJS-$(CONFIG_OVERLAY_VULKAN_FILTER) += vf_overlay_vulkan.o vulkan.o OBJS-$(CONFIG_OWDENOISE_FILTER) += vf_owdenoise.o OBJS-$(CONFIG_PAD_FILTER)+= vf_pad.o diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 73040d2824..abd0a47750 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -339,6 +339,7 @@ extern const AVFilter ff_vf_oscilloscope; extern const AVFilter ff_vf_overlay; extern const AVFilter ff_vf_overlay_opencl; extern const AVFilter ff_vf_overlay_qsv; +extern const AVFilter ff_vf_overlay_subs; extern const AVFilter ff_vf_overlay_vulkan; extern const AVFilter ff_vf_overlay_cuda; extern const AVFilter ff_vf_owdenoise; diff --git a/libavfilter/vf_overlay_subs.c b/libavfilter/vf_overlay_subs.c new file mode 100644 index 00..177f9b1cc9 --- /dev/null +++ b/libavfilter/vf_overlay_subs.c @@ -0,0 +1,549 @@ +/* + * Copyright (c) 2021 softworkz + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/** + * @file + * overlay graphical subtitles on top of a video frame + */ + +#include "avfilter.h" +#include "formats.h" +#include "libavutil/common.h" +#include "libavutil/eval.h" +#include "libavutil/avstring.h" +#include "libavutil/pixdesc.h" +#include "libavutil/imgutils.h" +#include "libavutil/opt.h" +#include "internal.h" +#include "drawutils.h" +#include "framesync.h" +#include "vf_overlay_subs.h" + +#include "libavcodec/avcodec.h" + +static const char *const var_names[] = { +"main_w","W", ///< width of the mainvideo +"main_h","H", ///< height of the mainvideo +"overlay_w", "w", ///< width of the overlay video +"overlay_h", "h", ///< height of the overlay video +"hsub", +"vsub", +"x", +"y", +"n",///< number of frame +"pos", ///< position in the file +"t",///< timestamp expressed in seconds +NULL +}; + +#define MAIN0 +#define OVERLAY 1 + +#define R 0 +#define G 1 +#define B 2 +#define A 3 + +#define Y 0 +#define U 1 +#define V 2 + +enum EvalMode { +EVAL_MODE_INIT, +EVAL_MODE_FRAME, +EVAL_MODE_NB +}; + +static av_cold void uninit(AVFilterContext *ctx) +{ +OverlaySubsContext *s = ctx->priv; + +ff_framesync_uninit(&s->fs); +av_expr_free(s->x_pexpr); s->x_pexpr = NULL; +av_expr_free(s->y_pexpr); s->y_pexpr = NULL; +} + +static inline int normalize_xy(double d, int chroma_sub) +{ +if (isnan(d)) +return INT_MAX; +return (int)d & ~((1 << chroma_sub) - 1); +} + +static void eval_expr(AVFilterContext *ctx) +{ +OverlaySubsContext *s = ctx->priv; + +s->var_values[VAR_X] = av_expr_eval(s->x_pexpr, s->var_values, NULL); +s->var_values[VAR_Y] = av_expr_eval(s->y_pexpr, s->var_values, NULL); +/* It is necessary if x is expressed from y */ +s->var_values[VAR_X] = av_expr_eval(s->x_pexpr, s->var_values, NULL); +s->x = normalize_xy(s->var_values[VAR_X], s->hsub); +s->y = normalize_xy(s->var_values[VAR_Y], s->vsub); +} + +static int set_expr(AV
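For illustration, the "direct blending of graphical subtitle rects"
mentioned in the changelog boils down to something like the following
simplified PAL8-over-RGBA blend (sketch only, not the actual
vf_overlay_subs.c code, which also handles the planar YUV formats and
chroma subsampling; assumes RGBA byte order and the usual AARRGGBB uint32
palette layout):

    static void blend_rect_rgba(AVFrame *main, const AVSubtitleRect *r,
                                int x0, int y0)
    {
        const uint32_t *pal = (const uint32_t *)r->data[1];

        for (int y = 0; y < r->h && y0 + y < main->height; y++) {
            const uint8_t *src = r->data[0] + y * r->linesize[0];
            uint8_t *dst = main->data[0] + (y0 + y) * main->linesize[0] + x0 * 4;

            for (int x = 0; x < r->w && x0 + x < main->width; x++) {
                uint32_t c = pal[src[x]];
                int a = c >> 24;
                dst[0] = (dst[0] * (255 - a) + ((c >> 16) & 0xff) * a) / 255; /* R */
                dst[1] = (dst[1] * (255 - a) + ((c >>  8) & 0xff) * a) / 255; /* G */
                dst[2] = (dst[2] * (255 - a) + ( c        & 0xff) * a) / 255; /* B */
                dst += 4;
            }
        }
    }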
[FFmpeg-devel] [PATCH v2 5/8] avfilter/sub2video: Add sub2video filter
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p libavfilter/Makefile| 1 + libavfilter/allfilters.c| 1 + libavfilter/svf_sub2video.c | 260 3 files changed, 262 insertions(+) create mode 100644 libavfilter/svf_sub2video.c diff --git a/libavfilter/Makefile b/libavfilter/Makefile index 8130f7b46c..e38c6b6f6d 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -542,6 +542,7 @@ OBJS-$(CONFIG_SHOWSPECTRUMPIC_FILTER)+= avf_showspectrum.o OBJS-$(CONFIG_SHOWVOLUME_FILTER) += avf_showvolume.o OBJS-$(CONFIG_SHOWWAVES_FILTER) += avf_showwaves.o OBJS-$(CONFIG_SHOWWAVESPIC_FILTER) += avf_showwaves.o +OBJS-$(CONFIG_SUB2VIDEO_FILTER) += svf_sub2video.o OBJS-$(CONFIG_SPECTRUMSYNTH_FILTER) += vaf_spectrumsynth.o # multimedia sources diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index abd0a47750..5b631b3617 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -518,6 +518,7 @@ extern const AVFilter ff_avf_showvolume; extern const AVFilter ff_avf_showwaves; extern const AVFilter ff_avf_showwavespic; extern const AVFilter ff_vaf_spectrumsynth; +extern const AVFilter ff_svf_sub2video; /* multimedia sources */ extern const AVFilter ff_avsrc_amovie; diff --git a/libavfilter/svf_sub2video.c b/libavfilter/svf_sub2video.c new file mode 100644 index 00..689c2a565c --- /dev/null +++ b/libavfilter/svf_sub2video.c @@ -0,0 +1,260 @@ +/* + * Copyright (c) 2021 softworkz + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/** + * @file + * graphical subtitles to video conversion, based on previous sub2video + * implementation. 
+ */ + +#include + +#include "libavutil/audio_fifo.h" +#include "libavutil/avassert.h" +#include "libavutil/avstring.h" +#include "libavutil/channel_layout.h" +#include "libavutil/opt.h" +#include "libavutil/parseutils.h" +#include "libavutil/xga_font_data.h" +#include "avfilter.h" +#include "blend.h" +#include "filters.h" +#include "internal.h" +#include "libavcodec/avcodec.h" +typedef struct Sub2VideoContext { +const AVClass *class; +int w, h; +AVFrame *outpicref; +int pixstep; +} Sub2VideoContext; + +#define OFFSET(x) offsetof(Sub2VideoContext, x) +#define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM + +static int alloc_out_frame(Sub2VideoContext *s2v_ctx, const int16_t *p, + const AVFilterLink *inlink, AVFilterLink *outlink, + const AVFrame *in) +{ +if (!s2v_ctx->outpicref) { +int j; +AVFrame *out = s2v_ctx->outpicref = +ff_get_video_buffer(outlink, outlink->w, outlink->h); +if (!out) +return AVERROR(ENOMEM); +out->width = outlink->w; +out->height = outlink->h; +out->pts = in->pts + av_rescale_q((p - (int16_t *)in->data[0]) / inlink->channels, + av_make_q(1, inlink->sample_rate), + outlink->time_base); +for (j = 0; j < outlink->h; j++) +memset(out->data[0] + j*out->linesize[0], 0, outlink->w * s2v_ctx->pixstep); +} +return 0; +} + + +static const AVOption sub2video_options[] = { +{ "size", "set video size", OFFSET(w), AV_OPT_TYPE_IMAGE_SIZE, {.str = "640x512"}, 0, 0, FLAGS }, +{ "s","set video size", OFFSET(w), AV_OPT_TYPE_IMAGE_SIZE, {.str = "640x512"}, 0, 0, FLAGS }, +{ NULL } +}; + +AVFILTER_DEFINE_CLASS(sub2video); + +static int query_formats(AVFilterContext *ctx) +{ +AVFilterFormats *formats = NULL; +AVFilterLink *inlink = ctx->inputs[0]; +AVFilterLink *outlink = ctx->outputs[0]; +static const enum AVSubtitleType subtitle_fmts[] = { SUBTITLE_BITMAP, SUBTITLE_NONE }; +static const enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32, AV_PI
[FFmpeg-devel] [PATCH v2 6/8] avfilter/sbuffer: Add sbuffersrv and sbuffersink filters
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p configure| 2 +- libavfilter/allfilters.c | 10 --- libavfilter/buffersink.c | 65 libavfilter/buffersink.h | 15 ++ libavfilter/buffersrc.c | 65 libavfilter/buffersrc.h | 1 + 6 files changed, 153 insertions(+), 5 deletions(-) diff --git a/configure b/configure index 9249254b70..fdb3bf1714 100755 --- a/configure +++ b/configure @@ -7715,7 +7715,7 @@ print_enabled_components(){ fi done if [ "$name" = "filter_list" ]; then -for c in asrc_abuffer vsrc_buffer asink_abuffer vsink_buffer; do +for c in asrc_abuffer vsrc_buffer ssrc_sbuffer asink_abuffer vsink_buffer ssink_sbuffer; do printf "&ff_%s,\n" $c >> $TMPH done fi diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 5b631b3617..5bd54db2c8 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -528,10 +528,12 @@ extern const AVFilter ff_avsrc_movie; * they are formatted to not be found by the grep * as they are manually added again (due to their 'names' * being the same while having different 'types'). */ -extern const AVFilter ff_asrc_abuffer; -extern const AVFilter ff_vsrc_buffer; -extern const AVFilter ff_asink_abuffer; -extern const AVFilter ff_vsink_buffer; +extern const AVFilter ff_asrc_abuffer; +extern const AVFilter ff_vsrc_buffer; +extern const AVFilter ff_ssrc_sbuffer; +extern const AVFilter ff_asink_abuffer; +extern const AVFilter ff_vsink_buffer; +extern const AVFilter ff_ssink_sbuffer; extern const AVFilter ff_af_afifo; extern const AVFilter ff_vf_fifo; diff --git a/libavfilter/buffersink.c b/libavfilter/buffersink.c index 07c4812f29..20845eb10f 100644 --- a/libavfilter/buffersink.c +++ b/libavfilter/buffersink.c @@ -57,6 +57,10 @@ typedef struct BufferSinkContext { int *sample_rates; ///< list of accepted sample rates, terminated by -1 int sample_rates_size; +/* only used for subtitles */ +enum AVSubtitleType *subtitle_types; ///< list of accepted subtitle types, must be terminated with -1 +int subtitle_types_size; + AVFrame *peeked_frame; } BufferSinkContext; @@ -168,6 +172,15 @@ AVABufferSinkParams *av_abuffersink_params_alloc(void) return NULL; return params; } + +AVSBufferSinkParams *av_sbuffersink_params_alloc(void) +{ +AVSBufferSinkParams *params = av_mallocz(sizeof(AVSBufferSinkParams)); + +if (!params) +return NULL; +return params; +} #endif static av_cold int common_init(AVFilterContext *ctx) @@ -305,6 +318,31 @@ static int asink_query_formats(AVFilterContext *ctx) return 0; } +static int ssink_query_formats(AVFilterContext *ctx) +{ +BufferSinkContext *buf = ctx->priv; +AVFilterFormats *formats = NULL; +unsigned i; +int ret; + +if ((ret = ff_default_query_formats(ctx)) < 0) +return ret; + +CHECK_LIST_SIZE(pixel_fmts) +if (buf->pixel_fmts_size) { +for (i = 0; i < NB_ITEMS(buf->pixel_fmts); i++) +if ((ret = ff_add_format(&formats, buf->pixel_fmts[i])) < 0) +return ret; +if ((ret = ff_set_common_formats(ctx, formats)) < 0) +return ret; +} else { +if ((ret = ff_default_query_formats(ctx)) < 0) +return ret; +} + +return 0; +} + #define OFFSET(x) offsetof(BufferSinkContext, x) #define FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_VIDEO_PARAM static const AVOption buffersink_options[] = { @@ -322,9 +360,16 @@ static const AVOption abuffersink_options[] = { { NULL }, }; #undef FLAGS +#define 
FLAGS AV_OPT_FLAG_FILTERING_PARAM|AV_OPT_FLAG_SUBTITLE_PARAM +static const AVOption sbuffersink_options[] = { +{ "subtitle_types", "set the supported subtitle formats", OFFSET(subtitle_types), AV_OPT_TYPE_BINARY, .flags = FLAGS }, +{ NULL }, +}; +#undef FLAGS AVFILTER_DEFINE_CLASS(buffersink); AVFILTER_DEFINE_CLASS(abuffersink); +AVFILTER_DEFINE_CLASS(sbuffersink); static const AVFilterPad avfilter_vsink_buffer_inputs[] = { { @@ -365,3 +410,23 @@ const AVFilter ff_asink_abuffer = { .inputs= avfilter_asink_abuffer_inputs, .outputs = NULL, }; + +static const AVFilterPad avfilter_ssink_sbuffer_inputs[] = { +{ +.name = "default", +.type = AVMEDIA_TYPE_SUBTITLE, +}, +{ NULL } +}; + +AVFilter ff_ssink_sbuffer = { +.name = "sbuffersink", +.description = NULL_IF_CONFIG_SMALL("Buffer subtitle frames, and make them available to the end of
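For illustration, an API user could restrict the formats accepted by an
sbuffersink instance via the new "subtitle_types" binary option roughly
like this (sketch only, not part of the patch; "sink_ctx" is a hypothetical
AVFilterContext, and the list is terminated with -1 as documented in the
context struct):

    static const enum AVSubtitleType sub_fmts[] = { SUBTITLE_ASS, -1 };

    int ret = av_opt_set_int_list(sink_ctx, "subtitle_types", sub_fmts, -1,
                                  AV_OPT_SEARCH_CHILDREN);
    if (ret < 0)
        return ret;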
[FFmpeg-devel] [PATCH v2 7/8] avfilter/sleet: Add sleet filter
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p libavfilter/Makefile | 3 + libavfilter/allfilters.c | 1 + libavfilter/sf_sleet.c | 209 +++ 3 files changed, 213 insertions(+) create mode 100644 libavfilter/sf_sleet.c diff --git a/libavfilter/Makefile b/libavfilter/Makefile index e38c6b6f6d..25dd1276de 100644 --- a/libavfilter/Makefile +++ b/libavfilter/Makefile @@ -526,6 +526,9 @@ OBJS-$(CONFIG_YUVTESTSRC_FILTER) += vsrc_testsrc.o OBJS-$(CONFIG_NULLSINK_FILTER) += vsink_nullsink.o +# subtitle filters +OBJS-$(CONFIG_SLEET_FILTER) += sf_sleet.o + # multimedia filters OBJS-$(CONFIG_ABITSCOPE_FILTER) += avf_abitscope.o OBJS-$(CONFIG_ADRAWGRAPH_FILTER) += f_drawgraph.o diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c index 5bd54db2c8..efe16b8e1b 100644 --- a/libavfilter/allfilters.c +++ b/libavfilter/allfilters.c @@ -519,6 +519,7 @@ extern const AVFilter ff_avf_showwaves; extern const AVFilter ff_avf_showwavespic; extern const AVFilter ff_vaf_spectrumsynth; extern const AVFilter ff_svf_sub2video; +extern const AVFilter ff_sf_sleet; /* multimedia sources */ extern const AVFilter ff_avsrc_amovie; diff --git a/libavfilter/sf_sleet.c b/libavfilter/sf_sleet.c new file mode 100644 index 00..cf7701c01f --- /dev/null +++ b/libavfilter/sf_sleet.c @@ -0,0 +1,209 @@ +/* + * Copyright (c) 2021 softworkz + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +/** + * @file + * text subtitle filter which translates to 'leet speak' + */ + +#include "libavutil/avassert.h" +#include "libavutil/avstring.h" +#include "libavutil/opt.h" +#include "avfilter.h" +#include "internal.h" +#include "libavcodec/avcodec.h" + +static const char* alphabet_src = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"; +static const char* alphabet_dst = "abcd3f6#1jklmn0pq257uvwxyzAB(D3F6#1JKLMN0PQ257UVWXYZ"; + + +typedef struct LeetContext { +const AVClass *class; +enum AVSubtitleType format; +} LeetContext; + +static const AVOption sleet_options[] = { +{ NULL } +}; + +AVFILTER_DEFINE_CLASS(sleet); + +static int query_formats(AVFilterContext *ctx) +{ +AVFilterFormats *formats = NULL; +AVFilterLink *inlink = ctx->inputs[0]; +AVFilterLink *outlink = ctx->outputs[0]; +static const enum AVSubtitleType subtitle_fmts[] = { SUBTITLE_ASS, SUBTITLE_NONE }; +static const enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_RGB32, AV_PIX_FMT_NONE }; +int ret; + +/* set input subtitle format */ +formats = ff_make_format_list(subtitle_fmts); +if ((ret = ff_formats_ref(formats, &inlink->outcfg.formats)) < 0) +return ret; + +/* set output video format */ +formats = ff_make_format_list(pix_fmts); +if ((ret = ff_formats_ref(formats, &outlink->incfg.formats)) < 0) +return ret; + +return 0; +} + +static int config_input(AVFilterLink *inlink) +{ +AVFilterContext *ctx = inlink->dst; +LeetContext *s = ctx->priv; + +s->format = inlink->format; +return 0; +} + +static int config_output(AVFilterLink *outlink) +{ +LeetContext *s = outlink->src->priv; + +outlink->format = s->format; + +return 0; +} + +static void avsubtitle_free_ref(void *opaque, uint8_t *data) +{ +avsubtitle_free((AVSubtitle *)data); +} + +static int filter_frame(AVFilterLink *inlink, AVFrame *src_frame) +{ +LeetContext *s = inlink->dst->priv; +AVFilterLink *outlink = inlink->dst->outputs[0]; +AVSubtitle *sub; +int ret; +AVFrame *out; +unsigned int num_rects; +uint8_t *dst; + +outlink->format = inlink->format; + +out = av_frame_alloc(); +if (!out) { +av_frame_free(&src_frame); +return AVERROR(ENOMEM); +} + +out->format = outlink->format; + +if ((ret = av_frame_get_buffer2(out, AVMEDIA_TYPE_SUBTITLE, 0)) < 0) +return ret; + +out->pts= src_frame->pts; +
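For illustration, the substitution itself is a per-character lookup in the
two alphabets defined at the top of the file, roughly like this (sketch
only, not the actual filter_frame() code; needs string.h):

    /* translate one text payload in place using alphabet_src/alphabet_dst */
    static void leet_translate(char *text)
    {
        for (char *p = text; *p; p++) {
            const char *hit = strchr(alphabet_src, *p);
            if (hit)
                *p = alphabet_dst[hit - alphabet_src];
        }
    }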
[FFmpeg-devel] [PATCH v2 8/8] fftools/ffmpeg: Replace sub2video with subtitle frame filtering
Signed-off-by: softworkz --- v2 Update: - Implemented Andreas' suggestions - overlay_subs filter: - removed duplicated code - implemented direct (no pre-conversion) blending of graphical subtitle rects - Supported input formats: - all packed RGB formats (with and without alpha) - yuv420p, yuv422p, yuv444p fftools/ffmpeg.c| 324 +++- fftools/ffmpeg.h| 7 +- fftools/ffmpeg_filter.c | 198 +--- fftools/ffmpeg_hw.c | 2 +- fftools/ffmpeg_opt.c| 3 +- 5 files changed, 330 insertions(+), 204 deletions(-) diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c index b0ce7c7c32..aec8422111 100644 --- a/fftools/ffmpeg.c +++ b/fftools/ffmpeg.c @@ -174,114 +174,99 @@ static void free_input_threads(void); This is a temporary solution until libavfilter gets real subtitles support. */ -static int sub2video_get_blank_frame(InputStream *ist) +static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame); + + +static int get_subtitle_format_from_codecdesc(const AVCodecDescriptor *codec_descriptor) { -int ret; -AVFrame *frame = ist->sub2video.frame; +int format; -av_frame_unref(frame); -ist->sub2video.frame->width = ist->dec_ctx->width ? ist->dec_ctx->width : ist->sub2video.w; -ist->sub2video.frame->height = ist->dec_ctx->height ? ist->dec_ctx->height : ist->sub2video.h; -ist->sub2video.frame->format = AV_PIX_FMT_RGB32; -if ((ret = av_frame_get_buffer(frame, 0)) < 0) -return ret; -memset(frame->data[0], 0, frame->height * frame->linesize[0]); -return 0; +if(codec_descriptor->props & AV_CODEC_PROP_BITMAP_SUB) +format = SUBTITLE_BITMAP; +else if(codec_descriptor->props & AV_CODEC_PROP_TEXT_SUB) +format = SUBTITLE_ASS; +else +format = SUBTITLE_TEXT; + +return format; } -static void sub2video_copy_rect(uint8_t *dst, int dst_linesize, int w, int h, -AVSubtitleRect *r) +static void avsubtitle_free_ref(void *opaque, uint8_t *data) { -uint32_t *pal, *dst2; -uint8_t *src, *src2; -int x, y; +avsubtitle_free((AVSubtitle *)data); +} -if (r->type != SUBTITLE_BITMAP) { -av_log(NULL, AV_LOG_WARNING, "sub2video: non-bitmap subtitle\n"); +static void sub2video_resend_current(InputStream *ist, int64_t heartbeat_pts) +{ +AVFrame *frame; +AVSubtitle *current_sub; +int8_t *dst; +int num_rects, i, ret; +int64_t pts, end_pts, pts_sub; +int format = get_subtitle_format_from_codecdesc(ist->dec_ctx->codec_descriptor); + +/* If we are initializing the system, utilize current heartbeat + PTS as the start time, and show until the following subpicture + is received. Otherwise, utilize the previous subpicture's end time + as the fall-back value. */ +pts = ist->sub2video.end_pts <= 0 ? 
+heartbeat_pts : ist->sub2video.end_pts; +end_pts = INT64_MAX; + +av_log(ist->dec_ctx, AV_LOG_ERROR, "sub2video_resend_current1: heartbeat_pts: %lld ist->sub2video.end_pts: %lld\n", heartbeat_pts, ist->sub2video.end_pts); + +pts = av_rescale_q(pts * 1000LL, + AV_TIME_BASE_Q, ist->st->time_base); + +frame = av_frame_alloc(); +if (!frame) { +av_log(ist->dec_ctx, AV_LOG_ERROR, "Unable to alloc frame (out of memory).\n"); return; } -if (r->x < 0 || r->x + r->w > w || r->y < 0 || r->y + r->h > h) { -av_log(NULL, AV_LOG_WARNING, "sub2video: rectangle (%d %d %d %d) overflowing %d %d\n", -r->x, r->y, r->w, r->h, w, h -); + +frame->format = get_subtitle_format_from_codecdesc(ist->dec_ctx->codec_descriptor); + +if ((ret = av_frame_get_buffer2(frame, AVMEDIA_TYPE_SUBTITLE, 0)) < 0) { +av_log(ist->dec_ctx, AV_LOG_ERROR, "Error (av_frame_get_buffer): %d.\n", ret); return; } -dst += r->y * dst_linesize + r->x * 4; -src = r->data[0]; -pal = (uint32_t *)r->data[1]; -for (y = 0; y < r->h; y++) { -dst2 = (uint32_t *)dst; -src2 = src; -for (x = 0; x < r->w; x++) -*(dst2++) = pal[*(src2++)]; -dst += dst_linesize; -src += r->linesize[0]; +frame->width = ist->sub2video.w; +frame->height = ist->sub2video.h; + +if (ist->sub2video.current_subtitle) { +frame->buf[0] = av_buffer_ref(ist->sub2video.current_subtitle); +frame->data[0] = ist->sub2video.current_subtitle->data; +} +else { +AVBufferRef *empty_sub_buffer; +AVSubtitle *empty_sub = av_mallocz(sizeof(*empty_sub)); +empty_sub->format = format; +empty_sub->num_rects = 0; +empty_sub->pts = av_rescale_q(pts, ist->st->time_base, AV_TIME_BASE_Q); +empty_sub->end_display_time = 1000; +empty_sub_buffer = av_buffer_create((uint8_t*)empty_sub, sizeof(*empty_sub), avsubtitle_free_ref, NULL, AV_BU
Re: [FFmpeg-devel] [PATCH v1 1/1] avcodec/vble: Return value check for init_get_bits
There are some other checks in the init_get_bits function that make it
return AVERROR_INVALIDDATA, so it is essential to check the return value.
See line 629 in libavcodec/get_bits.h, function init_get_bits_xe:

    if (bit_size >= INT_MAX - FFMAX(7, AV_INPUT_BUFFER_PADDING_SIZE*8) ||
        bit_size < 0 || !buffer) {
        bit_size = 0;
        buffer   = NULL;
        ret      = AVERROR_INVALIDDATA;
    }
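For reference, the usual idiom is simply to propagate the error (sketch
only, not the exact hunk under review; gb/src/buf_size stand in for the
decoder's locals):

    int ret;

    if ((ret = init_get_bits(&gb, src, buf_size * 8)) < 0)
        return ret;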
Re: [FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal Sub2Video Code
Soft Works (12021-08-30):
> v2 Update:
>
> - Implemented Andreas' suggestions
> - overlay_subs filter:
>   - removed duplicated code
>   - implemented direct (no pre-conversion) blending of graphical
>     subtitle rects
>   - Supported input formats:
>     - all packed RGB formats (with and without alpha)
>     - yuv420p, yuv422p, yuv444p
>
> This patchset is about introducing filtering support for subtitles.

Rejected.

As I have explained to you, no subtitle support can be accepted until the
negotiation process has been refactored to make adding a third media type
less of a nightmare.

From your questions and remarks in other discussions, it seems obvious to
me that you are missing something central about the format negotiation.
That makes discussion with you at cross-purposes, a waste of your efforts
and ours both, and that makes your patches unacceptable.

So, if you really want to contribute, I strongly suggest you stop insisting
on what you already wrote, and take the time to read the code in depth to
understand how it works and how it needs to be made better. Then you can
prove to yourself your understanding by submitting FATE tests for the parts
that are not yet covered, as I suggested recently.

-- 
Nicolas George
Re: [FFmpeg-devel] [PATCH 10/10] lavfi/vf_scale: pass the thread count to the scaler
Quoting Michael Niedermayer (2021-08-29 22:22:04) > On Sun, Aug 29, 2021 at 06:48:36PM +0200, Anton Khirnov wrote: > > Quoting Michael Niedermayer (2021-08-09 22:30:06) > > > On Sun, Aug 08, 2021 at 07:29:41PM +0200, Anton Khirnov wrote: > > > > --- > > > > libavfilter/vf_scale.c | 1 + > > > > 1 file changed, 1 insertion(+) > > > > > > > > diff --git a/libavfilter/vf_scale.c b/libavfilter/vf_scale.c > > > > index b62fb37d4b..14e202bf77 100644 > > > > --- a/libavfilter/vf_scale.c > > > > +++ b/libavfilter/vf_scale.c > > > > @@ -542,6 +542,7 @@ static int config_props(AVFilterLink *outlink) > > > > av_opt_set_int(*s, "sws_flags", scale->flags, 0); > > > > av_opt_set_int(*s, "param0", scale->param[0], 0); > > > > av_opt_set_int(*s, "param1", scale->param[1], 0); > > > > +av_opt_set_int(*s, "threads", > > > > ff_filter_get_nb_threads(ctx), 0); > > > > if (scale->in_range != AVCOL_RANGE_UNSPECIFIED) > > > > av_opt_set_int(*s, "src_range", > > > > scale->in_range == AVCOL_RANGE_JPEG, 0); > > > > -- > > > > 2.30.2 > > > > > > breaks: > > > ./ffmpeg -i ~/tickets/1012/IV50_random_points.avi -threads 5 -y > > > file1012.avi > > > > > > it contains horizontal bright green lines > > > > Should be fixed with the updated patches I sent just now. > > > > > That said, I think the special 410->420 scaler should be dropped - its > > output fundamentally depends on how the slices are submitted. Given that > > hmm, iam not sure i understand the issue you describe. > Is this because the 410->420 scaler interpolates the chroma and to do so > it has to handle the image borders differently? It handles _slice_ borders differently, not image borders. So the result depends on how the image is partitioned into slices. I am not familiar with the generic scaler code, but it seems independent of this partitioning, otherwise the threaded scaling tests would fail. -- Anton Khirnov ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal Sub2Video Code
> -----Original Message-----
> From: ffmpeg-devel On Behalf Of Nicolas George
> Sent: Monday, 30 August 2021 10:24
> To: FFmpeg development discussions and patches de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal
> Sub2Video Code
>
> Soft Works (12021-08-30):
> > v2 Update:
> >
> > - Implemented Andreas' suggestions
> > - overlay_subs filter:
> >   - removed duplicated code
> >   - implemented direct (no pre-conversion) blending of graphical
> >     subtitle rects
> >   - Supported input formats:
> >     - all packed RGB formats (with and without alpha)
> >     - yuv420p, yuv422p, yuv444p
> >
> > This patchset is about introducing filtering support for subtitles.
>
> Rejected.
>
> As I have explained to you, no subtitle support can be accepted until
> the negotiation process has been refactored to make adding a third media
> type less of a nightmare.
>
> From your questions and remarks in other discussions, it seems obvious
> to me that you are missing something central about the format
> negotiation. That makes discussion with you at cross-purpose, a waste of
> your efforts and ours both, and that makes your patches unacceptable.

You didn't reply to my questions, neither did you explain your plans.

There are three - or actually just two relevant - subtitle formats for
internal handling:

- AVSubtitleType.SUBTITLE_BITMAP
- AVSubtitleType.SUBTITLE_ASS

With regard to subtitles, there's no need for negotiation: it's either one
or the other, and as these are fundamentally different, there's also no
auto-conversion that needs to be handled. Either a filter (or encoder)
supports it or it doesn't.

Your proposals and plans might make sense for audio and video formats, but
I don't see how this would affect subtitle filtering.

If you want to REJECT my submission, please be SPECIFIC instead of
responding with your usual set of personal insults.

softworkz
Re: [FFmpeg-devel] [PATCH v1] avcodec/av1dec: check if hwaccel is specificed
On Mon, Aug 30, 2021 at 3:56 AM Wang, Fei W wrote:
>
> If so, the only way to fix it is to keep the previous check with
> avctx->hwaccel but change its log message, like "The AV1 decoder requires
> a hw acceleration to be specified or the specified hw doesn't support AV1
> decoding."
>

That sounds right. You can change the message of course, but I don't think
you can necessarily change the logic.

> Btw, just curious, which hwaccel are you using that doesn't set up
> the hw_device_ctx?
>

I'm using DXVA2 that way, but also VA-API, VDPAU and VideoToolbox are all
set up to use the old hwaccel initialization.

- Hendrik
Re: [FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal Sub2Video Code
Soft Works (12021-08-30):
> You didn't reply to my questions, neither did you explain your plans.

I have given you pointers to the answers. I have documented my plans.

Are you asking me to spend time and effort explaining again to you
specifically? Are you asking me to be your personal teacher? I have all the
students I need, thank you very much.

I have said all I have to say to you. Until you prove you understand the
code as it is and as it needs to be for long-term maintenance, goodbye.

-- 
Nicolas George
Re: [FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal Sub2Video Code
> -----Original Message-----
> From: ffmpeg-devel On Behalf Of Nicolas George
> Sent: Monday, 30 August 2021 10:48
> To: FFmpeg development discussions and patches de...@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH v2 0/8] A New Take on the Immortal
> Sub2Video Code
>
> Soft Works (12021-08-30):
> > You didn't reply to my questions, neither did you explain your plans.
>
> I have given you pointers to the answers. I have documented my plans.

Where?

> Are you asking me to spend time and effort explaining again to you
> specifically? Are you asking me to be your personal teacher? I have
> all the students I need, thank you very much.
>
> I have said all I have to say to you. Until you prove you understand the
> code as it is and as it needs to be for long-term maintenance, goodbye.

That's a level I am not willing to go down to. Everybody can read and make
his own judgement.

softworkz
Re: [FFmpeg-devel] [PATCH 2/2] FATE: add tests for v360/ssim360 filters
Quoting Derek Buitenhuis (2021-08-09 15:43:02)
> On 8/9/2021 11:29 AM, Anton Khirnov wrote:
> > diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> > index b0348ccfa3..27dd0c4b47 100644
> > --- a/libavfilter/Makefile
> > +++ b/libavfilter/Makefile
> > @@ -559,7 +559,8 @@ SKIPHEADERS-$(CONFIG_VULKAN) += vulkan.h
> >
> > OBJS-$(CONFIG_LIBGLSLANG) += glslang.o
> >
> > -TOOLS = graph2dot
> > +TOOLS = graph2dot \
> > +        spherical_compare
>
> Is there a reason it needs a new tool rather than ffmpeg.c?

I do not believe every single testing-only feature needs to be stuffed into
ffmpeg.c. It's big and complicated enough already, while tests should
ideally be simple and test just the thing they are supposed to test.

> > +frame 0
> > +lavfi.ssim360.Y=0.97
> > +lavfi.ssim360.U=1.00
> > +lavfi.ssim360.V=1.00
> > +lavfi.ssim360.All=1.00
> > +lavfi.ssim360.dB=25.19
>
> Is it wise to do a non-fuzzy compare of floats?

One could hope that truncating to two decimal places might not break, but I
suppose it's still possible. Guess I could change the tool to parse the
strings and compare them properly.

-- 
Anton Khirnov
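For illustration, a tolerance-based comparison in the tool could look
roughly like this (sketch only, nothing like this exists in the patch yet;
ref_val/out_val stand in for the two printed metadata value strings):

    #include <math.h>
    #include <stdlib.h>

    /* compare two printed metric values with a tolerance instead of
     * requiring a byte-exact string match */
    static int fuzzy_equal(const char *ref_val, const char *out_val, double tol)
    {
        double a = strtod(ref_val, NULL);
        double b = strtod(out_val, NULL);
        double scale = fmax(1.0, fmax(fabs(a), fabs(b)));
        return fabs(a - b) <= tol * scale;
    }

    /* usage: if (!fuzzy_equal(ref_val, out_val, 1e-3)) -> report a mismatch */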
Re: [FFmpeg-devel] [PATCH 10/10] lavfi/vf_scale: pass the thread count to the scaler
On Mon, Aug 30, 2021 at 10:34:18AM +0200, Anton Khirnov wrote: > Quoting Michael Niedermayer (2021-08-29 22:22:04) > > On Sun, Aug 29, 2021 at 06:48:36PM +0200, Anton Khirnov wrote: > > > Quoting Michael Niedermayer (2021-08-09 22:30:06) > > > > On Sun, Aug 08, 2021 at 07:29:41PM +0200, Anton Khirnov wrote: > > > > > --- > > > > > libavfilter/vf_scale.c | 1 + > > > > > 1 file changed, 1 insertion(+) > > > > > > > > > > diff --git a/libavfilter/vf_scale.c b/libavfilter/vf_scale.c > > > > > index b62fb37d4b..14e202bf77 100644 > > > > > --- a/libavfilter/vf_scale.c > > > > > +++ b/libavfilter/vf_scale.c > > > > > @@ -542,6 +542,7 @@ static int config_props(AVFilterLink *outlink) > > > > > av_opt_set_int(*s, "sws_flags", scale->flags, 0); > > > > > av_opt_set_int(*s, "param0", scale->param[0], 0); > > > > > av_opt_set_int(*s, "param1", scale->param[1], 0); > > > > > +av_opt_set_int(*s, "threads", > > > > > ff_filter_get_nb_threads(ctx), 0); > > > > > if (scale->in_range != AVCOL_RANGE_UNSPECIFIED) > > > > > av_opt_set_int(*s, "src_range", > > > > > scale->in_range == AVCOL_RANGE_JPEG, > > > > > 0); > > > > > -- > > > > > 2.30.2 > > > > > > > > breaks: > > > > ./ffmpeg -i ~/tickets/1012/IV50_random_points.avi -threads 5 -y > > > > file1012.avi > > > > > > > > it contains horizontal bright green lines > > > > > > Should be fixed with the updated patches I sent just now. > > > > > > > > That said, I think the special 410->420 scaler should be dropped - its > > > output fundamentally depends on how the slices are submitted. Given that > > > > hmm, iam not sure i understand the issue you describe. > > Is this because the 410->420 scaler interpolates the chroma and to do so > > it has to handle the image borders differently? > > It handles _slice_ borders differently, not image borders. So the result > depends on how the image is partitioned into slices. > > I am not familiar with the generic scaler code, but it seems independent > of this partitioning, otherwise the threaded scaling tests would fail. the generic scaler simply stores the data originating from the previous slice (generally after the horizontal scaler) the 410->420 one probably should * store the one chroma line too somewhere * initialize it to the first image line * simplify all the 410->420 code so it always uses a pointer to the previous line either from the buffer or if available straight from the input image It seems not worth for just 410->420, and i agree but the same could be used for 420->444 and others which would have the same problem if one wanted to do higher quality chroma interpolation in the unscaled special converters thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The misfortune of the wise is better than the prosperity of the fool. -- Epicurus signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH v2 0/2] Support for stream dispositions in MP4
The first patch implements the CMAF-specified way of flagging what in FFmpeg are called stream dispositions. Other identifiers such as HTML media track kinds are allowed, but if there is a DASH identifier for something, it should be utilized instead. The second patch is a compatibility patch for one of the vendors that supports this feature. If this is considered too much of a hack, we can drop it from being upstreamed, but at least I wanted to bring it up :) . The compatibility mode is not the default, so it should also not proliferate such behavior. Compared to the first version: * Unused variables that had been missed and caused additional warnings were removed. Best regards, Jan Jan Ekström (2): avformat/{isom,mov,movenc}: add support for CMAF DASH roles avformat/{isom,movenc}: add kind box compatibility mode for Unified Origin libavformat/isom.c| 32 libavformat/isom.h| 18 + libavformat/mov.c | 67 +++ libavformat/movenc.c | 57 + libavformat/movenc.h | 2 + tests/fate/mov.mak| 17 .../ref/fate/mov-mp4-disposition-mpegts-remux | 81 +++ ...p4-disposition-unified-origin-mpegts-remux | 81 +++ 8 files changed, 355 insertions(+) create mode 100644 tests/ref/fate/mov-mp4-disposition-mpegts-remux create mode 100644 tests/ref/fate/mov-mp4-disposition-unified-origin-mpegts-remux -- 2.31.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH v2 1/2] avformat/{isom, mov, movenc}: add support for CMAF DASH roles
From: Jan Ekström This information is coded in a standard MP4 KindBox and utilizes the scheme and values as per the DASH role scheme defined in MPEG-DASH. Other schemes are technically allowed, but where multiple schemes define the same concepts, the DASH scheme should be utilized. Such flagging is additionally utilized by the DASH-IF CMAF ingest specification, enabling an encoder to inform the following component of the roles of the incoming media streams. A test is added for this functionality in a similar manner to the matroska test. Signed-off-by: Jan Ekström --- libavformat/isom.c| 19 + libavformat/isom.h| 12 +++ libavformat/mov.c | 67 +++ libavformat/movenc.c | 51 tests/fate/mov.mak| 9 +++ .../ref/fate/mov-mp4-disposition-mpegts-remux | 81 +++ 6 files changed, 239 insertions(+) create mode 100644 tests/ref/fate/mov-mp4-disposition-mpegts-remux diff --git a/libavformat/isom.c b/libavformat/isom.c index 4df5440023..300ba927c2 100644 --- a/libavformat/isom.c +++ b/libavformat/isom.c @@ -430,3 +430,22 @@ void ff_mov_write_chan(AVIOContext *pb, int64_t channel_layout) } avio_wb32(pb, 0); // mNumberChannelDescriptions } + +static const struct MP4TrackKindValueMapping dash_role_map[] = { +{ AV_DISPOSITION_HEARING_IMPAIRED|AV_DISPOSITION_CAPTIONS, +"caption" }, +{ AV_DISPOSITION_COMMENT, +"commentary" }, +{ AV_DISPOSITION_VISUAL_IMPAIRED|AV_DISPOSITION_DESCRIPTIONS, +"description" }, +{ AV_DISPOSITION_DUB, +"dub" }, +{ AV_DISPOSITION_FORCED, +"forced-subtitle" }, +{ 0, NULL } +}; + +const struct MP4TrackKindMapping ff_mov_track_kind_table[] = { +{ "urn:mpeg:dash:role:2011", dash_role_map }, +{ 0, NULL } +}; diff --git a/libavformat/isom.h b/libavformat/isom.h index 34a58c79b7..c62fcf2bfe 100644 --- a/libavformat/isom.h +++ b/libavformat/isom.h @@ -390,4 +390,16 @@ static inline enum AVCodecID ff_mov_get_lpcm_codec_id(int bps, int flags) #define MOV_ISMV_TTML_TAG MKTAG('d', 'f', 'x', 'p') #define MOV_MP4_TTML_TAG MKTAG('s', 't', 'p', 'p') +struct MP4TrackKindValueMapping { +int disposition; +const char *value; +}; + +struct MP4TrackKindMapping { +const char *scheme_uri; +const struct MP4TrackKindValueMapping *value_maps; +}; + +extern const struct MP4TrackKindMapping ff_mov_track_kind_table[]; + #endif /* AVFORMAT_ISOM_H */ diff --git a/libavformat/mov.c b/libavformat/mov.c index c5583e07c7..4330736fa3 100644 --- a/libavformat/mov.c +++ b/libavformat/mov.c @@ -28,6 +28,7 @@ #include #include "libavutil/attributes.h" +#include "libavutil/avstring.h" #include "libavutil/channel_layout.h" #include "libavutil/internal.h" #include "libavutil/intreadwrite.h" @@ -6835,6 +6836,71 @@ static int mov_read_dvcc_dvvc(MOVContext *c, AVIOContext *pb, MOVAtom atom) return 0; } +static int mov_read_kind(MOVContext *c, AVIOContext *pb, MOVAtom atom) +{ +AVFormatContext *ctx = c->fc; +AVStream *st = NULL; +char scheme_str[1024] = { 0 }, value_str[1024] = { 0 }; +int scheme_str_len = 0, value_str_len = 0; +int version, flags; +int64_t size = atom.size; + +if (atom.size < 6) +// 4 bytes for version + flags, 2x 1 byte for null +return AVERROR_INVALIDDATA; + +if (c->fc->nb_streams < 1) +return 0; +st = c->fc->streams[c->fc->nb_streams-1]; + +version = avio_r8(pb); +flags = avio_rb24(pb); +size -= 4; + +if (version != 0 || flags != 0) { +av_log(ctx, AV_LOG_ERROR, + "Unsupported 'kind' box with version %d, flags: %x", + version, flags); +return AVERROR_INVALIDDATA; +} + +scheme_str_len = avio_get_str(pb, size, scheme_str, sizeof(scheme_str)); +if (scheme_str_len < 0) +return AVERROR_INVALIDDATA; + +if (scheme_str_len 
== size) +// we need to have another string, even if nullptr +return AVERROR_INVALIDDATA; + +size -= scheme_str_len; + +value_str_len = avio_get_str(pb, size, value_str, sizeof(value_str)); +if (value_str_len < 0) +return AVERROR_INVALIDDATA; + +av_log(ctx, AV_LOG_TRACE, + "%s stream %d KindBox(scheme: %s, value: %s)\n", + av_get_media_type_string(st->codecpar->codec_type), + st->index, + scheme_str, value_str); + +for (int i = 0; ff_mov_track_kind_table[i].scheme_uri; i++) { +const struct MP4TrackKindMapping map = ff_mov_track_kind_table[i]; +if (!av_strstart(scheme_str, map.scheme_uri, NULL)) +continue; + +for (int j = 0; map.value_maps[j].disposition; j++) { +const struct MP4TrackKindValueMapping value_map = map.value_maps[j]; +if (!av_strstart(value_str, value_map.value, NULL)) +
[FFmpeg-devel] [PATCH v2 2/2] avformat/{isom, movenc}: add kind box compatibility mode for Unified Origin
From: Jan Ekström Unfortunately the current production versions of this software do not 100% adhere to the CMAF specification, and have decided to utilize the HTML5 media track identifier for audio descriptions. This way the default mode of operation is according to the CMAF specification, but it is also possible to output streams with which this piece of software is capable of interoperating with. Signed-off-by: Jan Ekström --- libavformat/isom.c| 23 -- libavformat/isom.h| 6 ++ libavformat/movenc.c | 12 ++- libavformat/movenc.h | 2 + tests/fate/mov.mak| 8 ++ ...p4-disposition-unified-origin-mpegts-remux | 81 +++ 6 files changed, 124 insertions(+), 8 deletions(-) create mode 100644 tests/ref/fate/mov-mp4-disposition-unified-origin-mpegts-remux diff --git a/libavformat/isom.c b/libavformat/isom.c index 300ba927c2..fb8ad3d824 100644 --- a/libavformat/isom.c +++ b/libavformat/isom.c @@ -433,19 +433,32 @@ void ff_mov_write_chan(AVIOContext *pb, int64_t channel_layout) static const struct MP4TrackKindValueMapping dash_role_map[] = { { AV_DISPOSITION_HEARING_IMPAIRED|AV_DISPOSITION_CAPTIONS, -"caption" }, +"caption", +KindWritingModeCMAF|KindWritingModeUnifiedOrigin }, { AV_DISPOSITION_COMMENT, -"commentary" }, +"commentary", +KindWritingModeCMAF|KindWritingModeUnifiedOrigin }, { AV_DISPOSITION_VISUAL_IMPAIRED|AV_DISPOSITION_DESCRIPTIONS, -"description" }, +"description", +KindWritingModeCMAF }, { AV_DISPOSITION_DUB, -"dub" }, +"dub", +KindWritingModeCMAF|KindWritingModeUnifiedOrigin }, { AV_DISPOSITION_FORCED, -"forced-subtitle" }, +"forced-subtitle", +KindWritingModeCMAF|KindWritingModeUnifiedOrigin }, +{ 0, NULL } +}; + +static const struct MP4TrackKindValueMapping html_kind_map[] = { +{ AV_DISPOSITION_VISUAL_IMPAIRED|AV_DISPOSITION_DESCRIPTIONS, +"main-desc", + KindWritingModeUnifiedOrigin }, { 0, NULL } }; const struct MP4TrackKindMapping ff_mov_track_kind_table[] = { { "urn:mpeg:dash:role:2011", dash_role_map }, +{ "about:html-kind", html_kind_map }, { 0, NULL } }; diff --git a/libavformat/isom.h b/libavformat/isom.h index c62fcf2bfe..1252fc6603 100644 --- a/libavformat/isom.h +++ b/libavformat/isom.h @@ -390,9 +390,15 @@ static inline enum AVCodecID ff_mov_get_lpcm_codec_id(int bps, int flags) #define MOV_ISMV_TTML_TAG MKTAG('d', 'f', 'x', 'p') #define MOV_MP4_TTML_TAG MKTAG('s', 't', 'p', 'p') +enum MP4TrackKindWritingMode { +KindWritingModeCMAF = (1 << 0), +KindWritingModeUnifiedOrigin = (1 << 1), +}; + struct MP4TrackKindValueMapping { int disposition; const char *value; +uint32_twriting_modes; }; struct MP4TrackKindMapping { diff --git a/libavformat/movenc.c b/libavformat/movenc.c index 4070fc9ef7..baaae7d3ad 100644 --- a/libavformat/movenc.c +++ b/libavformat/movenc.c @@ -111,6 +111,9 @@ static const AVOption options[] = { { "pts", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = MOV_PRFT_SRC_PTS}, 0, 0, AV_OPT_FLAG_ENCODING_PARAM, "prft"}, { "empty_hdlr_name", "write zero-length name string in hdlr atoms within mdia and minf atoms", offsetof(MOVMuxContext, empty_hdlr_name), AV_OPT_TYPE_BOOL, {.i64 = 0}, 0, 1, AV_OPT_FLAG_ENCODING_PARAM}, { "movie_timescale", "set movie timescale", offsetof(MOVMuxContext, movie_timescale), AV_OPT_TYPE_INT, {.i64 = MOV_TIMESCALE}, 1, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM}, +{ "kind_writing_mode", "set kind box writing mode", offsetof(MOVMuxContext, kind_writing_mode), AV_OPT_TYPE_INT, {.i64 = KindWritingModeCMAF}, KindWritingModeCMAF, KindWritingModeUnifiedOrigin, AV_OPT_FLAG_ENCODING_PARAM, "kind_writing_mode"}, +{ "cmaf", "CMAF writing mode", 0, AV_OPT_TYPE_CONST, 
{.i64 = KindWritingModeCMAF}, INT_MIN, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM, "kind_writing_mode"}, +{ "unified_origin", "Compatibility mode for Unified Origin (all DASH except for audio description)", 0, AV_OPT_TYPE_CONST, {.i64 = KindWritingModeUnifiedOrigin}, INT_MIN, INT_MAX, AV_OPT_FLAG_ENCODING_PARAM, "kind_writing_mode"}, { NULL }, }; @@ -3355,7 +3358,8 @@ static int mov_write_track_kind(AVIOContext *pb, const char *scheme_uri, return update_size(pb, pos); } -static int mov_write_track_kinds(AVIOContext *pb, AVStream *st) +static int mov_write_track_kinds(AVIOContext *pb, AVStream *st, + enum MP4TrackKindWritingMode mode) { int ret = AVERROR_BUG; @@ -3364,7 +3368,8 @@ static int mov_write_track_kinds(AVIOContext *pb, AVStream *st) for (int j = 0; map.value_maps[j].disposition; j++) { const struct MP4TrackKindValueMapping value_map = map.value_maps[j]; -
[FFmpeg-devel] [PATCH] webp: fix transforms after a palette with pixel packing.
When a color indexing transform with 16 or fewer colors is used, WebP uses "pixel packing", i.e. storing several pixels in one byte, which virtually reduces the width of the image (see WebPContext's reduced_width field). This reduced_width should always be used when reading and applying subsequent transforms. Updated patch with added fate test. The source image dual_transform.webp can be downloaded by cloning https://chromium.googlesource.com/webm/libwebp-test-data/ Fixes: 9368 --- libavcodec/webp.c | 34 ++- tests/fate/image.mak | 3 ++ .../fate/webp-rgb-lossless-palette-predictor | 6 3 files changed, 27 insertions(+), 16 deletions(-) create mode 100644 tests/ref/fate/webp-rgb-lossless-palette-predictor diff --git a/libavcodec/webp.c b/libavcodec/webp.c index 3efd4438d9..e4c67adc3a 100644 --- a/libavcodec/webp.c +++ b/libavcodec/webp.c @@ -181,7 +181,10 @@ typedef struct ImageContext { uint32_t *color_cache; /* color cache data */ int nb_huffman_groups; /* number of huffman groups */ HuffReader *huffman_groups; /* reader for each huffman group */ -int size_reduction; /* relative size compared to primary image, log2 */ +/* relative size compared to primary image, log2. + * for IMAGE_ROLE_COLOR_INDEXING with <= 16 colors, this is log2 of the + * number of pixels per byte in the primary image (pixel packing) */ +int size_reduction; int is_alpha_primary; } ImageContext; @@ -205,7 +208,9 @@ typedef struct WebPContext { int nb_transforms; /* number of transforms */ enum TransformType transforms[4]; /* transformations used in the image, in order */ -int reduced_width; /* reduced width for index image, if applicable */ +/* reduced width when using a color indexing transform with <= 16 colors (pixel packing) + * before pixels are unpacked, or same as width otherwise. 
*/ +int reduced_width; int nb_huffman_groups; /* number of huffman groups in the primary image */ ImageContext image[IMAGE_ROLE_NB]; /* image context for each role */ } WebPContext; @@ -425,13 +430,9 @@ static int decode_entropy_coded_image(WebPContext *s, enum ImageRole role, static int decode_entropy_image(WebPContext *s) { ImageContext *img; -int ret, block_bits, width, blocks_w, blocks_h, x, y, max; +int ret, block_bits, blocks_w, blocks_h, x, y, max; -width = s->width; -if (s->reduced_width > 0) -width = s->reduced_width; - -PARSE_BLOCK_SIZE(width, s->height); +PARSE_BLOCK_SIZE(s->reduced_width, s->height); ret = decode_entropy_coded_image(s, IMAGE_ROLE_ENTROPY, blocks_w, blocks_h); if (ret < 0) @@ -460,7 +461,7 @@ static int parse_transform_predictor(WebPContext *s) { int block_bits, blocks_w, blocks_h, ret; -PARSE_BLOCK_SIZE(s->width, s->height); +PARSE_BLOCK_SIZE(s->reduced_width, s->height); ret = decode_entropy_coded_image(s, IMAGE_ROLE_PREDICTOR, blocks_w, blocks_h); @@ -476,7 +477,7 @@ static int parse_transform_color(WebPContext *s) { int block_bits, blocks_w, blocks_h, ret; -PARSE_BLOCK_SIZE(s->width, s->height); +PARSE_BLOCK_SIZE(s->reduced_width, s->height); ret = decode_entropy_coded_image(s, IMAGE_ROLE_COLOR_TRANSFORM, blocks_w, blocks_h); @@ -620,7 +621,7 @@ static int decode_entropy_coded_image(WebPContext *s, enum ImageRole role, } width = img->frame->width; -if (role == IMAGE_ROLE_ARGB && s->reduced_width > 0) +if (role == IMAGE_ROLE_ARGB) width = s->reduced_width; x = 0; y = 0; @@ -925,7 +926,7 @@ static int apply_predictor_transform(WebPContext *s) int x, y; for (y = 0; y < img->frame->height; y++) { -for (x = 0; x < img->frame->width; x++) { +for (x = 0; x < s->reduced_width; x++) { int tx = x >> pimg->size_reduction; int ty = y >> pimg->size_reduction; enum PredictionMode m = GET_PIXEL_COMP(pimg->frame, tx, ty, 2); @@ -965,7 +966,7 @@ static int apply_color_transform(WebPContext *s) cimg = &s->image[IMAGE_ROLE_COLOR_TRANSFORM]; for (y = 0; y < img->frame->height; y++) { -for (x = 0; x < img->frame->width; x++) { +for (x = 0; x < s->reduced_width; x++) { cx = x >> cimg->size_reduction; cy = y >> cimg->size_reduction; cp = GET_PIXEL(cimg->frame, cx, cy); @@ -985,7 +986,7 @@ static int apply_subtract_green_transform(WebPContext *s) ImageContext *img = &s->image[IMAGE_ROLE_ARGB]; for (y = 0; y < img->frame->height; y++) { -for (x = 0; x < img->frame->width; x++) { +for (x = 0; x < s->reduced_width; x++) { uint8_t *p = GET_PIXEL(img->frame, x, y); p[1] += p[2];
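For readers not familiar with WebP lossless, the reduced_width / size_reduction relationship documented in the patch above is just the pixel-packing arithmetic from the format: with 16 or fewer palette entries several indices share one byte, and every later transform has to walk the packed columns. A small self-contained sketch (the helper name webp_reduced_width is invented; the thresholds and shifts follow the WebP lossless spec):

    /* palette size <= 2  -> 8 pixels per byte (size_reduction = 3)
     * palette size <= 4  -> 4 pixels per byte (size_reduction = 2)
     * palette size <= 16 -> 2 pixels per byte (size_reduction = 1)
     * otherwise          -> no packing        (size_reduction = 0) */
    static int webp_reduced_width(int width, int palette_size)
    {
        int size_reduction = palette_size <= 2  ? 3 :
                             palette_size <= 4  ? 2 :
                             palette_size <= 16 ? 1 : 0;
        return (width + (1 << size_reduction) - 1) >> size_reduction;
    }
    /* e.g. webp_reduced_width(100, 16) == 50: the predictor and colour
     * transforms touched above must iterate over 50 packed columns, not 100. */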
Re: [FFmpeg-devel] [PATCH 10/10] lavfi/vf_scale: pass the thread count to the scaler
On Mon, Aug 30, 2021 at 11:38:53AM +0200, Michael Niedermayer wrote: > On Mon, Aug 30, 2021 at 10:34:18AM +0200, Anton Khirnov wrote: > > Quoting Michael Niedermayer (2021-08-29 22:22:04) > > > On Sun, Aug 29, 2021 at 06:48:36PM +0200, Anton Khirnov wrote: > > > > Quoting Michael Niedermayer (2021-08-09 22:30:06) > > > > > On Sun, Aug 08, 2021 at 07:29:41PM +0200, Anton Khirnov wrote: > > > > > > --- > > > > > > libavfilter/vf_scale.c | 1 + > > > > > > 1 file changed, 1 insertion(+) > > > > > > > > > > > > diff --git a/libavfilter/vf_scale.c b/libavfilter/vf_scale.c > > > > > > index b62fb37d4b..14e202bf77 100644 > > > > > > --- a/libavfilter/vf_scale.c > > > > > > +++ b/libavfilter/vf_scale.c > > > > > > @@ -542,6 +542,7 @@ static int config_props(AVFilterLink *outlink) > > > > > > av_opt_set_int(*s, "sws_flags", scale->flags, 0); > > > > > > av_opt_set_int(*s, "param0", scale->param[0], 0); > > > > > > av_opt_set_int(*s, "param1", scale->param[1], 0); > > > > > > +av_opt_set_int(*s, "threads", > > > > > > ff_filter_get_nb_threads(ctx), 0); > > > > > > if (scale->in_range != AVCOL_RANGE_UNSPECIFIED) > > > > > > av_opt_set_int(*s, "src_range", > > > > > > scale->in_range == > > > > > > AVCOL_RANGE_JPEG, 0); > > > > > > -- > > > > > > 2.30.2 > > > > > > > > > > breaks: > > > > > ./ffmpeg -i ~/tickets/1012/IV50_random_points.avi -threads 5 -y > > > > > file1012.avi > > > > > > > > > > it contains horizontal bright green lines > > > > > > > > Should be fixed with the updated patches I sent just now. > > > > > > > > > > > That said, I think the special 410->420 scaler should be dropped - its > > > > output fundamentally depends on how the slices are submitted. Given that > > > > > > hmm, iam not sure i understand the issue you describe. > > > Is this because the 410->420 scaler interpolates the chroma and to do so > > > it has to handle the image borders differently? > > > > It handles _slice_ borders differently, not image borders. So the result > > depends on how the image is partitioned into slices. > > > > > I am not familiar with the generic scaler code, but it seems independent > > of this partitioning, otherwise the threaded scaling tests would fail. > > the generic scaler simply stores the data originating from the previous slice > (generally after the horizontal scaler) > > the 410->420 one probably should > * store the one chroma line too somewhere > * initialize it to the first image line > * simplify all the 410->420 code so it always uses a pointer to the previous > line either from the buffer or if available straight from the input image > > It seems not worth for just 410->420, and i agree but the same could > be used for 420->444 and others which would have the same problem if one > wanted to do higher quality chroma interpolation in the unscaled special > converters Just to clarify, this is meant as a path forward for the bug with chroma interpolation in the special converters which do or might want to use chroma interpolation. Its not a review comment to the patchset(s) thx [...] 
-- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB If the United States is serious about tackling the national security threats related to an insecure 5G network, it needs to rethink the extent to which it values corporate profits and government espionage over security.-Bruce Schneier signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 1/3] Revert "fftools/ffmpeg_filter: fix the flags parsing for scaler"
On Sat, Aug 28, 2021 at 4:45 AM Michael Niedermayer wrote: > On Sat, Aug 07, 2021 at 09:33:27PM +0200, Michael Niedermayer wrote: > > On Sat, Aug 07, 2021 at 06:15:05PM +0800, Linjie Fu wrote: > > > From: Linjie Fu > > > > > > This reverts commit b3a0548a981db52911dd34d9de254c4fee0a8f79. > > > > LGTM > > please apply this unless you intend to fix it in another way > Rebased and applied, thx. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 08/25] avformat/utils: Remove always-false check
Andreas Rheinhardt: > AVFormatContext.internal is already allocated by > avformat_alloc_context() on success; and on error, > avformat_alloc_context() cleans up manually without > avformat_free_context(). > > Signed-off-by: Andreas Rheinhardt > --- > libavformat/utils.c | 2 -- > 1 file changed, 2 deletions(-) > > diff --git a/libavformat/utils.c b/libavformat/utils.c > index 4caa3017fb..7d7fd16257 100644 > --- a/libavformat/utils.c > +++ b/libavformat/utils.c > @@ -1742,8 +1742,6 @@ return_packet: > /* XXX: suppress the packet queue */ > static void flush_packet_queue(AVFormatContext *s) > { > -if (!s->internal) > -return; > avpriv_packet_list_free(&s->internal->parse_queue, > &s->internal->parse_queue_end); > avpriv_packet_list_free(&s->internal->packet_buffer, > &s->internal->packet_buffer_end); > avpriv_packet_list_free(&s->internal->raw_packet_buffer, > &s->internal->raw_packet_buffer_end); > Will apply patches 8-17 (with the reference to FFStream removed from the commit message of #14) tomorrow unless there are objections. - Andreas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [v7 PATCH 1/1] libavfilter/vf_grayworld: Add gray world filter
That's great, thanks. How did you generate the NANs? I don't believe that under normal operation (i.e. with data converted from YUV video to RGB) we would expect to see negative values for the RGB data, so I guess it was from some specific input tests? Regards, Paul On Sun, Aug 29, 2021 at 12:38 PM Paul B Mahol wrote: > > Fixed some minor issues and major ones (NANs poping out) and applied. > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] Patchwork hook for bad commit messages
On Mon, 16. Aug 20:42, Michael Niedermayer wrote: > On Sun, Aug 15, 2021 at 01:53:23PM -0400, Andriy Gelman wrote: > > On Sun, 15. Aug 17:15, Michael Niedermayer wrote: > > > On Sun, Aug 15, 2021 at 08:18:17AM -0400, Andriy Gelman wrote: > > > > On Sun, 15. Aug 11:17, Nicolas George wrote: > > > > > Is it possible to add hooks in Patchwork to warn people automatically > > > > > when their commit message does not match standards? > > > > > > > > > > If it is possible, I volunteer to write it. > > > > > > > > > > > > > Yes, nice idea. > > > > > > > > If the commit message is invalid I could add a warning similar to: > > > > https://patchwork.ffmpeg.org/project/ffmpeg/patch/20210809102919.387-1-an...@khirnov.net/ > > > > and trigger an automated email to the author. > > > > > > > > Feel free to send me the parsing part (shell or python is ok) and I'll > > > > add it > > > > in. > > > > I'll also aim to put all this code in a repo somewhere so that others > > > > can add to it. > > > > > > If you want i can add a git repo to https://git.ffmpeg.org/gitweb > > > i just need to know name, who should have write access > > > (@ffmpeg_developers is > > > possible, if its something else then you will have to maintain the list of > > > people who have access), where commitlog mails should go. But you can > > > also put it on github > > > that said if anyone else wants basic git repos for any ffmpeg parts which > > > arent in teh main git, we can add it to git.ffmpeg.org too > > > > > > thx > > > > Thanks, a repo on gitweb with write access for @ffmpeg_developers sounds > > good. > > ok, we need a name for it, and should commit mails be sent to ffmpeg-devel ? Michael set up the repo here: https://git.ffmpeg.org/gitweb/patchwork_job_runner.git I've pushed the python code that can be used to setup a custom CI job runner for patchwork. Attached is a README patch if anyone wants to comment. -- Andriy >From 9481394dc834f89c4941c1b0dd57b27d9d9a7f5f Mon Sep 17 00:00:00 2001 From: Andriy Gelman Date: Mon, 30 Aug 2021 13:21:21 -0400 Subject: [PATCH] Add a README --- README | 57 + 1 file changed, 57 insertions(+) create mode 100644 README diff --git a/README b/README new file mode 100644 index 000..597bceb --- /dev/null +++ b/README @@ -0,0 +1,57 @@ +This repo has python helper to setup custom CI jobs (i.e. different +OS, architectures, etc) for FFmpeg or any other project that uses patchwork (see +https://github.com/getpatchwork/patchwork for more details). + +The script periodically queries the patchwork site for new patches. CI jobs are +run on new patches and the results are posted to the patchwork site. +In the context of FFmpeg, this includes running ./configure, make build, and make fate. + +-- CI jobs -- +The CI jobs are setup in class Job. This class has functions setup(), build(), +and unit_test(). In our implementation the functions start docker +containers which run ./configure, make build, and make fate. The containers are +launched using subprocess.run() which captures stdout, stderr, and success of +the process. The return struct is forwarded to the calling functions which +determines how to process the output (i.e. posts result to patchwork, notify +user by email). Custom jobs can therefore be created modifying the job class. Use of containers is +recomended for isolating the process. + +--- Caching results --- +The code currently uses a mysql database to track information and cache job +results. The config of the database are set by the environment variables: +PATCHWORK_DB_{HOST,USER,PASSWORD}. 
+ +--- Automated emails --- +If a CI job fails, an automated email is triggered to the patch author with a +link to the patchwork site where the warning or error is shown. To prevent +spamming the author, only one email is triggered per patch series. An email is +also only sent if the parent commit builds successfully. Thus if current +origin/master doesn't build, an email will not be sent (unless a commit fixes +the issue and breaks it another commit of the series). The environment variables +for connecting to an SMTP server are +PATCHWORK_SMTP_{HOST,PORT}, PATCHWORK_{USER_EMAIL,PASSWORD_EMAIL}. +The environment variable PATCHWORK_CC_EMAIL is used to add a cc email address. + +--- Patchwork authentication --- +An account (on https://patchwork.ffmpeg.org) and proper permission are needed to post +CI results back to the patchwork site. Email your patchwork site maintainer in +(FFmpeg/MAINTERNERS) with your username if you want the permissions added to your account. +After the permissions are set up, an API token can be obtained after logging in +to the patchwork site. The environment variable PATCHWORK_TOKEN stores the api +token. The variable PATCHWORK_HOST needs to be set to patchwork.ffmpeg.org or +another patchwork site. + +-- Other environemnt variables -- +The following variables are used by the docker container for the CI jo
Re: [FFmpeg-devel] [PATCH v1 1/1] avcodec/vble: Return value check for init_get_bits
Then remove old incomplete checks. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [v7 PATCH 1/1] libavfilter/vf_grayworld: Add gray world filter
On Mon, Aug 30, 2021 at 5:36 PM Paul Buxton wrote: > That's great, thanks. > > How did you generate the NANs? I don't believe that under normal operation > (i.e. with data converted from YUV video to RGB) we would expect to see > negative values for the RGB data, so I guess it was from some specific > input tests? > Just the opposite: converting all possible YUV values to RGB will result in out-of-range values. > > Regards, > Paul > > On Sun, Aug 29, 2021 at 12:38 PM Paul B Mahol wrote: > >> >> Fixed some minor issues and major ones (NANs poping out) and applied. >> > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
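A tiny worked example of that point, unrelated to the actual vf_grayworld code: take a representable YUV triplet with minimum luma and fully saturated chroma and run it through the usual BT.601 full-range equations; green lands far below zero, and any log()/pow() applied to it afterwards yields NaN. The constants below are the textbook BT.601 ones and are only an assumption for this illustration.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* representable as YUV, but outside the RGB gamut */
        double Y = 0, U = 255, V = 255;
        double r = Y + 1.402    * (V - 128);
        double g = Y - 0.344136 * (U - 128) - 0.714136 * (V - 128);
        double b = Y + 1.772    * (U - 128);

        printf("R=%.1f G=%.1f B=%.1f\n", r, g, b); /* G is about -134 */
        printf("log(G)=%f\n", log(g));             /* nan */
        return 0;
    }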
Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
NACK, code duplicated. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH v2 1/1] avcodec/vble: Return value check for init_get_bits
avcodec/vble: Return value check for init_get_bits As the second argument for init_get_bits can be crafted, a return value check for this function call is necessary. So replace init_get_bits with init_get_bits8 and remove a duplicate check before the callsite. --- libavcodec/vble.c | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/libavcodec/vble.c b/libavcodec/vble.c index f1400959e0..c1d3cdcc95 100644 --- a/libavcodec/vble.c +++ b/libavcodec/vble.c @@ -127,7 +127,7 @@ static int vble_decode_frame(AVCodecContext *avctx, void *data, int *got_frame, int ret; ThreadFrame frame = { .f = data }; -if (avpkt->size < 4 || avpkt->size - 4 > INT_MAX/8) { +if (avpkt->size < 4) { av_log(avctx, AV_LOG_ERROR, "Invalid packet size\n"); return AVERROR_INVALIDDATA; } @@ -146,7 +146,9 @@ static int vble_decode_frame(AVCodecContext *avctx, void *data, int *got_frame, if (version != 1) av_log(avctx, AV_LOG_WARNING, "Unsupported VBLE Version: %d\n", version); -init_get_bits(&gb, src + 4, (avpkt->size - 4) * 8); +ret = init_get_bits8(&gb, src + 4, avpkt->size - 4); +if (ret < 0) +return ret; /* Unpack */ if (vble_unpack(ctx, &gb) < 0) { -- 2.17.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
Soft Works: > Signed-off-by: softworkz > --- > v2 Update: > > - Implemented Andreas' suggestions > - overlay_subs filter: > - removed duplicated code > - implemented direct (no pre-conversion) blending of graphical > subtitle rects > - Supported input formats: > - all packed RGB formats (with and without alpha) > - yuv420p, yuv422p, yuv444p > > libavfilter/Makefile | 1 + > libavfilter/allfilters.c | 1 + > libavfilter/vf_overlay_subs.c | 549 ++ > libavfilter/vf_overlay_subs.h | 65 > 4 files changed, 616 insertions(+) > create mode 100644 libavfilter/vf_overlay_subs.c > create mode 100644 libavfilter/vf_overlay_subs.h > > +static const AVFilterPad overlay_subs_inputs[] = { > +{ > +.name = "main", > +.type = AVMEDIA_TYPE_VIDEO, > +.config_props = config_input_main, > +.flags= AVFILTERPAD_FLAG_NEEDS_WRITABLE, > +}, > +{ > +.name = "overlay", > +.type = AVMEDIA_TYPE_SUBTITLE, > +}, > +{ NULL } > +}; > + > +static const AVFilterPad overlay_subs_outputs[] = { > +{ > +.name = "default", > +.type = AVMEDIA_TYPE_VIDEO, > +.config_props = config_output, > +}, > +{ NULL } > +}; > + > +const AVFilter ff_vf_overlay_subs = { > +.name = "overlay_subs", > +.description = NULL_IF_CONFIG_SMALL("Overlay graphical subtitles on > top of the input."), > +.preinit = overlay_subs_framesync_preinit, > +.init = init, > +.uninit= uninit, > +.priv_size = sizeof(OverlaySubsContext), > +.priv_class= &overlay_subs_class, > +.query_formats = query_formats, > +.activate = activate, > +.inputs= overlay_subs_inputs, > +.outputs = overlay_subs_outputs, > +}; Did you test this? Did it work? It shouldn't: The sentinel for the inputs and outputs has been removed, see 8be701d9f7f77ff2282cc7fe6e0791ca5419de70. - Andreas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
> -Original Message- > From: ffmpeg-devel On Behalf Of > Andreas Rheinhardt > Sent: Monday, 30 August 2021 20:45 > To: ffmpeg-devel@ffmpeg.org > Subject: Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add > overlay_subs filter > > Soft Works: > > Signed-off-by: softworkz > > --- > > v2 Update: > > > > - Implemented Andreas' suggestions > > - overlay_subs filter: > > - removed duplicated code > > - implemented direct (no pre-conversion) blending of graphical > > subtitle rects > > - Supported input formats: > > - all packed RGB formats (with and without alpha) > > - yuv420p, yuv422p, yuv444p > > > > libavfilter/Makefile | 1 + > > libavfilter/allfilters.c | 1 + > > libavfilter/vf_overlay_subs.c | 549 > ++ > > libavfilter/vf_overlay_subs.h | 65 > > 4 files changed, 616 insertions(+) > > create mode 100644 libavfilter/vf_overlay_subs.c > > create mode 100644 libavfilter/vf_overlay_subs.h > > > > +static const AVFilterPad overlay_subs_inputs[] = { > > +{ > > +.name = "main", > > +.type = AVMEDIA_TYPE_VIDEO, > > +.config_props = config_input_main, > > +.flags= AVFILTERPAD_FLAG_NEEDS_WRITABLE, > > +}, > > +{ > > +.name = "overlay", > > +.type = AVMEDIA_TYPE_SUBTITLE, > > +}, > > +{ NULL } > > +}; > > + > > +static const AVFilterPad overlay_subs_outputs[] = { > > +{ > > +.name = "default", > > +.type = AVMEDIA_TYPE_VIDEO, > > +.config_props = config_output, > > +}, > > +{ NULL } > > +}; > > + > > +const AVFilter ff_vf_overlay_subs = { > > +.name = "overlay_subs", > > +.description = NULL_IF_CONFIG_SMALL("Overlay graphical > subtitles on top of the input."), > > +.preinit = overlay_subs_framesync_preinit, > > +.init = init, > > +.uninit= uninit, > > +.priv_size = sizeof(OverlaySubsContext), > > +.priv_class= &overlay_subs_class, > > +.query_formats = query_formats, > > +.activate = activate, > > +.inputs= overlay_subs_inputs, > > +.outputs = overlay_subs_outputs, > > +}; > Did you test this? Did it work? It shouldn't: The sentinel for the > inputs and outputs has been removed, see > 8be701d9f7f77ff2282cc7fe6e0791ca5419de70. It's working fine but I haven't re-based for one or two weeks. Will do. Thanks for the hint. softworkz ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
> -Original Message- > From: ffmpeg-devel On Behalf Of > Paul B Mahol > Sent: Monday, 30 August 2021 20:10 > To: FFmpeg development discussions and patches de...@ffmpeg.org> > Subject: Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add > overlay_subs filter > > NACK, code duplicated. Thanks for looking at my patch. While the code is based on the overlay filter's code, there are substantial differences: The overlay functions take AVSubtitleRect images for overlay, which are PAL8 images, and those need quite different treatment. For example, I'm only converting the 256 palette entries from RGBA to YUVA for effective blending. With those differences, it doesn't make sense to merge this into the existing functions in vf_overlay as it would result in ugly code which would be hard to maintain. There is unnecessary duplication, though, between the new sub2video filter and the overlay_subs filter. I am planning to merge these into a single file as a next step. I'm planning to rename as follows: - overlay_graphicsubs (this filter) - graphicsubs2video (in the same file) Analogous to these I will submit the following two filters: - overlay_textsubs - textsubs2video (in the same file) => the current subtitles filter can then be deprecated (with overlay_textsubs, it will no longer be necessary to open the input separately) Finally, I'm planning to add a modified eia608 filter variant: - eia608_split (video-in, video-out, subtitles-out) => it will no longer be necessary to use the movie protocol for getting closed captions Please let me know whether it makes sense to you. Thanks, softworkz ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
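To make the "convert only the 256 palette entries" idea above concrete, a minimal sketch could look like the following. This is not code from the patch: the 0xAARRGGBB packing of the palette entries and the integer BT.601 limited-range constants are assumptions made for the illustration, and the helper name is invented.

    #include <stdint.h>

    /* Hypothetical helper: convert a PAL8 palette of 256 0xAARRGGBB entries
     * to YUVA once, so the per-pixel blending loop can stay in YUV and no
     * full-frame pixel-format conversion of the subtitle rect is needed. */
    static void palette_argb_to_yuva(const uint32_t *argb, uint32_t *yuva)
    {
        for (int i = 0; i < 256; i++) {
            int a = argb[i] >> 24;
            int r = (argb[i] >> 16) & 0xff;
            int g = (argb[i] >>  8) & 0xff;
            int b =  argb[i]        & 0xff;
            /* integer BT.601 limited-range approximation */
            int y = ((  66 * r + 129 * g +  25 * b + 128) >> 8) + 16;
            int u = (( -38 * r -  74 * g + 112 * b + 128) >> 8) + 128;
            int v = (( 112 * r -  94 * g -  18 * b + 128) >> 8) + 128;
            yuva[i] = ((uint32_t)a << 24) | (v << 16) | (u << 8) | y;
        }
    }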
Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
No, I think you lack major skills and motivation to tackle this subject. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add overlay_subs filter
> -Original Message- > From: ffmpeg-devel On Behalf Of > Paul B Mahol > Sent: Monday, 30 August 2021 21:51 > To: FFmpeg development discussions and patches de...@ffmpeg.org> > Subject: Re: [FFmpeg-devel] [PATCH v2 4/8] avfilter/overlay_subs: Add > overlay_subs filter > > No, I think you lack mayor skills and motivations to tackle this > subject. You know nothing about me. Just focus on what I submit. softworkz ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 2/2] avcodec/libx265: add support for setting chroma sample location
On Sun, Aug 29, 2021 at 7:43 PM Jan Ekström wrote: > > Unlike libx264, libx265 does not handle the chroma format check > on its own side, so in order to not write out values which are > supposed to be ignored according to the specification, we limit > the writing out of chroma sample location to 4:2:0 only. > --- > libavcodec/libx265.c | 13 + > 1 file changed, 13 insertions(+) > > diff --git a/libavcodec/libx265.c b/libavcodec/libx265.c > index 71affbf61b..839b6ce9de 100644 > --- a/libavcodec/libx265.c > +++ b/libavcodec/libx265.c > @@ -211,6 +211,19 @@ static av_cold int libx265_encode_init(AVCodecContext > *avctx) > ctx->params->vui.matrixCoeffs= avctx->colorspace; > } > > +// chroma sample location values are to be ignored in case of non-4:2:0 > +// according to the specification, so we only write them out in case of > +// 4:2:0 (log2_chroma_{w,h} == 1). > +ctx->params->vui.bEnableChromaLocInfoPresentFlag = > +avctx->chroma_sample_location != AVCHROMA_LOC_UNSPECIFIED && > +desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1; Checked this another time, and this check seems correct. HWAccel pix_fmts like VAAPI are also marked as 4:2:0, but thankfully they're not on the supported list for this module. For the source of the check, it is based on the following in the specification: Otherwise (chroma_format_idc is not equal to 1), the values of the syntax elements chroma_sample_loc_type_top_field and chroma_sample_loc_type_bottom_field shall be ignored. When chroma_format_idc is equal to 2 (4:2:2 chroma format) or 3 (4:4:4 chroma format), the location of chroma samples is specified in clause 6.2. When chroma_format_idc is equal to 0, there is no chroma sample array. > + > +if (ctx->params->vui.bEnableChromaLocInfoPresentFlag) { > +ctx->params->vui.chromaSampleLocTypeTopField = > +ctx->params->vui.chromaSampleLocTypeBottomField = > +avctx->chroma_sample_location - 1; > +} For progressive content both values should be set to the same value. For interlaced content in theory the chroma location field value is not necessarily the same, but: 1. we have a single value in our APIs 2. x264 does this exact same thing, utilizes a single value interface and sets both fields to the same value. Jan ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
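As a side note on the "- 1" in the hunk above: AVChromaLocation is effectively the VUI chroma_sample_loc_type shifted by one so that 0 can mean "unspecified", which is why the plain subtraction gives the value the encoder expects (the libx264 wrapper relies on the same relationship). A one-line sketch with an invented helper name:

    #include "libavutil/pixfmt.h"

    /* AVCHROMA_LOC_LEFT (1) -> 0, AVCHROMA_LOC_CENTER (2) -> 1,
     * AVCHROMA_LOC_TOPLEFT (3) -> 2, and so on. */
    static inline int chroma_loc_to_vui(enum AVChromaLocation loc)
    {
        return (int)loc - 1;
    }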
Re: [FFmpeg-devel] [PATCH] avfilter/vf_scale: set the RGB matrix coefficients in case of RGB
On Sun, Aug 29, 2021 at 10:05 PM Jan Ekström wrote: > > On Sun, Aug 29, 2021 at 9:21 PM Paul B Mahol wrote: > > > > probably fine if fate passes > > Yea, FATE passes :) . I think this stuff not being noticed until now > is due to nothing checking the metadata values returned by vf_scale > (since pix_fmt and actual logic is not changed at all with these > output AVFrame metadata changes): > - The first fix I did was for RGB->YCbCr still being flagged as RGB > (and thus encoders like the libx264 wrapper would gladly comply, > leading to bugs like issue #9132 ) > - This one fixes the opposite conversion where your YCbCr input has > matrix coefficients configured, and the RGB output still has that > value as-is from the av_frame_copy_props call (and once again, encoder > complies). If there are no further comments, I will soon apply this to fix both sides of the YCbCr<->RGB conversion output in case the input format happens to have the matrix coefficients configured (and thus copied over by av_frame_copy_props). These can then be back-ported to 4.4 since it was the first release to plug input/filtered AVFrames' metadata into output, which brought the issue of input metadata being passed through as-is up. The only question in my mind was whether to set it to AVCOL_SPC_UNSPECIFIED or AVCOL_SPC_RGB . I chose the latter one since the value literally matches what we are checking there: If output pix_fmt is RGB, set output value to RGB. Jan ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
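For readers following along without the patch in front of them, the kind of change being discussed is small; a rough sketch, not the committed diff (the helper name is invented, only AVCOL_SPC_RGB and the av_frame_copy_props() behaviour are taken from the discussion above):

    #include "libavutil/frame.h"
    #include "libavutil/pixdesc.h"

    /* Once the output pix_fmt is RGB, the matrix tag copied over by
     * av_frame_copy_props() from the YCbCr input no longer applies,
     * so retag the output frame. */
    static void retag_rgb_output(AVFrame *out, enum AVPixelFormat out_format)
    {
        if (av_pix_fmt_desc_get(out_format)->flags & AV_PIX_FMT_FLAG_RGB)
            out->colorspace = AVCOL_SPC_RGB;
    }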
Re: [FFmpeg-devel] [PATCH] fftools/cmdutils: Fix warning for initialization makes integer from pointer without a cast
On Fri, Aug 20, 2021 at 09:29:33PM +0800, lance.lmw...@gmail.com wrote: > From: Limin Wang > > Signed-off-by: Limin Wang > --- > fftools/cmdutils.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/fftools/cmdutils.c b/fftools/cmdutils.c > index 2dd035a..ae34118 100644 > --- a/fftools/cmdutils.c > +++ b/fftools/cmdutils.c > @@ -853,7 +853,7 @@ int opt_cpucount(void *optctx, const char *opt, const > char *arg) > int count; > > static const AVOption opts[] = { > -{"count", NULL, 0, AV_OPT_TYPE_INT, { .i64 = -1}, -1, INT_MAX, NULL}, > +{"count", NULL, 0, AV_OPT_TYPE_INT, { .i64 = -1}, -1, INT_MAX}, > {NULL}, > }; > static const AVClass class = { > -- > 1.8.3.1 > will apply tomorrow unless there are objections. -- Thanks, Limin Wang ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH 01/10] libavfilter/vulkan: Fix problem when device have queue_count greater than 1
From: "Chen,Wenbin" If the descriptorSetCount is greater than the number of setLayouts, vkAllocateDescriptorSets will report error. Now fix it. Now the following command can run on the device that has queue_count greater than one: ffmpeg -v verbose -init_hw_device vulkan=vul:0 -filter_hw_device vul -i input1080p.264 -vf "hwupload=extra_hw_frames=16,scale_vulkan=1920:1080, hwdownload,format=yuv420p" -f rawvideo output.yuv Signed-off-by: Wenbin Chen --- libavfilter/vulkan.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/libavfilter/vulkan.c b/libavfilter/vulkan.c index 337c8d7d5a..e5b070b3e6 100644 --- a/libavfilter/vulkan.c +++ b/libavfilter/vulkan.c @@ -1160,7 +1160,7 @@ void ff_vk_update_descriptor_set(AVFilterContext *avctx, VulkanPipeline *pl, VulkanFilterContext *s = avctx->priv; vkUpdateDescriptorSetWithTemplate(s->hwctx->act_dev, - pl->desc_set[s->cur_queue_idx * pl->desc_layout_num + set_id], + pl->desc_set[set_id], pl->desc_template[set_id], s); } @@ -1179,7 +1179,7 @@ int ff_vk_init_pipeline_layout(AVFilterContext *avctx, VulkanPipeline *pl) VkResult ret; VulkanFilterContext *s = avctx->priv; -pl->descriptor_sets_num = pl->desc_layout_num * s->queue_count; +pl->descriptor_sets_num = pl->desc_layout_num; { /* Init descriptor set pool */ VkDescriptorPoolCreateInfo pool_create_info = { -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH 02/10] libavutil/hwcontext_vulkan: fix a tile mismatch problem
From: "Chen,Wenbin" We should configure VkImageSubresource according to tiling rather than extension. We use extension to set tiling only when we map from drm. Normally the output VkImages are not created in this way, and it will report error when we map these VkImage to drm, so we should configure VkImageSubresource according to tiling rather than extension. Now fix it. Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vulkan.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/libavutil/hwcontext_vulkan.c b/libavutil/hwcontext_vulkan.c index 94fdad7f06..88db5b8b70 100644 --- a/libavutil/hwcontext_vulkan.c +++ b/libavutil/hwcontext_vulkan.c @@ -2888,7 +2888,7 @@ static int vulkan_map_to_drm(AVHWFramesContext *hwfc, AVFrame *dst, for (int i = 0; i < drm_desc->nb_layers; i++) { VkSubresourceLayout layout; VkImageSubresource sub = { -.aspectMask = p->extensions & EXT_DRM_MODIFIER_FLAGS ? +.aspectMask = f->tiling == VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT ? VK_IMAGE_ASPECT_MEMORY_PLANE_0_BIT_EXT : VK_IMAGE_ASPECT_COLOR_BIT, }; -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH 03/10] libavfilter/vulkan: Fix the way to use sem
From: "Chen,Wenbin" We chould set waitSem and signalSem differently. Current ffmpeg-vulkan uses the same sem to set waitSem and signalSem and it doesn't work on latest intel-vulkan-driver. The commit: a193060221c4df123e26a562949cae5df3e73cde on mesa causes this problem. This commit add code to resets the signalSem. This will reset waitSem too on current ffmpeg-vulkan. Now set waitSem and signalSem separetely. Now the following command can run on the latest mesa on intel platform: ffmpeg -v verbose -init_hw_device vulkan=vul:0,linear_images=1 -filter_hw_device vul -i input1080p.264 -vf "hwupload=extra_hw_frames=16,scale_vulkan=1920:1080, hwdownload,format=yuv420p" -f rawvideo output.yuv Signed-off-by: Wenbin Chen --- libavfilter/vf_avgblur_vulkan.c | 4 +-- libavfilter/vf_chromaber_vulkan.c | 4 +-- libavfilter/vf_overlay_vulkan.c | 6 ++-- libavfilter/vf_scale_vulkan.c | 4 +-- libavfilter/vulkan.c | 55 +-- libavfilter/vulkan.h | 3 +- libavutil/hwcontext_vulkan.c | 14 7 files changed, 50 insertions(+), 40 deletions(-) diff --git a/libavfilter/vf_avgblur_vulkan.c b/libavfilter/vf_avgblur_vulkan.c index 5ae487fc8c..d2104c191e 100644 --- a/libavfilter/vf_avgblur_vulkan.c +++ b/libavfilter/vf_avgblur_vulkan.c @@ -304,8 +304,8 @@ static int process_frames(AVFilterContext *avctx, AVFrame *out_f, AVFrame *tmp_f vkCmdDispatch(cmd_buf, s->vkctx.output_width, FFALIGN(s->vkctx.output_height, CGS)/CGS, 1); -ff_vk_add_exec_dep(avctx, s->exec, in_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); -ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); +ff_vk_add_exec_dep(avctx, s->exec, in_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 1); +ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 0); err = ff_vk_submit_exec_queue(avctx, s->exec); if (err) diff --git a/libavfilter/vf_chromaber_vulkan.c b/libavfilter/vf_chromaber_vulkan.c index 96fdd7bd9c..fe66a31cea 100644 --- a/libavfilter/vf_chromaber_vulkan.c +++ b/libavfilter/vf_chromaber_vulkan.c @@ -249,8 +249,8 @@ static int process_frames(AVFilterContext *avctx, AVFrame *out_f, AVFrame *in_f) FFALIGN(s->vkctx.output_width, CGROUPS[0])/CGROUPS[0], FFALIGN(s->vkctx.output_height, CGROUPS[1])/CGROUPS[1], 1); -ff_vk_add_exec_dep(avctx, s->exec, in_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); -ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); +ff_vk_add_exec_dep(avctx, s->exec, in_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 1); +ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 0); err = ff_vk_submit_exec_queue(avctx, s->exec); if (err) diff --git a/libavfilter/vf_overlay_vulkan.c b/libavfilter/vf_overlay_vulkan.c index 1815709d82..2e5bef5be5 100644 --- a/libavfilter/vf_overlay_vulkan.c +++ b/libavfilter/vf_overlay_vulkan.c @@ -331,9 +331,9 @@ static int process_frames(AVFilterContext *avctx, AVFrame *out_f, FFALIGN(s->vkctx.output_width, CGROUPS[0])/CGROUPS[0], FFALIGN(s->vkctx.output_height, CGROUPS[1])/CGROUPS[1], 1); -ff_vk_add_exec_dep(avctx, s->exec, main_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); -ff_vk_add_exec_dep(avctx, s->exec, overlay_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); -ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); +ff_vk_add_exec_dep(avctx, s->exec, main_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 1); +ff_vk_add_exec_dep(avctx, s->exec, overlay_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 1); +ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 0); err = ff_vk_submit_exec_queue(avctx, s->exec); if (err) diff --git 
a/libavfilter/vf_scale_vulkan.c b/libavfilter/vf_scale_vulkan.c index 4eb4fe5664..0d946e0416 100644 --- a/libavfilter/vf_scale_vulkan.c +++ b/libavfilter/vf_scale_vulkan.c @@ -377,8 +377,8 @@ static int process_frames(AVFilterContext *avctx, AVFrame *out_f, AVFrame *in_f) FFALIGN(s->vkctx.output_width, CGROUPS[0])/CGROUPS[0], FFALIGN(s->vkctx.output_height, CGROUPS[1])/CGROUPS[1], 1); -ff_vk_add_exec_dep(avctx, s->exec, in_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); -ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT); +ff_vk_add_exec_dep(avctx, s->exec, in_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 1); +ff_vk_add_exec_dep(avctx, s->exec, out_f, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, 0); err = ff_vk_submit_exec_queue(avctx, s->exec); if (err) diff --git a/libavfilter/vulkan.c b/libavfilter/vulkan.c index e5b070b3e6..e8cbf66b2b 100644 --- a/libavfilter/vulkan.c +++ b/libavfilter/vulkan.c @@ -462,9 +462,10 @@ VkCommandBuffer ff_vk_get_exec_buf(AVFilterContext *avctx, FFVkExecContext *e) } int ff_vk_add_exec_dep(AVFilterContext *avctx, FFVkExecConte
[FFmpeg-devel] [PATCH 04/10] hwcontext_vaapi: Use PRIME_2 memory type for modifiers.
From: Bas Nieuwenhuizen This way we can pass explicit modifiers in. Sometimes the modifier matters for the number of memory planes that libva accepts, in particular when dealing with driver-compressed textures. Furthermore the driver might not actually be able to determine the implicit modifier if all the buffer-passing has used explicit modifier. All these issues should be resolved by passing in the modifier, and for that we switch to using the PRIME_2 memory type. Tested with experimental radeonsi patches for modifiers and kmsgrab. Also tested with radeonsi without the patches to double-check it works without PRIME_2 support. v2: Cache PRIME_2 support to avoid doing two calls every time on libva drivers that do not support it. v3: Remove prime2_vas usage. --- libavutil/hwcontext_vaapi.c | 158 ++-- 1 file changed, 114 insertions(+), 44 deletions(-) diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c index 83e542876d..75acc851d6 100644 --- a/libavutil/hwcontext_vaapi.c +++ b/libavutil/hwcontext_vaapi.c @@ -79,6 +79,9 @@ typedef struct VAAPIFramesContext { unsigned int rt_format; // Whether vaDeriveImage works. int derive_works; +// Caches whether VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2 is unsupported for +// surface imports. +int prime_2_import_unsupported; } VAAPIFramesContext; typedef struct VAAPIMapping { @@ -1022,32 +1025,17 @@ static void vaapi_unmap_from_drm(AVHWFramesContext *dst_fc, static int vaapi_map_from_drm(AVHWFramesContext *src_fc, AVFrame *dst, const AVFrame *src, int flags) { +VAAPIFramesContext *src_vafc = src_fc->internal->priv; AVHWFramesContext *dst_fc = (AVHWFramesContext*)dst->hw_frames_ctx->data; AVVAAPIDeviceContext *dst_dev = dst_fc->device_ctx->hwctx; const AVDRMFrameDescriptor *desc; const VAAPIFormatDescriptor *format_desc; VASurfaceID surface_id; -VAStatus vas; +VAStatus vas = VA_STATUS_SUCCESS; +int use_prime2; uint32_t va_fourcc; -int err, i, j, k; - -unsigned long buffer_handle; -VASurfaceAttribExternalBuffers buffer_desc; -VASurfaceAttrib attrs[2] = { -{ -.type = VASurfaceAttribMemoryType, -.flags = VA_SURFACE_ATTRIB_SETTABLE, -.value.type= VAGenericValueTypeInteger, -.value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME, -}, -{ -.type = VASurfaceAttribExternalBufferDescriptor, -.flags = VA_SURFACE_ATTRIB_SETTABLE, -.value.type= VAGenericValueTypePointer, -.value.value.p = &buffer_desc, -} -}; +int err, i, j; desc = (AVDRMFrameDescriptor*)src->data[0]; @@ -1083,35 +1071,117 @@ static int vaapi_map_from_drm(AVHWFramesContext *src_fc, AVFrame *dst, format_desc = vaapi_format_from_fourcc(va_fourcc); av_assert0(format_desc); -buffer_handle = desc->objects[0].fd; -buffer_desc.pixel_format = va_fourcc; -buffer_desc.width= src_fc->width; -buffer_desc.height = src_fc->height; -buffer_desc.data_size= desc->objects[0].size; -buffer_desc.buffers = &buffer_handle; -buffer_desc.num_buffers = 1; -buffer_desc.flags= 0; - -k = 0; -for (i = 0; i < desc->nb_layers; i++) { -for (j = 0; j < desc->layers[i].nb_planes; j++) { -buffer_desc.pitches[k] = desc->layers[i].planes[j].pitch; -buffer_desc.offsets[k] = desc->layers[i].planes[j].offset; -++k; +use_prime2 = !src_vafc->prime_2_import_unsupported && + desc->objects[0].format_modifier != DRM_FORMAT_MOD_INVALID; +if (use_prime2) { +VADRMPRIMESurfaceDescriptor prime_desc; +VASurfaceAttrib prime_attrs[2] = { +{ +.type = VASurfaceAttribMemoryType, +.flags = VA_SURFACE_ATTRIB_SETTABLE, +.value.type= VAGenericValueTypeInteger, +.value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2, +}, +{ +.type = 
VASurfaceAttribExternalBufferDescriptor, +.flags = VA_SURFACE_ATTRIB_SETTABLE, +.value.type= VAGenericValueTypePointer, +.value.value.p = &prime_desc, +} +}; +prime_desc.fourcc = va_fourcc; +prime_desc.width = src_fc->width; +prime_desc.height = src_fc->height; +prime_desc.num_objects = desc->nb_objects; +for (i = 0; i < desc->nb_objects; ++i) { +prime_desc.objects[i].fd = desc->objects[i].fd; +prime_desc.objects[i].size = desc->objects[i].size; +prime_desc.objects[i].drm_format_modifier = +desc->objects[i].format_modifier; } -} -buffer_desc.num_planes = k; -if (format_desc->chroma_planes_swapped && -buffer_desc.nu
[FFmpeg-devel] [PATCH 05/10] libavutil/hwcontext_vaapi: Add a new nv12 format map to support vulkan frame
From: "Chen,Wenbin" Vulkan will map nv12 to R8 and GR88, so add this map to vaapi to support vulkan frame. Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vaapi.c | 1 + 1 file changed, 1 insertion(+) diff --git a/libavutil/hwcontext_vaapi.c b/libavutil/hwcontext_vaapi.c index 75acc851d6..994b744e4d 100644 --- a/libavutil/hwcontext_vaapi.c +++ b/libavutil/hwcontext_vaapi.c @@ -992,6 +992,7 @@ static const struct { } vaapi_drm_format_map[] = { #ifdef DRM_FORMAT_R8 DRM_MAP(NV12, 2, DRM_FORMAT_R8, DRM_FORMAT_RG88), +DRM_MAP(NV12, 2, DRM_FORMAT_R8, DRM_FORMAT_GR88), #endif DRM_MAP(NV12, 1, DRM_FORMAT_NV12), #if defined(VA_FOURCC_P010) && defined(DRM_FORMAT_R16) -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH 06/10] libavutil/hwcontext_vulkan: Add one_memory flag to make vulkan compatible with vaapi device.
From: "Chen,Wenbin" Vaapi can import external surface, but all the planes of the external frames should be in the same drm object. A new flag is introduced and vulkan can choose to allocate planes in one memory according this flag. This flag will be enabled when the vulkan device is derived from vaapi device, so that this change will not affect current vulkan behaviour. Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vulkan.c | 12 +++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/libavutil/hwcontext_vulkan.c b/libavutil/hwcontext_vulkan.c index 9a29267aed..6417f59d4a 100644 --- a/libavutil/hwcontext_vulkan.c +++ b/libavutil/hwcontext_vulkan.c @@ -211,6 +211,9 @@ typedef struct VulkanDevicePriv { /* Settings */ int use_linear_images; +/* map all planes to one memory */ +int use_one_memory; + /* Nvidia */ int dev_is_nvidia; } VulkanDevicePriv; @@ -1321,6 +1324,11 @@ static int vulkan_device_create_internal(AVHWDeviceContext *ctx, if (opt_d) p->use_linear_images = strtol(opt_d->value, NULL, 10); +opt_d = av_dict_get(opts, "one_memory", NULL, 0); +if (opt_d) +p->use_one_memory = strtol(opt_d->value, NULL, 10); + + hwctx->enabled_dev_extensions = dev_info.ppEnabledExtensionNames; hwctx->nb_enabled_dev_extensions = dev_info.enabledExtensionCount; @@ -1443,8 +1451,10 @@ static int vulkan_device_derive(AVHWDeviceContext *ctx, return AVERROR_EXTERNAL; } -if (strstr(vendor, "Intel")) +if (strstr(vendor, "Intel")) { +av_dict_set_int(&opts, "one_memory", 1, 0); dev_select.vendor_id = 0x8086; +} if (strstr(vendor, "AMD")) dev_select.vendor_id = 0x1002; -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH 08/10] libavutil/hwcontext_vulkan: fix wrong offset of plane
From: "Chen,Wenbin" According to spec, if we use VkBindImagePlaneMemoryInfo to bind image we mush create image with disjoint flag. The offset in subresourcelayout is relative to the base address of the plane, but the offset in drm is relative to the drm objectis so I think this offset should be 0. Also, when I import vaapi frame to vulkan I got broken frame, and setting plane_data->offset to 0 makes command works. Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vulkan.c | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/libavutil/hwcontext_vulkan.c b/libavutil/hwcontext_vulkan.c index 4983518a77..3a639c997b 100644 --- a/libavutil/hwcontext_vulkan.c +++ b/libavutil/hwcontext_vulkan.c @@ -2382,7 +2382,9 @@ static int vulkan_map_from_drm_frame_desc(AVHWFramesContext *hwfc, AVVkFrame **f .extent.depth = 1, .mipLevels = 1, .arrayLayers = 1, -.flags = VK_IMAGE_CREATE_ALIAS_BIT, +.flags = VK_IMAGE_CREATE_ALIAS_BIT | + (has_modifiers && planes > 1) ? VK_IMAGE_CREATE_DISJOINT_BIT : + 0, .tiling= f->tiling, .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED, /* specs say so */ .usage = frames_hwctx->usage, @@ -2397,7 +2399,7 @@ static int vulkan_map_from_drm_frame_desc(AVHWFramesContext *hwfc, AVVkFrame **f hwfc->sw_format, src->width, src->height, i); for (int j = 0; j < planes; j++) { -plane_data[j].offset = desc->layers[i].planes[j].offset; +plane_data[j].offset = 0; plane_data[j].rowPitch = desc->layers[i].planes[j].pitch; plane_data[j].size = 0; /* The specs say so for all 3 */ plane_data[j].arrayPitch = 0; -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH 07/10] libavutil/hwcontext_vulkan: Allocate vkFrame in one memory
From: "Chen,Wenbin" The vaapi can import external frame, but the planes of the external frames should be in the same drm object. I add a new function to allocate vkFrame in one memory and vulkan device will choose a way to allocate memory according to one_memory flag. A new variable is added to AVVKFrame to store the offset of each plane. Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vulkan.c | 46 +++- libavutil/hwcontext_vulkan.h | 1 + 2 files changed, 46 insertions(+), 1 deletion(-) diff --git a/libavutil/hwcontext_vulkan.c b/libavutil/hwcontext_vulkan.c index 6417f59d4a..4983518a77 100644 --- a/libavutil/hwcontext_vulkan.c +++ b/libavutil/hwcontext_vulkan.c @@ -1667,6 +1667,9 @@ static int alloc_bind_mem(AVHWFramesContext *hwfc, AVVkFrame *f, VulkanFunctions *vk = &p->vkfn; const int planes = av_pix_fmt_count_planes(hwfc->sw_format); VkBindImageMemoryInfo bind_info[AV_NUM_DATA_POINTERS] = { { 0 } }; +VkMemoryRequirements memory_requirements = { 0 }; +int mem_size = 0; +int mem_size_list[AV_NUM_DATA_POINTERS] = { 0 }; AVVulkanDeviceContext *hwctx = ctx->hwctx; @@ -1694,6 +1697,23 @@ static int alloc_bind_mem(AVHWFramesContext *hwfc, AVVkFrame *f, req.memoryRequirements.size = FFALIGN(req.memoryRequirements.size, p->props.properties.limits.minMemoryMapAlignment); +if (p->use_one_memory) { +if (ded_req.prefersDedicatedAllocation | ded_req.requiresDedicatedAllocation) { +av_log(hwfc, AV_LOG_ERROR, "Cannot use dedicated allocation for intel vaapi\n"); +return AVERROR(EINVAL); +} +if (memory_requirements.size == 0) { +memory_requirements = req.memoryRequirements; +} else if (memory_requirements.memoryTypeBits != req.memoryRequirements.memoryTypeBits) { +av_log(hwfc, AV_LOG_ERROR, "the param for each planes are not the same\n"); +return AVERROR(EINVAL); +} + +mem_size_list[i] = req.memoryRequirements.size; +mem_size += mem_size_list[i]; +continue; +} + /* In case the implementation prefers/requires dedicated allocation */ use_ded_mem = ded_req.prefersDedicatedAllocation | ded_req.requiresDedicatedAllocation; @@ -1715,6 +1735,29 @@ static int alloc_bind_mem(AVHWFramesContext *hwfc, AVVkFrame *f, bind_info[i].memory = f->mem[i]; } +if (p->use_one_memory) { +memory_requirements.size = mem_size; + +/* Allocate memory */ +if ((err = alloc_mem(ctx, &memory_requirements, +f->tiling == VK_IMAGE_TILING_LINEAR ? +VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT : +VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, +(void *)(((uint8_t *)alloc_pnext)), +&f->flags, &f->mem[0]))) +return err; + +f->size[0] = memory_requirements.size; + +for (int i = 0; i < planes; i++) { +bind_info[i].sType = VK_STRUCTURE_TYPE_BIND_IMAGE_MEMORY_INFO; +bind_info[i].image = f->img[i]; +bind_info[i].memory = f->mem[0]; +bind_info[i].memoryOffset = i == 0 ? 0 : mem_size_list[i-1]; +f->offset[i] = bind_info[i].memoryOffset; +} +} + /* Bind the allocated memory to the images */ ret = vk->BindImageMemory2(hwctx->act_dev, planes, bind_info); if (ret != VK_SUCCESS) { @@ -2921,7 +2964,8 @@ static int vulkan_map_to_drm(AVHWFramesContext *hwfc, AVFrame *dst, continue; vk->GetImageSubresourceLayout(hwctx->act_dev, f->img[i], &sub, &layout); -drm_desc->layers[i].planes[0].offset = layout.offset; +drm_desc->layers[i].planes[0].offset = p->use_one_memory ? 
+f->offset[i] : layout.offset; drm_desc->layers[i].planes[0].pitch= layout.rowPitch; } diff --git a/libavutil/hwcontext_vulkan.h b/libavutil/hwcontext_vulkan.h index e4645527d7..8fb25d1485 100644 --- a/libavutil/hwcontext_vulkan.h +++ b/libavutil/hwcontext_vulkan.h @@ -182,6 +182,7 @@ typedef struct AVVkFrame { */ VkDeviceMemory mem[AV_NUM_DATA_POINTERS]; size_t size[AV_NUM_DATA_POINTERS]; +size_t offset[AV_NUM_DATA_POINTERS]; /** * OR'd flags for all memory allocated -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
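The essence of the one-memory path in the patch above: query each plane's memory requirements, pack them into a single allocation while recording where each plane starts, and keep those offsets so they can later be reported as DRM plane offsets on export. A simplified sketch of the offset bookkeeping, using cumulative, alignment-rounded offsets (the FFmpeg plumbing and dedicated-allocation handling are omitted):

    #include <vulkan/vulkan.h>

    /* Sketch: pack the memory of all planes into one allocation by summing the
     * per-plane requirements and recording each plane's cumulative offset.
     * plane_req[] would come from vkGetImageMemoryRequirements2() per plane. */
    static VkDeviceSize pack_planes_one_memory(const VkMemoryRequirements *plane_req,
                                               int nb_planes,
                                               VkDeviceSize *plane_offset)
    {
        VkDeviceSize total = 0;

        for (int i = 0; i < nb_planes; i++) {
            VkDeviceSize align = plane_req[i].alignment;

            if (align)                       /* Vulkan alignments are powers of two */
                total = (total + align - 1) & ~(align - 1);

            plane_offset[i] = total;         /* plane start inside the one allocation */
            total          += plane_req[i].size;
        }

        return total;                        /* size to request from vkAllocateMemory() */
    }

The resulting plane_offset[i] values correspond to what the patch stores in AVVkFrame's new offset[] field and later reports as the DRM plane offsets in vulkan_map_to_drm().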
[FFmpeg-devel] [PATCH 09/10] libavutil/hwcontext_vulkan: specify the modifier to create VKImage
From: Wenbin Chen On the latest intel-vulkan-driver the VK_EXT_image_drm_format_modifier extension is enabled. As the driver log recommends, we need to use VK_IMAGE_TILING_LINEAR or VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT to create a VkImage. Add code to get the supported modifiers for sw_format and use these modifiers to create the VkImage. Now the following command line works: ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi -i input_1080p.264 -vf "hwmap=derive_device=vulkan,format=vulkan, scale_vulkan=1920:1080,hwmap=derive_device=vaapi,format=vaapi" -c:v h264_vaapi output.264 Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vulkan.c | 77 +--- libavutil/hwcontext_vulkan.h | 5 +++ 2 files changed, 76 insertions(+), 6 deletions(-) diff --git a/libavutil/hwcontext_vulkan.c b/libavutil/hwcontext_vulkan.c index 3a639c997b..99b2190dc3 100644 --- a/libavutil/hwcontext_vulkan.c +++ b/libavutil/hwcontext_vulkan.c @@ -1967,6 +1967,8 @@ static void try_export_flags(AVHWFramesContext *hwfc, AVVulkanDeviceContext *dev_hwctx = hwfc->device_ctx->hwctx; VulkanDevicePriv *p = hwfc->device_ctx->internal->priv; VulkanFunctions *vk = &p->vkfn; +const int has_modifiers = hwctx->tiling == VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT; + VkExternalImageFormatProperties eprops = { .sType = VK_STRUCTURE_TYPE_EXTERNAL_IMAGE_FORMAT_PROPERTIES_KHR, }; @@ -1974,9 +1976,18 @@ static void try_export_flags(AVHWFramesContext *hwfc, .sType = VK_STRUCTURE_TYPE_IMAGE_FORMAT_PROPERTIES_2, .pNext = &eprops, }; +VkPhysicalDeviceImageDrmFormatModifierInfoEXT phy_dev_mod_info = { +.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_DRM_FORMAT_MODIFIER_INFO_EXT, +.pNext = NULL, +.pQueueFamilyIndices = p->qfs, +.queueFamilyIndexCount = p->num_qfs, +.sharingMode = p->num_qfs > 1 ? VK_SHARING_MODE_CONCURRENT : + VK_SHARING_MODE_EXCLUSIVE, +}; VkPhysicalDeviceExternalImageFormatInfo enext = { .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTERNAL_IMAGE_FORMAT_INFO, .handleType = exp, +.pNext = has_modifiers ? &phy_dev_mod_info : NULL, }; VkPhysicalDeviceImageFormatInfo2 pinfo = { .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_IMAGE_FORMAT_INFO_2, @@ -1988,11 +1999,15 @@ static void try_export_flags(AVHWFramesContext *hwfc, .flags = VK_IMAGE_CREATE_ALIAS_BIT, }; -ret = vk->GetPhysicalDeviceImageFormatProperties2(dev_hwctx->phys_dev, - &pinfo, &props); -if (ret == VK_SUCCESS) { -*iexp |= exp; -*comp_handle_types |= eprops.externalMemoryProperties.compatibleHandleTypes; +for (int i = 0; i < (has_modifiers ? hwctx->modifier_count : 1); i++) { +if (has_modifiers && hwctx->modifier_count) +phy_dev_mod_info.drmFormatModifier = hwctx->modifiers[i]; +ret = vk->GetPhysicalDeviceImageFormatProperties2(dev_hwctx->phys_dev, +&pinfo, &props); +if (ret == VK_SUCCESS) { +*iexp |= exp; +*comp_handle_types |= eprops.externalMemoryProperties.compatibleHandleTypes; +} } } @@ -2055,6 +2070,7 @@ fail: static void vulkan_frames_uninit(AVHWFramesContext *hwfc) { VulkanFramesPriv *fp = hwfc->internal->priv; +AVVulkanFramesContext *hwctx = hwfc->hwctx; free_exec_ctx(hwfc, &fp->conv_ctx); free_exec_ctx(hwfc, &fp->upload_ctx); @@ -2069,11 +2085,60 @@ static int vulkan_frames_init(AVHWFramesContext *hwfc) VulkanFramesPriv *fp = hwfc->internal->priv; AVVulkanDeviceContext *dev_hwctx = hwfc->device_ctx->hwctx; VulkanDevicePriv *p = hwfc->device_ctx->internal->priv; +const int has_modifiers = !!(p->extensions & EXT_DRM_MODIFIER_FLAGS); /* Default pool flags */ -hwctx->tiling = hwctx->tiling ? hwctx->tiling : p->use_linear_images ? 
+hwctx->tiling = hwctx->tiling ? hwctx->tiling : has_modifiers ? +VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT : p->use_linear_images ? VK_IMAGE_TILING_LINEAR : VK_IMAGE_TILING_OPTIMAL; +/* get the supported modifier */ +if (has_modifiers) { +const VkFormat *fmt = av_vkfmt_from_pixfmt(hwfc->sw_format); +VulkanFunctions *vk = &p->vkfn; +VkDrmFormatModifierPropertiesEXT mod_props[MAX_VULKAN_MODIFIERS]; + +VkDrmFormatModifierPropertiesListEXT mod_props_list = { +.sType = VK_STRUCTURE_TYPE_DRM_FORMAT_MODIFIER_PROPERTIES_LIST_EXT, +.pNext = NULL, +.drmFormatModifierCount = 0, +.pDrmFormatModifierProperties = NULL, +}; +VkFormatProperties2 prop = { +.sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_2, +.pNext = &mod_props_list, +}; + +vk
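The modifier query in the (truncated) hunk above follows the standard two-call Vulkan pattern: call vkGetPhysicalDeviceFormatProperties2() with a chained VkDrmFormatModifierPropertiesListEXT to learn how many modifiers the format supports, then call it again with an array to fetch them; the modifiers can then be used for image creation under VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT. A minimal sketch of that query, heap-allocating the list instead of using a fixed-size array:

    #include <stdlib.h>
    #include <vulkan/vulkan.h>

    /* Sketch: query the DRM format modifiers a physical device supports for a
     * given VkFormat. The caller frees *out_props. */
    static uint32_t query_drm_modifiers(VkPhysicalDevice phys_dev, VkFormat fmt,
                                        VkDrmFormatModifierPropertiesEXT **out_props)
    {
        VkDrmFormatModifierPropertiesListEXT mod_list = {
            .sType = VK_STRUCTURE_TYPE_DRM_FORMAT_MODIFIER_PROPERTIES_LIST_EXT,
        };
        VkFormatProperties2 prop = {
            .sType = VK_STRUCTURE_TYPE_FORMAT_PROPERTIES_2,
            .pNext = &mod_list,
        };

        /* First call: pDrmFormatModifierProperties is NULL, only the count is written. */
        vkGetPhysicalDeviceFormatProperties2(phys_dev, fmt, &prop);
        if (!mod_list.drmFormatModifierCount)
            return 0;

        *out_props = calloc(mod_list.drmFormatModifierCount, sizeof(**out_props));
        if (!*out_props)
            return 0;

        /* Second call: fill the array with the modifier properties. */
        mod_list.pDrmFormatModifierProperties = *out_props;
        vkGetPhysicalDeviceFormatProperties2(phys_dev, fmt, &prop);

        return mod_list.drmFormatModifierCount;
    }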
[FFmpeg-devel] [PATCH 10/10] libavutil/hwcontext_vulkan: Add hwupload and hwdownload support when using one_memory flag.
From: "Chen,Wenbin" Add hwupload and hwdownload support to vulkan when frames are allocated in one memory Signed-off-by: Wenbin Chen --- libavutil/hwcontext_vulkan.c | 10 -- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/libavutil/hwcontext_vulkan.c b/libavutil/hwcontext_vulkan.c index 99b2190dc3..5dfa7adc6b 100644 --- a/libavutil/hwcontext_vulkan.c +++ b/libavutil/hwcontext_vulkan.c @@ -2252,7 +2252,7 @@ static int vulkan_map_frame_to_mem(AVHWFramesContext *hwfc, AVFrame *dst, const AVFrame *src, int flags) { VkResult ret; -int err, mapped_mem_count = 0; +int err, mapped_mem_count = 0, loop = 0; AVVkFrame *f = (AVVkFrame *)src->data[0]; AVVulkanDeviceContext *hwctx = hwfc->device_ctx->hwctx; const int planes = av_pix_fmt_count_planes(hwfc->sw_format); @@ -2281,7 +2281,8 @@ static int vulkan_map_frame_to_mem(AVHWFramesContext *hwfc, AVFrame *dst, dst->width = src->width; dst->height = src->height; -for (int i = 0; i < planes; i++) { +loop = p->use_one_memory ? 1 : planes; +for (int i = 0; i < loop; i++) { ret = vk->MapMemory(hwctx->act_dev, f->mem[i], 0, VK_WHOLE_SIZE, 0, (void **)&dst->data[i]); if (ret != VK_SUCCESS) { @@ -2292,6 +2293,11 @@ static int vulkan_map_frame_to_mem(AVHWFramesContext *hwfc, AVFrame *dst, } mapped_mem_count++; } +if (p->use_one_memory) { +for (int i = 0; i < planes; i++) { +dst->data[i] = dst->data[0] + f->offset[i]; +} +} /* Check if the memory contents matter */ if (((flags & AV_HWFRAME_MAP_READ) || !(flags & AV_HWFRAME_MAP_OVERWRITE)) && -- 2.25.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
[FFmpeg-devel] [PATCH v1] avcodec/av1dec: modify error message
This gives more accurate information when this decoder is used but no hwaccel is specified. For example: ffmpeg -c:v av1 -i INPUT OUTPUT Signed-off-by: Fei Wang --- This is an improvement for the patches: https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=4660 https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=4694 libavcodec/av1dec.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/libavcodec/av1dec.c b/libavcodec/av1dec.c index a69808f7b6..98db60a57f 100644 --- a/libavcodec/av1dec.c +++ b/libavcodec/av1dec.c @@ -462,8 +462,8 @@ static int get_pixel_format(AVCodecContext *avctx) * implemented in the future, need remove this check. */ if (!avctx->hwaccel) { -av_log(avctx, AV_LOG_ERROR, "Your platform doesn't suppport" - " hardware accelerated AV1 decoding.\n"); +av_log(avctx, AV_LOG_ERROR, "The AV1 decoder requires a hw acceleration" + " to be specified or the specified hw doesn't support AV1 decoding.\n"); return AVERROR(ENOSYS); } -- 2.17.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 01/10] libavfilter/vulkan: Fix problem when device have queue_count greater than 1
Hello, On Tue, 31 Aug 2021, at 03:43, wenbin.c...@intel.com wrote: > From: "Chen,Wenbin" > ... > Signed-off-by: Wenbin Chen > ... > email sent by "wenbin.c...@intel.com" (no name) In this thread of patches, you have 3 ways of writing your name & email. You should fix it (and IMHO, use the last one "Wenbin Chen "). best -- Jean-Baptiste Kempf - President +33 672 704 734 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH v2 1/1] avcodec/vble: Return value check for init_get_bits
maryam ebrahimzadeh: > avcodec/vble: Return value check for init_get_bits > > As the second argument for init_get_bits can be crafted, > a return value check for this function call is necessary. > So replace init_get_bits with init_get_bits8 and remove a duplicate check > before > the callsite. > > --- > libavcodec/vble.c | 6 -- > 1 file changed, 4 insertions(+), 2 deletions(-) > > diff --git a/libavcodec/vble.c b/libavcodec/vble.c > index f1400959e0..c1d3cdcc95 100644 > --- a/libavcodec/vble.c > +++ b/libavcodec/vble.c > @@ -127,7 +127,7 @@ static int vble_decode_frame(AVCodecContext *avctx, void > *data, int *got_frame, > int ret; > ThreadFrame frame = { .f = data }; > > -if (avpkt->size < 4 || avpkt->size - 4 > INT_MAX/8) { > +if (avpkt->size < 4) { > av_log(avctx, AV_LOG_ERROR, "Invalid packet size\n"); > return AVERROR_INVALIDDATA; > } > @@ -146,7 +146,9 @@ static int vble_decode_frame(AVCodecContext *avctx, void > *data, int *got_frame, > if (version != 1) > av_log(avctx, AV_LOG_WARNING, "Unsupported VBLE Version: %d\n", > version); > > -init_get_bits(&gb, src + 4, (avpkt->size - 4) * 8); > +ret = init_get_bits8(&gb, src + 4, avpkt->size - 4); > +if (ret < 0) > +return ret; > > /* Unpack */ > if (vble_unpack(ctx, &gb) < 0) { > Checking before the callsite has the advantage of not trying to allocate a huge buffer that ends up unused. So instead of removing said check it should be fixed: get_bits.h should properly export the maximum supported buffer size and that should be checked at the beginning. - Andreas ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 01/10] libavfilter/vulkan: Fix problem when device have queue_count greater than 1
Ok, I will submit again. > -Original Message- > From: ffmpeg-devel On Behalf Of > Jean-Baptiste Kempf > Sent: Tuesday, August 31, 2021 2:07 PM > To: ffmpeg-devel@ffmpeg.org > Subject: Re: [FFmpeg-devel] [PATCH 01/10] libavfilter/vulkan: Fix problem > when device have queue_count greater than 1 > > Hello, > > On Tue, 31 Aug 2021, at 03:43, wenbin.c...@intel.com wrote: > > From: "Chen,Wenbin" > > ... > > Signed-off-by: Wenbin Chen > > ... > > email sent by "wenbin.c...@intel.com" (no name) > > In this thread of patches, you have 3 ways of writing your name & email. > > You should fix it (and IMHO, use the last one "Wenbin Chen > "). > > best > > -- > Jean-Baptiste Kempf - President > +33 672 704 734 > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel > > To unsubscribe, visit link above, or email > ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe". ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".
Re: [FFmpeg-devel] [PATCH 01/10] libavfilter/vulkan: Fix problem when device have queue_count greater than 1
On Tue, 31 Aug 2021, at 08:44, Chen, Wenbin wrote: > Ok, I will submit again. Wait for the review. :D -- Jean-Baptiste Kempf - President +33 672 704 734 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".