On Sun, Mar 13, 2016 at 09:09:39PM +0100, Thilo Borgmann wrote:
> On 13.03.16 at 20:55, Nicolas George wrote:
> > On quartidi, 24 Ventôse, year CCXXIV, Thilo Borgmann wrote:
> >> +    { "list_filters", "list available filters", OFFSET(list_filters), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, .flags = FLAGS, "list_filters" },
> >> +    { "true",  "", 0, AV_OPT_TYPE_CONST, {.i64=1}, 0, 0, FLAGS, "list_filters" },
> >> +    { "false", "", 0, AV_OPT_TYPE_CONST, {.i64=0}, 0, 0, FLAGS, "list_filters" },
> > 
> > You forgot to remove the constants.
> 
> Removed. Patch attached.
> 
> -Thilo
> 
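For reference, a minimal sketch of what such an entry can look like once the constants are gone (AV_OPT_TYPE_BOOL already understands true/false on its own, so the trailing unit string is no longer needed; OFFSET and FLAGS as defined in the patch):

    { "list_filters", "list available filters", OFFSET(list_filters), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, .flags = FLAGS },
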
> From 4aef8c0d09e109cedd92e17cc04a6ef6236c07ab Mon Sep 17 00:00:00 2001
> From: Thilo Borgmann <thilo.borgm...@mail.de>
> Date: Sun, 13 Mar 2016 21:08:18 +0100
> Subject: [PATCH 2/2] lavf: Add coreimage filter for GPU based image filtering
>  on OSX.
> 
> ---
>  Changelog                  |   1 +
>  MAINTAINERS                |   1 +
>  configure                  |   2 +
>  doc/filters.texi           |  67 ++++++
>  libavfilter/Makefile       |   1 +
>  libavfilter/allfilters.c   |   1 +
>  libavfilter/vf_coreimage.m | 551 +++++++++++++++++++++++++++++++++++++++++++++
>  7 files changed, 624 insertions(+)
>  create mode 100644 libavfilter/vf_coreimage.m
> 
> diff --git a/Changelog b/Changelog
> index 1f57f5e..5053a86 100644
> --- a/Changelog
> +++ b/Changelog
> @@ -12,6 +12,7 @@ version <next>:
>  - ciescope filter
>  - protocol blacklisting API
>  - MediaCodec H264 decoding
> +- coreimage filter (GPU based image filtering on OSX)
>  
>  
>  version 3.0:
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 531c21d..a993a67 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -370,6 +370,7 @@ Filters:
>    vf_colorbalance.c                     Paul B Mahol
>    vf_colorkey.c                         Timo Rothenpieler
>    vf_colorlevels.c                      Paul B Mahol
> +  vf_coreimage.m                        Thilo Borgmann
>    vf_deband.c                           Paul B Mahol
>    vf_dejudder.c                         Nicholas Robbins
>    vf_delogo.c                           Jean Delvare (CC <jdelv...@suse.com>)
> diff --git a/configure b/configure
> index 1b189328..da51e06 100755
> --- a/configure
> +++ b/configure
> @@ -5255,6 +5255,7 @@ frei0r_filter_extralibs='$ldl'
>  frei0r_src_filter_extralibs='$ldl'
>  ladspa_filter_extralibs='$ldl'
>  nvenc_encoder_extralibs='$ldl'
> +coreimage_filter_extralibs="-framework QuartzCore -framework AppKit -framework OpenGL"
>  
>  if ! disabled network; then
>      check_func getaddrinfo $network_extralibs
> @@ -5483,6 +5484,7 @@ enabled avisynth          && { { check_lib2 "windows.h" LoadLibrary; } ||
>                                 die "ERROR: LoadLibrary/dlopen not found for avisynth"; }
>  enabled cuda              && check_lib cuda.h cuInit -lcuda
>  enabled chromaprint       && require chromaprint chromaprint.h chromaprint_get_version -lchromaprint
> +enabled coreimage_filter  && { check_header_objcc QuartzCore/CoreImage.h || disable coreimage_filter; }
>  enabled decklink          && { check_header DeckLinkAPI.h || die "ERROR: DeckLinkAPI.h header not found"; }
>  enabled frei0r            && { check_header frei0r.h || die "ERROR: frei0r.h header not found"; }
>  enabled gmp               && require2 gmp gmp.h mpz_export -lgmp
> diff --git a/doc/filters.texi b/doc/filters.texi
> index d5d619e..7d0bb26 100644
> --- a/doc/filters.texi
> +++ b/doc/filters.texi
> @@ -4955,6 +4955,73 @@ convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -
>  Copy the input source unchanged to the output. This is mainly useful for
>  testing purposes.
>  
> +@anchor{coreimage}
> +@section coreimage
> +
> +Video filtering on GPU using Apple's CoreImage API on OSX.
> +
> +Hardware acceleration is based on an OpenGL context. Usually, this means it is processed by video hardware. However, software-based OpenGL implementations exist which means there is no guarantee for hardware processing. It depends on the respective OSX.
> +

can you wrap? (hint: select and "gq" in vim, dunno other editors).

> +There are many filters and image generators provided by Apple that come with a large variety of options. The filter has to be referenced by its name along with its options.
> +
> +The coreimage filter accepts the following options:
> +@table @option
> +@item list_filters
> +List all available filters along with all their respective options as well as possible minimum and maximum values along with the default values.
> +@example
> + coreimage=list_filters=true
> +@end example
> +
> +@item filter
> +Specifiy all filters by their respective name and options.

Specify

> +Use @var{list_filters} to determine all valid filter names and options.
> +Numerical options are specified by a float value and are automatically clamped to their respective value range.
> +Vector and color options have to be specified by a list of space separated float values. Character escaping has to be done.
> +A special option name @code{default} is available to use default options for a filter.
> +It is required to specify either @code{default} or at least one of the filter options.
> +All omitted options are used with their default values.
> +The syntax of the filter string is as follows:
> +@example
> +filter=<NAME>@@<OPTION>=<VALUE>[@@<OPTION>=<VALUE>][@@...][#<NAME>@@<OPTION>=<VALUE>[@@<OPTION>=<VALUE>][@@...]]
> +@end example
> +@end table
> +
> +Several filters can be chained for successive processing without GPU-HOST transfers allowing for fast processing of complex filter chains.
> +Currently, only filters with zero (generators) or exactly one (filters) input image and one output image are supported.
> +Also, transition filters are not yet usable as intended.
> +
> +Some filters generate output images with additional padding depending on the respective filter kernel. The padding is automatically removed to ensure the filter output has the same size as the input image.
> +For image generators, the size of the output image is determined by the given input image. The generators do not use the pixel information of the input image to generate their output. However, the generated output is blended onto the input image, resulting in partial or complete coverage of the output image.
> +
> +@subsection Examples
> +
> +@itemize
> +
> +@item
> +List all filters available:
> +@example
> +coreimage=list_filters=true
> +@end example
> +
> +@item
> +Use the CIBoxBlur filter with default options to blur an image:
> +@example
> +coreimage=filter=CIBoxBlur@@default
> +@end example
> +
> +@item
> +Use a filter chain with CISepiaTone at default values and CIVignetteEffect with its center at 100x100 and a radius of 50 pixels:
> +@example
> +coreimage=filter=CIBoxBlur@@default#CIVignetteEffect@@inputCenter=100\ 100@@inputRadius=50
> +@end example
> +
> +@item
> +Use nullsrc and CIQRCodeGenerator to create a QR code for the FFmpeg homepage, given as complete and escaped command-line for Apple's standard bash shell:
> +@example
> +./ffmpeg -f lavfi -i nullsrc=s=100x100,coreimage=filter=CIQRCodeGenerator@@inputMessage=https\\\\\://FFmpeg.org/@@inputCorrectionLevel=H -frames:v 1 QRCode.png

remove ./

also, it's probably better to have 2 filters: one for usage as a source, and another one for filtering (coreimagesrc vs coreimage).

> +@end example
> +@end itemize
> +
>  @section crop
>  
>  Crop the input video to given dimensions.
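A side note on the examples above: the doubled @@ is just texinfo escaping for a literal @, so in an actual filter string there is only a single @ between a filter name and its options. A hypothetical, untested invocation (file names made up) would look something like:

    ffmpeg -i input.png -vf coreimage=filter=CIBoxBlur@default output.png
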
> diff --git a/libavfilter/Makefile b/libavfilter/Makefile
> index 956a077..9ce6559 100644
> --- a/libavfilter/Makefile
> +++ b/libavfilter/Makefile
> @@ -133,6 +133,7 @@ OBJS-$(CONFIG_COLORLEVELS_FILTER)            += vf_colorlevels.o
>  OBJS-$(CONFIG_COLORMATRIX_FILTER)            += vf_colormatrix.o
>  OBJS-$(CONFIG_CONVOLUTION_FILTER)            += vf_convolution.o
>  OBJS-$(CONFIG_COPY_FILTER)                   += vf_copy.o
> +OBJS-$(CONFIG_COREIMAGE_FILTER)              += vf_coreimage.o
>  OBJS-$(CONFIG_COVER_RECT_FILTER)             += vf_cover_rect.o lavfutils.o
>  OBJS-$(CONFIG_CROP_FILTER)                   += vf_crop.o
>  OBJS-$(CONFIG_CROPDETECT_FILTER)             += vf_cropdetect.o
> diff --git a/libavfilter/allfilters.c b/libavfilter/allfilters.c
> index e5080b5..91b0dde 100644
> --- a/libavfilter/allfilters.c
> +++ b/libavfilter/allfilters.c
> @@ -154,6 +154,7 @@ void avfilter_register_all(void)
>      REGISTER_FILTER(COLORMATRIX,    colormatrix,    vf);
>      REGISTER_FILTER(CONVOLUTION,    convolution,    vf);
>      REGISTER_FILTER(COPY,           copy,           vf);
> +    REGISTER_FILTER(COREIMAGE,      coreimage,      vf);
>      REGISTER_FILTER(COVER_RECT,     cover_rect,     vf);
>      REGISTER_FILTER(CROP,           crop,           vf);
>      REGISTER_FILTER(CROPDETECT,     cropdetect,     vf);
> diff --git a/libavfilter/vf_coreimage.m b/libavfilter/vf_coreimage.m
> new file mode 100644
> index 0000000..283f62f
> --- /dev/null
> +++ b/libavfilter/vf_coreimage.m
> @@ -0,0 +1,551 @@
> +/*
> + * Copyright (c) 2016 Thilo Borgmann
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +/**
> + * @file
> + * Video processing based on Apple's CoreImage API
> + */
> +
> +#import <QuartzCore/CoreImage.h>
> +#import <AppKit/AppKit.h>
> +
> +#include "avfilter.h"
> +#include "formats.h"
> +#include "internal.h"
> +#include "video.h"
> +#include "libavutil/internal.h"
> +#include "libavutil/opt.h"
> +#include "libavutil/pixdesc.h"
> +
> +typedef struct CoreImageContext {
> +    const AVClass   *class;
> +
> +    CFTypeRef        glctx;              ///< OpenGL context
> +    CGContextRef     cgctx;              ///< Bitmap context for image copy
> +    CFTypeRef        input_image;        ///< Input image container for passing into Core Image API
> +    CGColorSpaceRef  color_space;        ///< Common color space for input image and cgcontext
> +    int              bits_per_component; ///< Shared bpc for input-output operation
> +
> +    char            *filter_string;      ///< The complete user provided filter definition
> +    CFTypeRef       *filters;            ///< CIFilter object for all requested filters
> +    int              num_filters;        ///< Amount of filters in *filters
> +
> +    bool             list_filters;       ///< Option used to list all available filters

i don't think you can assume that sizeof(int)==sizeof(bool) (AV_OPT_TYPE_BOOL writes to an int).

[...]

> +/** Get an appropriate video buffer for filter processing.
> + */
> +static AVFrame *get_video_buffer(AVFilterLink *link, int w, int h)
> +{
> +    CoreImageContext *ctx = link->dst->priv;
> +    AVFrame *frame;
> +
> +    frame = ff_get_video_buffer(link->dst->outputs[0], w, h);
> +
> +    if (!frame) {
> +        av_log(ctx, AV_LOG_ERROR, "Getting video buffer failed.\n");
> +    }
> +
> +    return frame;
> +}
> +

I don't think you need this.

> +/** Define input and output formats for this filter.
> + */
> +static int query_formats(AVFilterContext *fctx)
> +{
> +    static const enum AVPixelFormat inout_fmts_rgb[] = {
> +        AV_PIX_FMT_ARGB,
> +        AV_PIX_FMT_NONE
> +    };
> +
> +    AVFilterFormats *inout_formats;
> +    int ret;
> +
> +    if (!(inout_formats = ff_make_format_list(inout_fmts_rgb))) {
> +        return (AVERROR(ENOMEM));

style: useless ()

> +    }
> +
> +    if ((ret = ff_formats_ref(inout_formats, &fctx->inputs[0]->out_formats)) < 0 ||  // out
> +        (ret = ff_formats_ref(inout_formats, &fctx->outputs[0]->in_formats)) < 0) {  // in
> +        return ret;
> +    }
> +
> +    return 0;
> +}
> +
> +/** Apply all valid filters successively to the input image.
> + *  The final output image is copied from the GPU by "drawing" using a bitmap context.
> + */
> +static int filter_frame(AVFilterLink *link, AVFrame *frame)
> +{
> +    CoreImageContext *ctx = link->dst->priv;
> +    int i;
> +
> +    // assume one input image and one output image for now
> +    if (!frame->data[0]) {
> +        av_log(ctx, AV_LOG_ERROR, "No input image given.");
> +        return AVERROR(EINVAL);
> +    }
> +

can this happen?

[...]

> +    // allocate CIFilter array
> +    ctx->filters = av_mallocz(ctx->num_filters * sizeof(CIFilter*));

av_mallocz_array()

> +    if (!ctx->filters) {
> +        av_log(ctx, AV_LOG_ERROR, "Could not allocate filter array.\n");
> +        return AVERROR(ENOMEM);
> +    }
> +
> +    // parste filters for option key-value pairs (opt=val@opt2=val2) seperated by @

parse, separated

[...]

> +/** Uninitialize all filters, contexts and free all allocated memory.
> + */
> +static av_cold void uninit(AVFilterContext *fctx)
> +{
> +#define SafeCFRelease(ptr) { \
> +    if (ptr) {               \
> +        CFRelease(ptr);      \
> +        ptr = NULL;          \
> +    }                        \
> +}

please use do while(0) form

> +
> +    CoreImageContext *ctx = fctx->priv;
> +
> +    SafeCFRelease(ctx->glctx);
> +    SafeCFRelease(ctx->cgctx);
> +    SafeCFRelease(ctx->color_space);
> +    SafeCFRelease(ctx->input_image);
> +
> +    if (ctx->filters) {
> +        for (int i = 0; i < ctx->num_filters; i++) {
> +            SafeCFRelease(ctx->filters[i]);
> +        }
> +        av_free(ctx->filters);

av_freep()

> +    }
> +
> +}
> +
> +static const AVFilterPad avfilter_vf_coreimage_inputs[] = {

drop the avfilter_, it's not a public thing.

[...]

-- 
Clément B.
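Following up on the macro comment above, a minimal sketch of the do/while(0) form being asked for, with the same body as SafeCFRelease in the patch (wrapped so the macro expands to a single statement and stays safe after an if without braces):

#define SafeCFRelease(ptr) do { \
    if (ptr) {                  \
        CFRelease(ptr);         \
        (ptr) = NULL;           \
    }                           \
} while (0)

Likewise, the allocation the review points at would become something along the lines of:

    ctx->filters = av_mallocz_array(ctx->num_filters, sizeof(CIFilter*));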