ffmpeg | branch: master | Danil Iashchenko <danyasche...@gmail.com> | Sun Mar 16 19:15:08 2025 +0000 | [a1c6ca1683708978c24ed8a632bb29fafc9dacdf] | committer: Gyan Doshi

doc/filters: Shift CUDA-based filters to own section.

> http://git.videolan.org/gitweb.cgi/ffmpeg.git/?a=commit;h=a1c6ca1683708978c24ed8a632bb29fafc9dacdf
---

 doc/filters.texi | 3229 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 1651 insertions(+), 1578 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 0ba7d3035f..37b8674756 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -8619,45 +8619,6 @@ Set planes to filter. Default is first only.
 
 This filter supports all the above options as @ref{commands}.
 
-@section bilateral_cuda
-CUDA accelerated bilateral filter, an edge preserving filter.
-This filter is mathematically accurate thanks to the use of GPU acceleration.
-For best output quality, use one to one chroma subsampling, i.e. yuv444p format.
-
-The filter accepts the following options:
-@table @option
-@item sigmaS
-Set sigma of gaussian function to calculate spatial weight, also called sigma space.
-Allowed range is 0.1 to 512. Default is 0.1.
-
-@item sigmaR
-Set sigma of gaussian function to calculate color range weight, also called sigma color.
-Allowed range is 0.1 to 512. Default is 0.1.
-
-@item window_size
-Set window size of the bilateral function to determine the number of neighbours to loop on.
-If the number entered is even, one will be added automatically.
-Allowed range is 1 to 255. Default is 1.
-@end table
-@subsection Examples
-
-@itemize
-@item
-Apply the bilateral filter on a video.
-
-@example
-./ffmpeg -v verbose \
--hwaccel cuda -hwaccel_output_format cuda -i input.mp4  \
--init_hw_device cuda \
--filter_complex \
-" \
-[0:v]scale_cuda=format=yuv444p[scaled_video];
-[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
--an -sn -c:v h264_nvenc -cq 20 out.mp4
-@end example
-
-@end itemize
-
 @section bitplanenoise
 
 Show and measure bit plane noise.
@@ -9243,58 +9204,6 @@ Only deinterlace frames marked as interlaced.
 The default value is @code{all}.
 @end table
 
-@section bwdif_cuda
-
-Deinterlace the input video using the @ref{bwdif} algorithm, but implemented
-in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
-and/or nvenc.
-
-It accepts the following parameters:
-
-@table @option
-@item mode
-The interlacing mode to adopt. It accepts one of the following values:
-
-@table @option
-@item 0, send_frame
-Output one frame for each frame.
-@item 1, send_field
-Output one frame for each field.
-@end table
-
-The default value is @code{send_field}.
-
-@item parity
-The picture field parity assumed for the input interlaced video. It accepts one
-of the following values:
-
-@table @option
-@item 0, tff
-Assume the top field is first.
-@item 1, bff
-Assume the bottom field is first.
-@item -1, auto
-Enable automatic detection of field parity.
-@end table
-
-The default value is @code{auto}.
-If the interlacing is unknown or the decoder does not export this information,
-top field first will be assumed.
-
-@item deint
-Specify which frames to deinterlace. Accepts one of the following
-values:
-
-@table @option
-@item 0, all
-Deinterlace all frames.
-@item 1, interlaced
-Only deinterlace frames marked as interlaced.
-@end table
-
-The default value is @code{all}.
-@end table
-
 @section ccrepack
 
 Repack CEA-708 closed captioning side data
@@ -9408,48 +9317,6 @@ ffmpeg -f lavfi -i color=c=black:s=1280x720 -i video.mp4 -shortest -filter_compl
 @end example
 @end itemize
 
-@section chromakey_cuda
-CUDA accelerated YUV colorspace color/chroma keying.
-
-This filter works like normal chromakey filter but operates on CUDA frames.
-for more details and parameters see @ref{chromakey}.
-
-@subsection Examples
-
-@itemize
-@item
-Make all the green pixels in the input video transparent and use it as an overlay for another video:
-
-@example
-./ffmpeg \
-    -hwaccel cuda -hwaccel_output_format cuda -i input_green.mp4  \
-    -hwaccel cuda -hwaccel_output_format cuda -i base_video.mp4 \
-    -init_hw_device cuda \
-    -filter_complex \
-    " \
-        [0:v]chromakey_cuda=0x25302D:0.1:0.12:1[overlay_video]; \
-        [1:v]scale_cuda=format=yuv420p[base]; \
-        [base][overlay_video]overlay_cuda" \
-    -an -sn -c:v h264_nvenc -cq 20 output.mp4
-@end example
-
-@item
-Process two software sources, explicitly uploading the frames:
-
-@example
-./ffmpeg -init_hw_device cuda=cuda -filter_hw_device cuda \
-    -f lavfi -i color=size=800x600:color=white,format=yuv420p \
-    -f lavfi -i yuvtestsrc=size=200x200,format=yuv420p \
-    -filter_complex \
-    " \
-        [0]hwupload[under]; \
-        [1]hwupload,chromakey_cuda=green:0.1:0.12[over]; \
-        [under][over]overlay_cuda" \
-    -c:v hevc_nvenc -cq 18 -preset slow output.mp4
-@end example
-
-@end itemize
-
 @section chromanr
 Reduce chrominance noise.
 
@@ -10427,38 +10294,6 @@ For example to convert the input to SMPTE-240M, use the command:
 colorspace=smpte240m
 @end example
 
-@section colorspace_cuda
-
-CUDA accelerated implementation of the colorspace filter.
-
-It is by no means feature complete compared to the software colorspace filter,
-and at the current time only supports color range conversion between jpeg/full
-and mpeg/limited range.
-
-The filter accepts the following options:
-
-@table @option
-@item range
-Specify output color range.
-
-The accepted values are:
-@table @samp
-@item tv
-TV (restricted) range
-
-@item mpeg
-MPEG (restricted) range
-
-@item pc
-PC (full) range
-
-@item jpeg
-JPEG (full) range
-
-@end table
-
-@end table
-
 @section colortemperature
 Adjust color temperature in video to simulate variations in ambient color temperature.
 
@@ -18988,84 +18823,6 @@ testsrc=s=100x100, split=4 [in0][in1][in2][in3];
 
 @end itemize
 
-@anchor{overlay_cuda}
-@section overlay_cuda
-
-Overlay one video on top of another.
-
-This is the CUDA variant of the @ref{overlay} filter.
-It only accepts CUDA frames. The underlying input pixel formats have to match.
-
-It takes two inputs and has one output. The first input is the "main"
-video on which the second input is overlaid.
-
-It accepts the following parameters:
-
-@table @option
-@item x
-@item y
-Set expressions for the x and y coordinates of the overlaid video
-on the main video.
-
-They can contain the following parameters:
-
-@table @option
-
-@item main_w, W
-@item main_h, H
-The main input width and height.
-
-@item overlay_w, w
-@item overlay_h, h
-The overlay input width and height.
-
-@item x
-@item y
-The computed values for @var{x} and @var{y}. They are evaluated for
-each new frame.
-
-@item n
-The ordinal index of the main input frame, starting from 0.
-
-@item pos
-The byte offset position in the file of the main input frame, NAN if unknown.
-Deprecated, do not use.
-
-@item t
-The timestamp of the main input frame, expressed in seconds, NAN if unknown.
-
-@end table
-
-Default value is "0" for both expressions.
-
-@item eval
-Set when the expressions for @option{x} and @option{y} are evaluated.
-
-It accepts the following values:
-@table @option
-@item init
-Evaluate expressions once during filter initialization or
-when a command is processed.
-
-@item frame
-Evaluate expressions for each incoming frame
-@end table
-
-Default value is @option{frame}.
-
-@item eof_action
-See @ref{framesync}.
-
-@item shortest
-See @ref{framesync}.
-
-@item repeatlast
-See @ref{framesync}.
-
-@end table
-
-This filter also supports the @ref{framesync} options.
-
 @section owdenoise
 
 Apply Overcomplete Wavelet denoiser.
@@ -21516,11 +21273,9 @@ If the specified expression is not valid, it is kept at its current
 value.
 @end table
 
-@anchor{scale_cuda}
-@section scale_cuda
+@section scale_vt
 
-Scale (resize) and convert (pixel format) the input video, using accelerated CUDA kernels.
-Setting the output width and height works in the same way as for the @ref{scale} filter.
+Scale and convert the color parameters using VTPixelTransferSession.
 
 The filter accepts the following options:
 @table @option
@@ -21528,981 +21283,685 @@ The filter accepts the following options:
 @item h
 Set the output video dimension expression. Default value is the input dimension.
 
-Allows for the same expressions as the @ref{scale} filter.
+@item color_matrix
+Set the output colorspace matrix.
 
-@item interp_algo
-Sets the algorithm used for scaling:
+@item color_primaries
+Set the output color primaries.
 
-@table @var
-@item nearest
-Nearest neighbour
+@item color_transfer
+Set the output transfer characteristics.
 
-Used by default if input parameters match the desired output.
+@end table
 
-@item bilinear
-Bilinear
+@section scharr
+Apply the Scharr operator to the input video stream.
 
-@item bicubic
-Bicubic
+The filter accepts the following option:
 
-This is the default.
+@table @option
+@item planes
+Set which planes will be processed; unprocessed planes will be copied.
+By default the value is 0xf, meaning all planes will be processed.
 
-@item lanczos
-Lanczos
+@item scale
+Set the value by which the filtered result will be multiplied.
 
+@item delta
+Set the value which will be added to the filtered result.
 @end table
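To make the arithmetic behind the @option{scale} and @option{delta} options concrete, here is a hypothetical pure-Python sketch of the per-pixel Scharr operator; the real filter is implemented in C inside FFmpeg, and this only illustrates the gradient computation.

```python
# Standard 3x3 Scharr kernels for horizontal and vertical gradients.
SCHARR_X = [[3, 0, -3], [10, 0, -10], [3, 0, -3]]
SCHARR_Y = [[3, 10, 3], [0, 0, 0], [-3, -10, -3]]

def scharr_pixel(plane, x, y, scale=1.0, delta=0.0):
    """Gradient magnitude at (x, y) of an 8-bit plane, scaled, offset and clipped."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = plane[y + j - 1][x + i - 1]
            gx += SCHARR_X[j][i] * p
            gy += SCHARR_Y[j][i] * p
    mag = (gx * gx + gy * gy) ** 0.5
    return min(255.0, max(0.0, scale * mag + delta))
```

On a flat region the gradient is zero, so the output is just @option{delta}; a hard vertical edge saturates to 255.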
 
-@item format
-Controls the output pixel format. By default, or if none is specified, the input
-pixel format is used.
-
-The filter does not support converting between YUV and RGB pixel formats.
-
-@item passthrough
-If set to 0, every frame is processed, even if no conversion is necessary.
-This mode can be useful to use the filter as a buffer for a downstream
-frame-consumer that exhausts the limited decoder frame pool.
+@subsection Commands
 
-If set to 1, frames are passed through as-is if they match the desired output
-parameters. This is the default behaviour.
+This filter supports all the above options as @ref{commands}.
 
-@item param
-Algorithm-Specific parameter.
+@section scroll
+Scroll input video horizontally and/or vertically by constant speed.
 
-Affects the curves of the bicubic algorithm.
+The filter accepts the following options:
+@table @option
+@item horizontal, h
+Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1.
+Negative values change the scrolling direction.
 
-@item force_original_aspect_ratio
-@item force_divisible_by
-Work the same as the identical @ref{scale} filter options.
+@item vertical, v
+Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1.
+Negative values change the scrolling direction.
 
-@item reset_sar
-Works the same as the identical @ref{scale} filter option.
+@item hpos
+Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.
 
+@item vpos
+Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.
 @end table
 
-@subsection Examples
+@subsection Commands
 
-@itemize
-@item
-Scale input to 720p, keeping aspect ratio and ensuring the output is yuv420p.
-@example
-scale_cuda=-2:720:format=yuv420p
-@end example
+This filter supports the following @ref{commands}:
+@table @option
+@item horizontal, h
+Set the horizontal scrolling speed.
+@item vertical, v
+Set the vertical scrolling speed.
+@end table
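A minimal sketch of the option semantics described above, assuming the scroll position advances by the speed once per frame as a fraction of the frame dimension and wraps in [0, 1); this models the documented behaviour, not FFmpeg's actual implementation.

```python
def scroll_positions(speed, initial=0.0, frames=5):
    """Normalized scroll offsets for the first `frames` frames."""
    pos = initial % 1.0
    out = []
    for _ in range(frames):
        out.append(pos)
        pos = (pos + speed) % 1.0  # negative speeds scroll the other way
    return out
```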
 
-@item
-Upscale to 4K using nearest neighbour algorithm.
-@example
-scale_cuda=4096:2160:interp_algo=nearest
-@end example
+@anchor{scdet}
+@section scdet
 
-@item
-Don't do any conversion or scaling, but copy all input frames into newly allocated ones.
-This can be useful to deal with a filter and encode chain that otherwise exhausts the
-decoders frame pool.
-@example
-scale_cuda=passthrough=0
-@end example
-@end itemize
+Detect video scene change.
 
-@anchor{scale_npp}
-@section scale_npp
+This filter sets frame metadata with the mafd between frames and the scene
+score, and forwards the frame to the next filter, so downstream filters can
+use this metadata to detect scene changes.
 
-Use the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel
-format conversion on CUDA video frames. Setting the output width and height
-works in the same way as for the @var{scale} filter.
+In addition, this filter logs a message and sets frame metadata when it
+detects a scene change according to @option{threshold}.
 
-The following additional options are accepted:
-@table @option
-@item format
-The pixel format of the output CUDA frames. If set to the string "same" (the
-default), the input format will be kept. Note that automatic format negotiation
-and conversion is not yet supported for hardware frames
+@code{lavfi.scd.mafd} metadata keys are set with mafd for every frame.
 
-@item interp_algo
-The interpolation algorithm used for resizing. One of the following:
-@table @option
-@item nn
-Nearest neighbour.
+@code{lavfi.scd.score} metadata keys are set with the scene change score for
+every frame, which can be used to detect scene changes.
 
-@item linear
-@item cubic
-@item cubic2p_bspline
-2-parameter cubic (B=1, C=0)
+@code{lavfi.scd.time} metadata keys are set with the current frame time when
+a scene change is detected with @option{threshold}.
 
-@item cubic2p_catmullrom
-2-parameter cubic (B=0, C=1/2)
+The filter accepts the following options:
 
-@item cubic2p_b05c03
-2-parameter cubic (B=1/2, C=3/10)
+@table @option
+@item threshold, t
+Set the scene change detection threshold as a percentage of maximum change. Good
+values are in the @code{[8.0, 14.0]} range. The range for @option{threshold} is
+@code{[0., 100.]}.
 
-@item super
-Supersampling
+Default value is @code{10.}.
 
-@item lanczos
+@item sc_pass, s
+Set the flag to pass scene change frames to the next filter. Default value is @code{0}.
+You can enable it if you want to get snapshots of scene change frames only.
 @end table
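A simplified sketch of the threshold decision, assuming the score is the mean absolute frame difference (mafd) expressed as a percentage of the maximum possible change; the actual filter's scoring is more involved, so treat this only as an illustration of the percentage scale.

```python
def mafd_percent(prev, cur, depth=8):
    """Mean absolute difference between two flattened planes, in percent."""
    max_val = (1 << depth) - 1
    total = sum(abs(a - b) for a, b in zip(prev, cur))
    return 100.0 * total / (len(cur) * max_val)

def is_scene_change(prev, cur, threshold=10.0):
    # A hard cut from black to white scores 100; identical frames score 0.
    return mafd_percent(prev, cur) >= threshold
```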
 
-@item force_original_aspect_ratio
-Enable decreasing or increasing output video width or height if necessary to
-keep the original aspect ratio. Possible values:
+@anchor{selectivecolor}
+@section selectivecolor
 
-@table @samp
-@item disable
-Scale the video as specified and disable this feature.
+Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such
+as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined
+by the "purity" of the color (that is, how saturated it already is).
 
-@item decrease
-The output video dimensions will automatically be decreased if needed.
+This filter is similar to the Adobe Photoshop Selective Color tool.
 
-@item increase
-The output video dimensions will automatically be increased if needed.
+The filter accepts the following options:
+
+@table @option
+@item correction_method
+Select color correction method.
 
+Available values are:
+@table @samp
+@item absolute
+Specified adjustments are applied "as-is" (added/subtracted to original pixel
+component value).
+@item relative
+Specified adjustments are relative to the original component value.
+@end table
+Default is @code{absolute}.
+@item reds
+Adjustments for red pixels (pixels where the red component is the maximum)
+@item yellows
+Adjustments for yellow pixels (pixels where the blue component is the minimum)
+@item greens
+Adjustments for green pixels (pixels where the green component is the maximum)
+@item cyans
+Adjustments for cyan pixels (pixels where the red component is the minimum)
+@item blues
+Adjustments for blue pixels (pixels where the blue component is the maximum)
+@item magentas
+Adjustments for magenta pixels (pixels where the green component is the minimum)
+@item whites
+Adjustments for white pixels (pixels where all components are greater than 128)
+@item neutrals
+Adjustments for all pixels except pure black and pure white
+@item blacks
+Adjustments for black pixels (pixels where all components are lesser than 128)
+@item psfile
+Specify a Photoshop selective color file (@code{.asv}) to import the settings from.
 @end table
 
-One useful instance of this option is that when you know a specific device's
-maximum allowed resolution, you can use this to limit the output video to
-that, while retaining the aspect ratio. For example, device A allows
-1280x720 playback, and your video is 1920x800. Using this option (set it to
-decrease) and specifying 1280x720 to the command line makes the output
-1280x533.
+All the adjustment settings (@option{reds}, @option{yellows}, ...) accept up to
+4 space separated floating point adjustment values in the [-1,1] range,
+respectively to adjust the amount of cyan, magenta, yellow and black for the
+pixels of its range.
 
-Please note that this is a different thing than specifying -1 for @option{w}
-or @option{h}, you still need to specify the output resolution for this option
-to work.
+@subsection Examples
 
-@item force_divisible_by
-Ensures that both the output dimensions, width and height, are divisible by the
-given integer when used together with @option{force_original_aspect_ratio}. This
-works similar to using @code{-n} in the @option{w} and @option{h} options.
+@itemize
+@item
+Increase cyan by 50% and reduce yellow by 33% in all green areas, and
+increase magenta by 27% in blue areas:
+@example
+selectivecolor=greens=.5 0 -.33 0:blues=0 .27
+@end example
 
-This option respects the value set for @option{force_original_aspect_ratio},
-increasing or decreasing the resolution accordingly. The video's aspect ratio
-may be slightly modified.
+@item
+Use a Photoshop selective color preset:
+@example
+selectivecolor=psfile=MySelectiveColorPresets/Misty.asv
+@end example
+@end itemize
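The difference between the two @option{correction_method} modes can be sketched on a single 8-bit component; this is a hypothetical simplification, since the real filter additionally weights each adjustment by the pixel's purity, which is omitted here.

```python
def apply_adjustment(value, adjustment, method="absolute"):
    """Apply one CMYK adjustment in [-1, 1] to an 8-bit component value."""
    if method == "absolute":
        adjusted = value + adjustment * 255      # applied as-is
    else:  # "relative"
        adjusted = value + adjustment * value    # scaled by the original value
    return min(255.0, max(0.0, adjusted))
```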
 
-This option can be handy if you need to have a video fit within or exceed
-a defined resolution using @option{force_original_aspect_ratio} but also have
-encoder restrictions on width or height divisibility.
+@anchor{separatefields}
+@section separatefields
 
-@item reset_sar
-Works the same as the identical @ref{scale} filter option.
+The @code{separatefields} filter takes a frame-based video input and splits
+each frame into its component fields, producing a new half-height clip
+with twice the frame rate and twice the frame count.
 
-@item eval
-Specify when to evaluate @var{width} and @var{height} expression. It accepts the following values:
+This filter uses field-dominance information in the frame to decide which
+of each pair of fields to place first in the output.
+If it gets this wrong, use the @ref{setfield} filter before the
+@code{separatefields} filter.
 
-@table @samp
-@item init
-Only evaluate expressions once during the filter initialization or when a command is processed.
+@section setdar, setsar
 
-@item frame
-Evaluate expressions for each incoming frame.
+The @code{setdar} filter sets the Display Aspect Ratio for the filter
+output video.
 
-@end table
+This is done by changing the specified Sample (aka Pixel) Aspect
+Ratio, according to the following equation:
+@example
+@var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
+@end example
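A worked instance of the equation above, using exact rational arithmetic: a 720x576 frame with a 16:15 sample aspect ratio displays as 4:3.

```python
from fractions import Fraction

def display_aspect_ratio(width, height, sar):
    # DAR = HORIZONTAL_RESOLUTION / VERTICAL_RESOLUTION * SAR
    return Fraction(width, height) * sar

dar = display_aspect_ratio(720, 576, Fraction(16, 15))
```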
 
-@end table
+Keep in mind that the @code{setdar} filter does not modify the pixel
+dimensions of the video frame. Also, the display aspect ratio set by
+this filter may be changed by later filters in the filterchain,
+e.g. in case of scaling or if another "setdar" or a "setsar" filter is
+applied.
 
-The values of the @option{w} and @option{h} options are expressions
-containing the following constants:
+The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
+the filter output video.
 
-@table @var
-@item in_w
-@item in_h
-The input width and height
-
-@item iw
-@item ih
-These are the same as @var{in_w} and @var{in_h}.
-
-@item out_w
-@item out_h
-The output (scaled) width and height
-
-@item ow
-@item oh
-These are the same as @var{out_w} and @var{out_h}
+Note that as a consequence of the application of this filter, the
+output display aspect ratio will change according to the equation
+above.
 
-@item a
-The same as @var{iw} / @var{ih}
+Keep in mind that the sample aspect ratio set by the @code{setsar}
+filter may be changed by later filters in the filterchain, e.g. if
+another "setsar" or a "setdar" filter is applied.
 
-@item sar
-input sample aspect ratio
+It accepts the following parameters:
 
-@item dar
-The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
+@table @option
+@item r, ratio, dar (@code{setdar} only), sar (@code{setsar} only)
+Set the aspect ratio used by the filter.
 
-@item n
-The (sequential) number of the input frame, starting from 0.
-Only available with @code{eval=frame}.
+The parameter can be a floating point number string, or an expression. If the
+parameter is not specified, the value "0" is assumed, meaning that the same
+input value is used.
 
-@item t
-The presentation timestamp of the input frame, expressed as a number of
-seconds. Only available with @code{eval=frame}.
+@item max
+Set the maximum integer value to use for expressing numerator and
+denominator when reducing the expressed aspect ratio to a rational.
+Default value is @code{100}.
 
-@item pos
-The position (byte offset) of the frame in the input stream, or NaN if
-this information is unavailable and/or meaningless (for example in case of synthetic video).
-Only available with @code{eval=frame}.
-Deprecated, do not use.
 @end table
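The effect of the @option{max} option can be sketched with the standard library; note that this is only an approximation of the filter's behaviour, since @code{Fraction.limit_denominator} bounds only the denominator, while the filter bounds both numerator and denominator when reducing the ratio.

```python
from fractions import Fraction

def reduce_ratio(value, max_int=100):
    """Approximate `value` by a rational with denominator <= max_int."""
    return Fraction(value).limit_denominator(max_int)
```

For example, the floating point string @code{1.77777} reduces to the rational 16/9 with the default @option{max} of 100.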
 
-@section scale2ref_npp
-
-Use the NVIDIA Performance Primitives (libnpp) to scale (resize) the input
-video, based on a reference video.
-
-See the @ref{scale_npp} filter for available options, scale2ref_npp supports the same
-but uses the reference video instead of the main input as basis. scale2ref_npp
-also supports the following additional constants for the @option{w} and
-@option{h} options:
-
-@table @var
-@item main_w
-@item main_h
-The main input video's width and height
-
-@item main_a
-The same as @var{main_w} / @var{main_h}
+The parameter @var{sar} is an expression containing the following constants:
 
-@item main_sar
-The main input video's sample aspect ratio
+@table @option
+@item w, h
+The input width and height.
 
-@item main_dar, mdar
-The main input video's display aspect ratio. Calculated from
-@code{(main_w / main_h) * main_sar}.
+@item a
+Same as @var{w} / @var{h}.
 
-@item main_n
-The (sequential) number of the main input frame, starting from 0.
-Only available with @code{eval=frame}.
+@item sar
+The input sample aspect ratio.
 
-@item main_t
-The presentation timestamp of the main input frame, expressed as a number of
-seconds. Only available with @code{eval=frame}.
+@item dar
+The input display aspect ratio. It is the same as
+(@var{w} / @var{h}) * @var{sar}.
 
-@item main_pos
-The position (byte offset) of the frame in the main input stream, or NaN if
-this information is unavailable and/or meaningless (for example in case of synthetic video).
-Only available with @code{eval=frame}.
+@item hsub, vsub
+Horizontal and vertical chroma subsample values. For example, for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
 @end table
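The @var{hsub}/@var{vsub} constants can be illustrated with a small assumed table of common pixel formats; FFmpeg derives these values from each format's chroma-subsampling shifts in the pixel format descriptor.

```python
# log2 chroma width/height shifts for a few common pixel formats.
LOG2_CHROMA = {
    "yuv444p": (0, 0),  # no subsampling
    "yuv422p": (1, 0),  # chroma halved horizontally
    "yuv420p": (1, 1),  # chroma halved in both directions
}

def subsample_factors(pix_fmt):
    log2_w, log2_h = LOG2_CHROMA[pix_fmt]
    return 1 << log2_w, 1 << log2_h  # (hsub, vsub)
```

This matches the example in the text: for "yuv422p", @var{hsub} is 2 and @var{vsub} is 1.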
 
 @subsection Examples
 
 @itemize
+
 @item
-Scale a subtitle stream (b) to match the main video (a) in size before overlaying
+To change the display aspect ratio to 16:9, specify one of the following:
 @example
-'scale2ref_npp[b][a];[a][b]overlay_cuda'
+setdar=dar=1.77777
+setdar=dar=16/9
 @end example
 
 @item
-Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
+To change the sample aspect ratio to 10:11, specify:
 @example
-[logo-in][video-in]scale2ref_npp=w=oh*mdar:h=ih/10[logo-out][video-out]
+setsar=sar=10/11
 @end example
-@end itemize
 
-@section scale_vt
+@item
+To set a display aspect ratio of 16:9, and specify a maximum integer value of
+1000 in the aspect ratio reduction, use the command:
+@example
+setdar=ratio=16/9:max=1000
+@end example
 
-Scale and convert the color parameters using VTPixelTransferSession.
+@end itemize
 
-The filter accepts the following options:
-@table @option
-@item w
-@item h
-Set the output video dimension expression. Default value is the input dimension.
+@anchor{setfield}
+@section setfield
 
-@item color_matrix
-Set the output colorspace matrix.
+Force field for the output video frame.
 
-@item color_primaries
-Set the output color primaries.
+The @code{setfield} filter marks the interlace type field for the
+output frames. It does not change the input frame, but only sets the
+corresponding property, which affects how the frame is treated by
+following filters (e.g. @code{fieldorder} or @code{yadif}).
 
-@item color_transfer
-Set the output transfer characteristics.
+The filter accepts the following options:
 
-@end table
+@table @option
 
-@section scharr
-Apply scharr operator to input video stream.
+@item mode
+Available values are:
 
-The filter accepts the following option:
+@table @samp
+@item auto
+Keep the same field property.
 
-@table @option
-@item planes
-Set which planes will be processed, unprocessed planes will be copied.
-By default value 0xf, all planes will be processed.
+@item bff
+Mark the frame as bottom-field-first.
 
-@item scale
-Set value which will be multiplied with filtered result.
+@item tff
+Mark the frame as top-field-first.
 
-@item delta
-Set value which will be added to filtered result.
+@item prog
+Mark the frame as progressive.
+@end table
 @end table
 
-@subsection Commands
+@anchor{setparams}
+@section setparams
 
-This filter supports the all above options as @ref{commands}.
+Force frame parameter for the output video frame.
 
-@section scroll
-Scroll input video horizontally and/or vertically by constant speed.
+The @code{setparams} filter marks interlace and color range for the
+output frames. It does not change the input frame, but only sets the
+corresponding property, which affects how the frame is treated by
+filters/encoders.
 
-The filter accepts the following options:
 @table @option
-@item horizontal, h
-Set the horizontal scrolling speed. Default is 0. Allowed range is from -1 to 1.
-Negative values changes scrolling direction.
-
-@item vertical, v
-Set the vertical scrolling speed. Default is 0. Allowed range is from -1 to 1.
-Negative values changes scrolling direction.
+@item field_mode
+Available values are:
 
-@item hpos
-Set the initial horizontal scrolling position. Default is 0. Allowed range is from 0 to 1.
+@table @samp
+@item auto
+Keep the same field property (default).
 
-@item vpos
-Set the initial vertical scrolling position. Default is 0. Allowed range is from 0 to 1.
-@end table
+@item bff
+Mark the frame as bottom-field-first.
 
-@subsection Commands
+@item tff
+Mark the frame as top-field-first.
 
-This filter supports the following @ref{commands}:
-@table @option
-@item horizontal, h
-Set the horizontal scrolling speed.
-@item vertical, v
-Set the vertical scrolling speed.
+@item prog
+Mark the frame as progressive.
 @end table
 
-@anchor{scdet}
-@section scdet
-
-Detect video scene change.
-
-This filter sets frame metadata with mafd between frame, the scene score, and
-forward the frame to the next filter, so they can use these metadata to detect
-scene change or others.
-
-In addition, this filter logs a message and sets frame metadata when it detects
-a scene change by @option{threshold}.
+@item range
+Available values are:
 
-@code{lavfi.scd.mafd} metadata keys are set with mafd for every frame.
+@table @samp
+@item auto
+Keep the same color range property (default).
 
-@code{lavfi.scd.score} metadata keys are set with scene change score for every frame
-to detect scene change.
+@item unspecified, unknown
+Mark the frame as unspecified color range.
 
-@code{lavfi.scd.time} metadata keys are set with current filtered frame time which
-detect scene change with @option{threshold}.
+@item limited, tv, mpeg
+Mark the frame as limited range.
 
-The filter accepts the following options:
+@item full, pc, jpeg
+Mark the frame as full range.
+@end table
 
-@table @option
-@item threshold, t
-Set the scene change detection threshold as a percentage of maximum change. Good
-values are in the @code{[8.0, 14.0]} range. The range for @option{threshold} is
-@code{[0., 100.]}.
+@item color_primaries
+Set the color primaries.
+Available values are:
 
-Default value is @code{10.}.
+@table @samp
+@item auto
+Keep the same color primaries property (default).
 
-@item sc_pass, s
-Set the flag to pass scene change frames to the next filter. Default value is @code{0}
-You can enable it if you want to get snapshot of scene change frames only.
+@item bt709
+@item unknown
+@item bt470m
+@item bt470bg
+@item smpte170m
+@item smpte240m
+@item film
+@item bt2020
+@item smpte428
+@item smpte431
+@item smpte432
+@item jedec-p22
 @end table
 
-@anchor{selectivecolor}
-@section selectivecolor
+@item color_trc
+Set the color transfer.
+Available values are:
 
-Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such
-as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined
-by the "purity" of the color (that is, how saturated it already is).
+@table @samp
+@item auto
+Keep the same color trc property (default).
 
-This filter is similar to the Adobe Photoshop Selective Color tool.
+@item bt709
+@item unknown
+@item bt470m
+@item bt470bg
+@item smpte170m
+@item smpte240m
+@item linear
+@item log100
+@item log316
+@item iec61966-2-4
+@item bt1361e
+@item iec61966-2-1
+@item bt2020-10
+@item bt2020-12
+@item smpte2084
+@item smpte428
+@item arib-std-b67
+@end table
 
-The filter accepts the following options:
+@item colorspace
+Set the colorspace.
+Available values are:
 
-@table @option
-@item correction_method
-Select color correction method.
+@table @samp
+@item auto
+Keep the same colorspace property (default).
+
+@item gbr
+@item bt709
+@item unknown
+@item fcc
+@item bt470bg
+@item smpte170m
+@item smpte240m
+@item ycgco
+@item bt2020nc
+@item bt2020c
+@item smpte2085
+@item chroma-derived-nc
+@item chroma-derived-c
+@item ictcp
+@end table
 
+@item chroma_location
+Set the chroma sample location.
 Available values are:
+
 @table @samp
-@item absolute
-Specified adjustments are applied "as-is" (added/subtracted to original pixel
-component value).
-@item relative
-Specified adjustments are relative to the original component value.
+@item auto
+Keep the same chroma location (default).
+
+@item unspecified, unknown
+@item left
+@item center
+@item topleft
+@item top
+@item bottomleft
+@item bottom
 @end table
-Default is @code{absolute}.
-@item reds
-Adjustments for red pixels (pixels where the red component is the maximum)
-@item yellows
-Adjustments for yellow pixels (pixels where the blue component is the minimum)
-@item greens
-Adjustments for green pixels (pixels where the green component is the maximum)
-@item cyans
-Adjustments for cyan pixels (pixels where the red component is the minimum)
-@item blues
-Adjustments for blue pixels (pixels where the blue component is the maximum)
-@item magentas
-Adjustments for magenta pixels (pixels where the green component is the minimum)
-@item whites
-Adjustments for white pixels (pixels where all components are greater than 128)
-@item neutrals
-Adjustments for all pixels except pure black and pure white
-@item blacks
-Adjustments for black pixels (pixels where all components are lesser than 128)
-@item psfile
-Specify a Photoshop selective color file (@code{.asv}) to import the settings from.
 @end table
 
-All the adjustment settings (@option{reds}, @option{yellows}, ...) accept up to
-4 space separated floating point adjustment values in the [-1,1] range,
-respectively to adjust the amount of cyan, magenta, yellow and black for the
-pixels of its range.
+@section shear
+Apply shear transform to input video.
 
-@subsection Examples
+This filter supports the following options:
 
-@itemize
-@item
-Increase cyan by 50% and reduce yellow by 33% in every green areas, and
-increase magenta by 27% in blue areas:
-@example
-selectivecolor=greens=.5 0 -.33 0:blues=0 .27
-@end example
+@table @option
+@item shx
+Shear factor in X-direction. Default value is 0.
+Allowed range is from -2 to 2.
 
-@item
-Use a Photoshop selective color preset:
-@example
-selectivecolor=psfile=MySelectiveColorPresets/Misty.asv
-@end example
-@end itemize
+@item shy
+Shear factor in Y-direction. Default value is 0.
+Allowed range is from -2 to 2.
 
-@anchor{separatefields}
-@section separatefields
+@item fillcolor, c
+Set the color used to fill the output area not covered by the transformed
+video. For the general syntax of this option, check the
+@ref{color syntax,,"Color" section in the ffmpeg-utils manual,ffmpeg-utils}.
+If the special value "none" is selected then no
+background is printed (useful for example if the background is never shown).
 
-The @code{separatefields} takes a frame-based video input and splits
-each frame into its components fields, producing a new half height clip
-with twice the frame rate and twice the frame count.
+Default value is "black".
 
-This filter use field-dominance information in frame to decide which
-of each pair of fields to place first in the output.
-If it gets it wrong use @ref{setfield} filter before @code{separatefields} filter.
+@item interp
+Set interpolation type. Can be @code{bilinear} or @code{nearest}. Default is @code{bilinear}.
+@end table
 
-@section setdar, setsar
+@subsection Commands
 
-The @code{setdar} filter sets the Display Aspect Ratio for the filter
-output video.
+This filter supports all the above options as @ref{commands}.
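+
+@subsection Examples
+
+@itemize
+@item
+Shear the input slightly along the X axis, filling the uncovered area with
+gray (@file{INPUT} and @file{OUTPUT} are placeholder file names):
+@example
+ffmpeg -i INPUT -vf "shear=shx=0.3:fillcolor=gray" OUTPUT
+@end example
+@end itemize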
 
-This is done by changing the specified Sample (aka Pixel) Aspect
-Ratio, according to the following equation:
-@example
-@var{DAR} = @var{HORIZONTAL_RESOLUTION} / @var{VERTICAL_RESOLUTION} * @var{SAR}
-@end example
+@section showinfo
 
-Keep in mind that the @code{setdar} filter does not modify the pixel
-dimensions of the video frame. Also, the display aspect ratio set by
-this filter may be changed by later filters in the filterchain,
-e.g. in case of scaling or if another "setdar" or a "setsar" filter is
-applied.
+Show a line containing various information for each input video frame.
+The input video is not modified.
 
-The @code{setsar} filter sets the Sample (aka Pixel) Aspect Ratio for
-the filter output video.
+This filter supports the following options:
 
-Note that as a consequence of the application of this filter, the
-output display aspect ratio will change according to the equation
-above.
+@table @option
+@item checksum
+Calculate checksums of each plane. Enabled by default.
 
-Keep in mind that the sample aspect ratio set by the @code{setsar}
-filter may be changed by later filters in the filterchain, e.g. if
-another "setsar" or a "setdar" filter is applied.
+@item udu_sei_as_ascii
+Try to print user data unregistered SEI as ASCII characters when possible,
+in hex format otherwise.
+@end table
 
-It accepts the following parameters:
+The shown line contains a sequence of key/value pairs of the form
+@var{key}:@var{value}.
+
+The following values are shown in the output:
 
 @table @option
-@item r, ratio, dar (@code{setdar} only), sar (@code{setsar} only)
-Set the aspect ratio used by the filter.
+@item n
+The (sequential) number of the input frame, starting from 0.
 
-The parameter can be a floating point number string, or an expression. If the
-parameter is not specified, the value "0" is assumed, meaning that the same
-input value is used.
+@item pts
+The Presentation TimeStamp of the input frame, expressed as a number of
+time base units. The time base unit depends on the filter input pad.
 
-@item max
-Set the maximum integer value to use for expressing numerator and
-denominator when reducing the expressed aspect ratio to a rational.
-Default value is @code{100}.
+@item pts_time
+The Presentation TimeStamp of the input frame, expressed as a number of
+seconds.
 
-@end table
+@item fmt
+The pixel format name.
 
-The parameter @var{sar} is an expression containing the following constants:
+@item sar
+The sample aspect ratio of the input frame, expressed in the form
+@var{num}/@var{den}.
 
-@table @option
-@item w, h
-The input width and height.
+@item s
+The size of the input frame. For the syntax of this option, check the
+@ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
 
-@item a
-Same as @var{w} / @var{h}.
+@item i
+The type of interlaced mode ("P" for "progressive", "T" for top field first, "B"
+for bottom field first).
 
-@item sar
-The input sample aspect ratio.
+@item iskey
+This is 1 if the frame is a key frame, 0 otherwise.
 
-@item dar
-The input display aspect ratio. It is the same as
-(@var{w} / @var{h}) * @var{sar}.
+@item type
+The picture type of the input frame ("I" for an I-frame, "P" for a
+P-frame, "B" for a B-frame, or "?" for an unknown type).
+Also refer to the documentation of the @code{AVPictureType} enum and of
+the @code{av_get_picture_type_char} function defined in
+@file{libavutil/avutil.h}.
 
-@item hsub, vsub
-Horizontal and vertical chroma subsample values. For example, for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
-@end table
+@item checksum
+The Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.
 
-@subsection Examples
+@item plane_checksum
+The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
+expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]".
 
-@itemize
+@item mean
+The mean value of pixels in each plane of the input frame, expressed in the form
+"[@var{mean0} @var{mean1} @var{mean2} @var{mean3}]".
 
-@item
-To change the display aspect ratio to 16:9, specify one of the following:
-@example
-setdar=dar=1.77777
-setdar=dar=16/9
-@end example
+@item stdev
+The standard deviation of pixel values in each plane of the input frame, expressed
+in the form "[@var{stdev0} @var{stdev1} @var{stdev2} @var{stdev3}]".
 
-@item
-To change the sample aspect ratio to 10:11, specify:
-@example
-setsar=sar=10/11
-@end example
-
-@item
-To set a display aspect ratio of 16:9, and specify a maximum integer value of
-1000 in the aspect ratio reduction, use the command:
-@example
-setdar=ratio=16/9:max=1000
-@end example
-
-@end itemize
-
-@anchor{setfield}
-@section setfield
+@end table
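+
+@subsection Examples
+
+@itemize
+@item
+Print the per-frame information while discarding the video output
+(@file{INPUT} is a placeholder file name):
+@example
+ffmpeg -i INPUT -vf showinfo -f null -
+@end example
+@end itemize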
 
-Force field for the output video frame.
+@section showpalette
 
-The @code{setfield} filter marks the interlace type field for the
-output frames. It does not change the input frame, but only sets the
-corresponding property, which affects how the frame is treated by
-following filters (e.g. @code{fieldorder} or @code{yadif}).
+Displays the 256 colors palette of each frame. This filter is only relevant for
+@var{pal8} pixel format frames.
 
-The filter accepts the following options:
+It accepts the following option:
 
 @table @option
+@item s
+Set the size of the box used to represent one palette color entry. Default is
+@code{30} (for a @code{30x30} pixel box).
+@end table
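+
+@subsection Examples
+
+@itemize
+@item
+Display the palette of a paletted input, e.g. a GIF file (file names are
+placeholders):
+@example
+ffmpeg -i INPUT.gif -vf showpalette OUTPUT
+@end example
+@end itemize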
 
-@item mode
-Available values are:
-
-@table @samp
-@item auto
-Keep the same field property.
+@section shuffleframes
 
-@item bff
-Mark the frame as bottom-field-first.
+Reorder and/or duplicate and/or drop video frames.
 
-@item tff
-Mark the frame as top-field-first.
+It accepts the following parameters:
 
-@item prog
-Mark the frame as progressive.
-@end table
+@table @option
+@item mapping
+Set the destination indexes of input frames.
+This is a space- or '|'-separated list of indexes that maps input frames to
+output frames. The number of indexes also sets the maximal value that each
+index may have. The special index '-1' means the frame is dropped.
 @end table
 
-@anchor{setparams}
-@section setparams
-
-Force frame parameter for the output video frame.
-
-The @code{setparams} filter marks interlace and color range for the
-output frames. It does not change the input frame, but only sets the
-corresponding property, which affects how the frame is treated by
-filters/encoders.
+The first frame has the index 0. The default is to keep the input unchanged.
 
-@table @option
-@item field_mode
-Available values are:
+@subsection Examples
 
-@table @samp
-@item auto
-Keep the same field property (default).
+@itemize
+@item
+Swap second and third frame of every three frames of the input:
+@example
+ffmpeg -i INPUT -vf "shuffleframes=0 2 1" OUTPUT
+@end example
 
-@item bff
-Mark the frame as bottom-field-first.
+@item
+Swap 10th and 1st frame of every ten frames of the input:
+@example
+ffmpeg -i INPUT -vf "shuffleframes=9 1 2 3 4 5 6 7 8 0" OUTPUT
+@end example
+@end itemize
 
-@item tff
-Mark the frame as top-field-first.
+@section shufflepixels
 
-@item prog
-Mark the frame as progressive.
-@end table
+Reorder pixels in video frames.
 
-@item range
-Available values are:
+This filter accepts the following options:
 
-@table @samp
-@item auto
-Keep the same color range property (default).
+@table @option
+@item direction, d
+Set the shuffle direction. Can be forward or inverse.
+Default direction is forward.
 
-@item unspecified, unknown
-Mark the frame as unspecified color range.
+@item mode, m
+Set shuffle mode. Can be horizontal, vertical or block mode.
 
-@item limited, tv, mpeg
-Mark the frame as limited range.
+@item width, w
+@item height, h
+Set the shuffle block size. In horizontal shuffle mode only the width
+part of the size is used, and in vertical shuffle mode only the height
+part of the size is used.
 
-@item full, pc, jpeg
-Mark the frame as full range.
+@item seed, s
+Set the random seed used for shuffling pixels. Setting it is mainly useful
+to be able to reverse the filtering and recover the original input.
+For example, to reverse a forward shuffle you need to use the same parameters
+and the exact same seed, and set the direction to inverse.
 @end table
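+
+@subsection Examples
+
+@itemize
+@item
+Shuffle pixels in 16x16 blocks, then reverse the shuffle with the same
+(arbitrarily chosen) seed to recover the original input (file names are
+placeholders):
+@example
+ffmpeg -i INPUT -vf "shufflepixels=m=block:w=16:h=16:s=42,shufflepixels=m=block:w=16:h=16:s=42:d=inverse" OUTPUT
+@end example
+@end itemize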
 
-@item color_primaries
-Set the color primaries.
-Available values are:
+@section shuffleplanes
 
-@table @samp
-@item auto
-Keep the same color primaries property (default).
+Reorder and/or duplicate video planes.
 
-@item bt709
-@item unknown
-@item bt470m
-@item bt470bg
-@item smpte170m
-@item smpte240m
-@item film
-@item bt2020
-@item smpte428
-@item smpte431
-@item smpte432
-@item jedec-p22
-@end table
+It accepts the following parameters:
 
-@item color_trc
-Set the color transfer.
-Available values are:
+@table @option
 
-@table @samp
-@item auto
-Keep the same color trc property (default).
+@item map0
+The index of the input plane to be used as the first output plane.
 
-@item bt709
-@item unknown
-@item bt470m
-@item bt470bg
-@item smpte170m
-@item smpte240m
-@item linear
-@item log100
-@item log316
-@item iec61966-2-4
-@item bt1361e
-@item iec61966-2-1
-@item bt2020-10
-@item bt2020-12
-@item smpte2084
-@item smpte428
-@item arib-std-b67
-@end table
+@item map1
+The index of the input plane to be used as the second output plane.
 
-@item colorspace
-Set the colorspace.
-Available values are:
+@item map2
+The index of the input plane to be used as the third output plane.
 
-@table @samp
-@item auto
-Keep the same colorspace property (default).
+@item map3
+The index of the input plane to be used as the fourth output plane.
 
-@item gbr
-@item bt709
-@item unknown
-@item fcc
-@item bt470bg
-@item smpte170m
-@item smpte240m
-@item ycgco
-@item bt2020nc
-@item bt2020c
-@item smpte2085
-@item chroma-derived-nc
-@item chroma-derived-c
-@item ictcp
 @end table
 
-@item chroma_location
-Set the chroma sample location.
-Available values are:
+The first plane has the index 0. The default is to keep the input unchanged.
 
-@table @samp
-@item auto
-Keep the same chroma location (default).
+@subsection Examples
 
-@item unspecified, unknown
-@item left
-@item center
-@item topleft
-@item top
-@item bottomleft
-@item bottom
-@end table
-@end table
+@itemize
+@item
+Swap the second and third planes of the input:
+@example
+ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
+@end example
+@end itemize
 
-@section sharpen_npp
-Use the NVIDIA Performance Primitives (libnpp) to perform image sharpening with
-border control.
+@anchor{signalstats}
+@section signalstats
+Evaluate various visual metrics that assist in determining issues associated
+with the digitization of analog video media.
 
-The following additional options are accepted:
-@table @option
+By default the filter will log these metadata values:
 
-@item border_type
-Type of sampling to be used ad frame borders. One of the following:
 @table @option
+@item YMIN
+Display the minimal Y value contained within the input frame. Expressed in
+range of [0-255].
 
-@item replicate
-Replicate pixel values.
+@item YLOW
+Display the Y value at the 10% percentile within the input frame. Expressed in
+range of [0-255].
 
-@end table
-@end table
+@item YAVG
+Display the average Y value within the input frame. Expressed in range of
+[0-255].
 
-@section shear
-Apply shear transform to input video.
+@item YHIGH
+Display the Y value at the 90% percentile within the input frame. Expressed in
+range of [0-255].
 
-This filter supports the following options:
+@item YMAX
+Display the maximum Y value contained within the input frame. Expressed in
+range of [0-255].
 
-@table @option
-@item shx
-Shear factor in X-direction. Default value is 0.
-Allowed range is from -2 to 2.
-
-@item shy
-Shear factor in Y-direction. Default value is 0.
-Allowed range is from -2 to 2.
-
-@item fillcolor, c
-Set the color used to fill the output area not covered by the transformed
-video. For the general syntax of this option, check the
-@ref{color syntax,,"Color" section in the ffmpeg-utils manual,ffmpeg-utils}.
-If the special value "none" is selected then no
-background is printed (useful for example if the background is never shown).
-
-Default value is "black".
-
-@item interp
-Set interpolation type. Can be @code{bilinear} or @code{nearest}. Default is @code{bilinear}.
-@end table
-
-@subsection Commands
-
-This filter supports the all above options as @ref{commands}.
-
-@section showinfo
-
-Show a line containing various information for each input video frame.
-The input video is not modified.
-
-This filter supports the following options:
-
-@table @option
-@item checksum
-Calculate checksums of each plane. By default enabled.
-
-@item udu_sei_as_ascii
-Try to print user data unregistered SEI as ascii character when possible,
-in hex format otherwise.
-@end table
-
-The shown line contains a sequence of key/value pairs of the form
-@var{key}:@var{value}.
-
-The following values are shown in the output:
-
-@table @option
-@item n
-The (sequential) number of the input frame, starting from 0.
-
-@item pts
-The Presentation TimeStamp of the input frame, expressed as a number of
-time base units. The time base unit depends on the filter input pad.
-
-@item pts_time
-The Presentation TimeStamp of the input frame, expressed as a number of
-seconds.
-
-@item fmt
-The pixel format name.
-
-@item sar
-The sample aspect ratio of the input frame, expressed in the form
-@var{num}/@var{den}.
-
-@item s
-The size of the input frame. For the syntax of this option, check the
-@ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
-
-@item i
-The type of interlaced mode ("P" for "progressive", "T" for top field first, "B"
-for bottom field first).
-
-@item iskey
-This is 1 if the frame is a key frame, 0 otherwise.
-
-@item type
-The picture type of the input frame ("I" for an I-frame, "P" for a
-P-frame, "B" for a B-frame, or "?" for an unknown type).
-Also refer to the documentation of the @code{AVPictureType} enum and of
-the @code{av_get_picture_type_char} function defined in
-@file{libavutil/avutil.h}.
-
-@item checksum
-The Adler-32 checksum (printed in hexadecimal) of all the planes of the input frame.
-
-@item plane_checksum
-The Adler-32 checksum (printed in hexadecimal) of each plane of the input frame,
-expressed in the form "[@var{c0} @var{c1} @var{c2} @var{c3}]".
-
-@item mean
-The mean value of pixels in each plane of the input frame, expressed in the form
-"[@var{mean0} @var{mean1} @var{mean2} @var{mean3}]".
-
-@item stdev
-The standard deviation of pixel values in each plane of the input frame, expressed
-in the form "[@var{stdev0} @var{stdev1} @var{stdev2} @var{stdev3}]".
-
-@end table
-
-@section showpalette
-
-Displays the 256 colors palette of each frame. This filter is only relevant for
-@var{pal8} pixel format frames.
-
-It accepts the following option:
-
-@table @option
-@item s
-Set the size of the box used to represent one palette color entry. Default is
-@code{30} (for a @code{30x30} pixel box).
-@end table
-
-@section shuffleframes
-
-Reorder and/or duplicate and/or drop video frames.
-
-It accepts the following parameters:
-
-@table @option
-@item mapping
-Set the destination indexes of input frames.
-This is space or '|' separated list of indexes that maps input frames to output
-frames. Number of indexes also sets maximal value that each index may have.
-'-1' index have special meaning and that is to drop frame.
-@end table
-
-The first frame has the index 0. The default is to keep the input unchanged.
-
-@subsection Examples
-
-@itemize
-@item
-Swap second and third frame of every three frames of the input:
-@example
-ffmpeg -i INPUT -vf "shuffleframes=0 2 1" OUTPUT
-@end example
-
-@item
-Swap 10th and 1st frame of every ten frames of the input:
-@example
-ffmpeg -i INPUT -vf "shuffleframes=9 1 2 3 4 5 6 7 8 0" OUTPUT
-@end example
-@end itemize
-
-@section shufflepixels
-
-Reorder pixels in video frames.
-
-This filter accepts the following options:
-
-@table @option
-@item direction, d
-Set shuffle direction. Can be forward or inverse direction.
-Default direction is forward.
-
-@item mode, m
-Set shuffle mode. Can be horizontal, vertical or block mode.
-
-@item width, w
-@item height, h
-Set shuffle block_size. In case of horizontal shuffle mode only width
-part of size is used, and in case of vertical shuffle mode only height
-part of size is used.
-
-@item seed, s
-Set random seed used with shuffling pixels. Mainly useful to set to be able
-to reverse filtering process to get original input.
-For example, to reverse forward shuffle you need to use same parameters
-and exact same seed and to set direction to inverse.
-@end table
-
-@section shuffleplanes
-
-Reorder and/or duplicate video planes.
-
-It accepts the following parameters:
-
-@table @option
-
-@item map0
-The index of the input plane to be used as the first output plane.
-
-@item map1
-The index of the input plane to be used as the second output plane.
-
-@item map2
-The index of the input plane to be used as the third output plane.
-
-@item map3
-The index of the input plane to be used as the fourth output plane.
-
-@end table
-
-The first plane has the index 0. The default is to keep the input unchanged.
-
-@subsection Examples
-
-@itemize
-@item
-Swap the second and third planes of the input:
-@example
-ffmpeg -i INPUT -vf shuffleplanes=0:2:1:3 OUTPUT
-@end example
-@end itemize
-
-@anchor{signalstats}
-@section signalstats
-Evaluate various visual metrics that assist in determining issues associated
-with the digitization of analog video media.
-
-By default the filter will log these metadata values:
-
-@table @option
-@item YMIN
-Display the minimal Y value contained within the input frame. Expressed in
-range of [0-255].
-
-@item YLOW
-Display the Y value at the 10% percentile within the input frame. Expressed in
-range of [0-255].
-
-@item YAVG
-Display the average Y value within the input frame. Expressed in range of
-[0-255].
-
-@item YHIGH
-Display the Y value at the 90% percentile within the input frame. Expressed in
-range of [0-255].
-
-@item YMAX
-Display the maximum Y value contained within the input frame. Expressed in
-range of [0-255].
-
-@item UMIN
-Display the minimal U value contained within the input frame. Expressed in
-range of [0-255].
+@item UMIN
+Display the minimal U value contained within the input frame. Expressed in
+range of [0-255].
 
 @item ULOW
 Display the U value at the 10% percentile within the input frame. Expressed in
@@ -24417,64 +23876,23 @@ The command above can also be specified as:
 transpose=1:portrait
 @end example
 
-@section transpose_npp
-
-Transpose rows with columns in the input video and optionally flip it.
-For more in depth examples see the @ref{transpose} video filter, which shares mostly the same options.
+@section trim
+Trim the input so that the output contains one continuous subpart of the input.
 
 It accepts the following parameters:
-
 @table @option
+@item start
+Specify the time of the start of the kept section, i.e. the frame with the
+timestamp @var{start} will be the first frame in the output.
 
-@item dir
-Specify the transposition direction.
+@item end
+Specify the time of the first frame that will be dropped, i.e. the frame
+immediately preceding the one with the timestamp @var{end} will be the last
+frame in the output.
 
-Can assume the following values:
-@table @samp
-@item cclock_flip
-Rotate by 90 degrees counterclockwise and vertically flip. (default)
-
-@item clock
-Rotate by 90 degrees clockwise.
-
-@item cclock
-Rotate by 90 degrees counterclockwise.
-
-@item clock_flip
-Rotate by 90 degrees clockwise and vertically flip.
-@end table
-
-@item passthrough
-Do not apply the transposition if the input geometry matches the one
-specified by the specified value. It accepts the following values:
-@table @samp
-@item none
-Always apply transposition. (default)
-@item portrait
-Preserve portrait geometry (when @var{height} >= @var{width}).
-@item landscape
-Preserve landscape geometry (when @var{width} >= @var{height}).
-@end table
-
-@end table
-
-@section trim
-Trim the input so that the output contains one continuous subpart of the input.
-
-It accepts the following parameters:
-@table @option
-@item start
-Specify the time of the start of the kept section, i.e. the frame with the
-timestamp @var{start} will be the first frame in the output.
-
-@item end
-Specify the time of the first frame that will be dropped, i.e. the frame
-immediately preceding the one with the timestamp @var{end} will be the last
-frame in the output.
-
-@item start_pts
-This is the same as @var{start}, except this option sets the start timestamp
-in timebase units instead of seconds.
+@item start_pts
+This is the same as @var{start}, except this option sets the start timestamp
+in timebase units instead of seconds.
 
 @item end_pts
 This is the same as @var{end}, except this option sets the end timestamp
@@ -26415,237 +25833,807 @@ ffmpeg -i first.mp4 -i second.mp4 -filter_complex xfade=transition=fade:duration
 @end example
 @end itemize
 
-@section xmedian
-Pick median pixels from several input videos.
+@section xmedian
+Pick median pixels from several input videos.
+
+The filter accepts the following options:
+
+@table @option
+@item inputs
+Set number of inputs.
+Default is 3. Allowed range is from 3 to 255.
+If the number of inputs is even, the result will be the mean of the two middle values.
+
+@item planes
+Set which planes to filter. Default value is @code{15}, by which all planes are processed.
+
+@item percentile
+Set median percentile. Default value is @code{0.5}.
+The default value of @code{0.5} will always pick median values, while @code{0}
+will pick minimum values, and @code{1} maximum values.
+@end table
+
+@subsection Commands
+
+This filter supports all the above options as @ref{commands}, excluding the option @code{inputs}.
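+
+@subsection Examples
+
+@itemize
+@item
+Pick the per-pixel median of three input videos (file names are placeholders):
+@example
+ffmpeg -i in0.mp4 -i in1.mp4 -i in2.mp4 -filter_complex xmedian=inputs=3 OUTPUT
+@end example
+@end itemize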
+
+@anchor{xpsnr}
+@section xpsnr
+
+Obtain the average (across all input frames) and minimum (across all color plane averages)
+eXtended Perceptually weighted peak Signal-to-Noise Ratio (XPSNR) between two input videos.
+
+The XPSNR is a low-complexity psychovisually motivated distortion measurement algorithm for
+assessing the difference between two video streams or images. This is especially useful for
+objectively quantifying the distortions caused by video and image codecs, as an alternative
+to a formal subjective test. The logarithmic XPSNR output values are in a similar range as
+those of traditional @ref{psnr} assessments but better reflect human impressions of visual
+coding quality. More details on the XPSNR measure, which essentially represents a blockwise
+weighted variant of the PSNR measure, can be found in the following freely available papers:
+
+@itemize
+@item
+C. R. Helmrich, M. Siekmann, S. Becker, S. Bosse, D. Marpe, and T. Wiegand, "XPSNR: A
+Low-Complexity Extension of the Perceptually Weighted Peak Signal-to-Noise Ratio for
+High-Resolution Video Quality Assessment," in Proc. IEEE Int. Conf. Acoustics, Speech,
+Sig. Process. (ICASSP), virt./online, May 2020. @url{www.ecodis.de/xpsnr.htm}
+
+@item
+C. R. Helmrich, S. Bosse, H. Schwarz, D. Marpe, and T. Wiegand, "A Study of the
+Extended Perceptually Weighted Peak Signal-to-Noise Ratio (XPSNR) for Video Compression
+with Different Resolutions and Bit Depths," ITU Journal: ICT Discoveries, vol. 3, no.
+1, pp. 65 - 72, May 2020. @url{http://handle.itu.int/11.1002/pub/8153d78b-en}
+@end itemize
+
+When publishing the results of XPSNR assessments obtained using, e.g., this FFmpeg filter, a
+reference to the above papers as a means of documentation is strongly encouraged. The filter
+requires two input videos. The first input is considered a (usually not distorted) reference
+source and is passed unchanged to the output, whereas the second input is a (distorted) test
+signal. Except for the bit depth, these two video inputs must have the same pixel format. In
+addition, for best performance, both compared input videos should be in YCbCr color format.
+
+The obtained overall XPSNR values mentioned above are printed through the logging system. In
+case of input with multiple color planes, we suggest reporting of the minimum XPSNR average.
+
+The following parameter, which behaves like the one for the @ref{psnr} filter, is accepted:
+
+@table @option
+@item stats_file, f
+If specified, the filter will use the named file to save the XPSNR value of each individual
+frame and color plane. When the file name equals "-", that data is sent to standard output.
+@end table
+
+This filter also supports the @ref{framesync} options.
+
+@subsection Examples
+@itemize
+@item
+XPSNR analysis of two 1080p HD videos, ref_source.yuv and test_video.yuv, both at 24 frames
+per second, with color format 4:2:0, bit depth 8, and output of a logfile named "xpsnr.log":
+@example
+ffmpeg -s 1920x1080 -framerate 24 -pix_fmt yuv420p -i ref_source.yuv -s 1920x1080 -framerate
+24 -pix_fmt yuv420p -i test_video.yuv -lavfi xpsnr="stats_file=xpsnr.log" -f null -
+@end example
+
+@item
+XPSNR analysis of two 2160p UHD videos, ref_source.yuv with bit depth 8 and test_video.yuv
+with bit depth 10, both at 60 frames per second with color format 4:2:0, no logfile output:
+@example
+ffmpeg -s 3840x2160 -framerate 60 -pix_fmt yuv420p -i ref_source.yuv -s 3840x2160 -framerate
+60 -pix_fmt yuv420p10le -i test_video.yuv -lavfi xpsnr="stats_file=-" -f null -
+@end example
+@end itemize
+
+@anchor{xstack}
+@section xstack
+Stack video inputs into custom layout.
+
+All streams must be of the same pixel format.
+
+The filter accepts the following options:
+
+@table @option
+@item inputs
+Set number of input streams. Default is 2.
+
+@item layout
+Specify layout of inputs.
+This option requires the desired layout configuration to be explicitly set by the user.
+This sets position of each video input in output. Each input
+is separated by '|'.
+The first number represents the column, and the second number represents the row.
+Numbers start at 0 and are separated by '_'. Optionally one can use wX and hX,
+where X is the index of the video input from which to take the width or height.
+Multiple values can be used when separated by '+'. In such
+case values are summed together.
+
+Note that if inputs are of different sizes gaps may appear, as not all of
+the output video frame will be filled. Similarly, videos can overlap each
+other if their position doesn't leave enough space for the full frame of
+adjoining videos.
+
+For 2 inputs, a default layout of @code{0_0|w0_0} (equivalent to
+@code{grid=2x1}) is set. In all other cases, a layout or a grid must be set by
+the user. Either @code{grid} or @code{layout} can be specified at a time.
+Specifying both will result in an error.
+
+@item grid
+Specify a fixed size grid of inputs.
+This option is used to create a fixed size grid of the input streams. Set the
+grid size in the form @code{COLUMNSxROWS}. There must be @code{ROWS * COLUMNS}
+input streams and they will be arranged as a grid with @code{ROWS} rows and
+@code{COLUMNS} columns. When using this option, each input stream within a row
+must have the same height and all the rows must have the same width.
+
+If @code{grid} is set, then @code{inputs} option is ignored and is implicitly
+set to @code{ROWS * COLUMNS}.
+
+For 2 inputs, a default grid of @code{2x1} (equivalent to
+@code{layout=0_0|w0_0}) is set. In all other cases, a layout or a grid must be
+set by the user. Either @code{grid} or @code{layout} can be specified at a time.
+Specifying both will result in an error.
+
+@item shortest
+If set to 1, force the output to terminate when the shortest input
+terminates. Default value is 0.
+
+@item fill
+If set to a valid color, all unused pixels will be filled with that color.
+By default fill is set to none, so it is disabled.
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Display 4 inputs into 2x2 grid.
+
+Layout:
+@example
+input1(0, 0)  | input3(w0, 0)
+input2(0, h0) | input4(w0, h0)
+@end example
+
+@example
+xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0
+@end example
+
+Note that if inputs are of different sizes, gaps or overlaps may occur.
+
+@item
+Display 4 inputs into 1x4 grid.
+
+Layout:
+@example
+input1(0, 0)
+input2(0, h0)
+input3(0, h0+h1)
+input4(0, h0+h1+h2)
+@end example
+
+@example
+xstack=inputs=4:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2
+@end example
+
+Note that if inputs are of different widths, unused space will appear.
+
+@item
+Display 9 inputs into 3x3 grid.
+
+Layout:
+@example
+input1(0, 0)       | input4(w0, 0)      | input7(w0+w3, 0)
+input2(0, h0)      | input5(w0, h0)     | input8(w0+w3, h0)
+input3(0, h0+h1)   | input6(w0, h0+h1)  | input9(w0+w3, h0+h1)
+@end example
+
+@example
+xstack=inputs=9:layout=0_0|0_h0|0_h0+h1|w0_0|w0_h0|w0_h0+h1|w0+w3_0|w0+w3_h0|w0+w3_h0+h1
+@end example
+
+Note that if inputs are of different sizes, gaps or overlaps may occur.
+
+@item
+Display 16 inputs into 4x4 grid.
+
+Layout:
+@example
+input1(0, 0)       | input5(w0, 0)       | input9 (w0+w4, 0)       | input13(w0+w4+w8, 0)
+input2(0, h0)      | input6(w0, h0)      | input10(w0+w4, h0)      | input14(w0+w4+w8, h0)
+input3(0, h0+h1)   | input7(w0, h0+h1)   | input11(w0+w4, h0+h1)   | input15(w0+w4+w8, h0+h1)
+input4(0, h0+h1+h2)| input8(w0, h0+h1+h2)| input12(w0+w4, h0+h1+h2)| input16(w0+w4+w8, h0+h1+h2)
+@end example
+
+@example
+xstack=inputs=16:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2|w0_0|w0_h0|w0_h0+h1|w0_h0+h1+h2|w0+w4_0|
+w0+w4_h0|w0+w4_h0+h1|w0+w4_h0+h1+h2|w0+w4+w8_0|w0+w4+w8_h0|w0+w4+w8_h0+h1|w0+w4+w8_h0+h1+h2
+@end example
+
+Note that if inputs are of different sizes, gaps or overlaps may occur.
+
+@end itemize
+
+@anchor{yadif}
+@section yadif
+
+Deinterlace the input video ("yadif" means "yet another deinterlacing
+filter").
+
+It accepts the following parameters:
+
+
+@table @option
+
+@item mode
+The interlacing mode to adopt. It accepts one of the following values:
+
+@table @option
+@item 0, send_frame
+Output one frame for each frame.
+@item 1, send_field
+Output one frame for each field.
+@item 2, send_frame_nospatial
+Like @code{send_frame}, but it skips the spatial interlacing check.
+@item 3, send_field_nospatial
+Like @code{send_field}, but it skips the spatial interlacing check.
+@end table
+
+The default value is @code{send_frame}.
+
+@item parity
+The picture field parity assumed for the input interlaced video. It accepts one
+of the following values:
+
+@table @option
+@item 0, tff
+Assume the top field is first.
+@item 1, bff
+Assume the bottom field is first.
+@item -1, auto
+Enable automatic detection of field parity.
+@end table
+
+The default value is @code{auto}.
+If the interlacing is unknown or the decoder does not export this information,
+top field first will be assumed.
+
+@item deint
+Specify which frames to deinterlace. Accepts one of the following
+values:
+
+@table @option
+@item 0, all
+Deinterlace all frames.
+@item 1, interlaced
+Only deinterlace frames marked as interlaced.
+@end table
+
+The default value is @code{all}.
+@end table
+
+@section yaepblur
+
+Apply a blur filter while preserving edges ("yaepblur" means "yet another edge preserving blur filter").
+The algorithm is described in
+"J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
+
+It accepts the following parameters:
+
+@table @option
+@item radius, r
+Set the window radius. Default value is 3.
+
+@item planes, p
+Set which planes to filter. Default is only the first plane.
+
+@item sigma, s
+Set blur strength. Default value is 128.
+@end table
+
+@subsection Commands
+This filter supports the same @ref{commands} as options.
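+
+@subsection Examples
+
+@itemize
+@item
+An illustrative sketch (the values are hypothetical, not recommendations):
+smooth flat areas of the first plane more aggressively while keeping edges,
+using a larger window radius and blur strength:
+@example
+yaepblur=radius=5:sigma=256
+@end example
+@end itemize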
+
+@section zoompan
+
+Apply Zoom & Pan effect.
+
+This filter accepts the following options:
+
+@table @option
+@item zoom, z
+Set the zoom expression. Range is 1-10. Default is 1.
+
+@item x
+@item y
+Set the x and y expression. Default is 0.
+
+@item d
+Set the duration expression in number of frames.
+This sets how many output frames the effect will last for a
+single input image. Default is 90.
+
+@item s
+Set the output image size, default is 'hd720'.
+
+@item fps
+Set the output frame rate, default is '25'.
+@end table
+
+Each expression can contain the following constants:
+
+@table @option
+@item in_w, iw
+Input width.
+
+@item in_h, ih
+Input height.
+
+@item out_w, ow
+Output width.
+
+@item out_h, oh
+Output height.
+
+@item in
+Input frame count.
+
+@item on
+Output frame count.
+
+@item in_time, it
+The input timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
+
+@item out_time, time, ot
+The output timestamp expressed in seconds.
+
+@item x
+@item y
+Last calculated 'x' and 'y' position from 'x' and 'y' expression
+for current input frame.
+
+@item px
+@item py
+'x' and 'y' of the last output frame of the previous input frame, or 0 when
+there was no such frame yet (first input frame).
+
+@item zoom
+Last calculated zoom from 'z' expression for current input frame.
+
+@item pzoom
+Last calculated zoom of last output frame of previous input frame.
+
+@item duration
+Number of output frames for current input frame. Calculated from 'd' expression
+for each input frame.
+
+@item pduration
+Number of output frames created for the previous input frame.
+
+@item a
+Rational number: input width / input height.
+
+@item sar
+Sample aspect ratio.
+
+@item dar
+Display aspect ratio.
+
+@end table
+
+@subsection Examples
+
+@itemize
+@item
+Zoom in up to 1.5x and pan at same time to some spot near center of picture:
+@example
+zoompan=z='min(zoom+0.0015,1.5)':d=700:x='if(gte(zoom,1.5),x,x+1/a)':y='if(gte(zoom,1.5),y,y+1)':s=640x360
+@end example
+
+@item
+Zoom in up to 1.5x and pan always at center of picture:
+@example
+zoompan=z='min(zoom+0.0015,1.5)':d=700:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+@end example
+
+@item
+Same as above but without pausing:
+@example
+zoompan=z='min(max(zoom,pzoom)+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+@end example
+
+@item
+Zoom in 2x into center of picture only for the first second of the input video:
+@example
+zoompan=z='if(between(in_time,0,1),2,1)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+@end example
+
+@end itemize
+
+@anchor{zscale}
+@section zscale
+Scale (resize) the input video, using the z.lib library:
+@url{https://github.com/sekrit-twc/zimg}. To enable compilation of this
+filter, you need to configure FFmpeg with @code{--enable-libzimg}.
+
+The zscale filter forces the output display aspect ratio to be the same
+as the input, by changing the output sample aspect ratio.
+
+If the input image format is different from the format requested by
+the next filter, the zscale filter will convert the input to the
+requested format.
+
+@subsection Options
+The filter accepts the following options.
+
+@table @option
+@item width, w
+@item height, h
+Set the output video dimension expression. Default value is the input
+dimension.
+
+If the @var{width} or @var{w} value is 0, the input width is used for
+the output. If the @var{height} or @var{h} value is 0, the input height
+is used for the output.
+
+If one and only one of the values is -n with n >= 1, the zscale filter
+will use a value that maintains the aspect ratio of the input image,
+calculated from the other specified dimension. After that it will,
+however, make sure that the calculated dimension is divisible by n and
+adjust the value if necessary.
+
+If both values are -n with n >= 1, the behavior will be identical to
+both values being set to 0 as previously detailed.
+
+See below for the list of accepted constants for use in the dimension
+expression.
+
+@item size, s
+Set the video size. For the syntax of this option, check the
+@ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
+
+@item dither, d
+Set the dither type.
+
+Possible values are:
+@table @var
+@item none
+@item ordered
+@item random
+@item error_diffusion
+@end table
+
+Default is none.
+
+@item filter, f
+Set the resize filter type.
+
+Possible values are:
+@table @var
+@item point
+@item bilinear
+@item bicubic
+@item spline16
+@item spline36
+@item lanczos
+@end table
+
+Default is bilinear.
+
+@item range, r
+Set the color range.
+
+Possible values are:
+@table @var
+@item input
+@item limited
+@item full
+@end table
+
+Default is same as input.
+
+@item primaries, p
+Set the color primaries.
+
+Possible values are:
+@table @var
+@item input
+@item 709
+@item unspecified
+@item 170m
+@item 240m
+@item 2020
+@end table
+
+Default is same as input.
+
+@item transfer, t
+Set the transfer characteristics.
+
+Possible values are:
+@table @var
+@item input
+@item 709
+@item unspecified
+@item 601
+@item linear
+@item 2020_10
+@item 2020_12
+@item smpte2084
+@item iec61966-2-1
+@item arib-std-b67
+@end table
+
+Default is same as input.
+
+@item matrix, m
+Set the colorspace matrix.
+
+Possible values are:
+@table @var
+@item input
+@item 709
+@item unspecified
+@item 470bg
+@item 170m
+@item 2020_ncl
+@item 2020_cl
+@end table
+
+Default is same as input.
+
+@item rangein, rin
+Set the input color range.
+
+Possible values are:
+@table @var
+@item input
+@item limited
+@item full
+@end table
+
+Default is same as input.
+
+@item primariesin, pin
+Set the input color primaries.
+
+Possible values are:
+@table @var
+@item input
+@item 709
+@item unspecified
+@item 170m
+@item 240m
+@item 2020
+@end table
+
+Default is same as input.
 
-The filter accepts the following options:
+@item transferin, tin
+Set the input transfer characteristics.
 
-@table @option
-@item inputs
-Set number of inputs.
-Default is 3. Allowed range is from 3 to 255.
-If number of inputs is even number, than result will be mean value between two median values.
+Possible values are:
+@table @var
+@item input
+@item 709
+@item unspecified
+@item 601
+@item linear
+@item 2020_10
+@item 2020_12
+@end table
 
-@item planes
-Set which planes to filter. Default value is @code{15}, by which all planes are processed.
+Default is same as input.
 
-@item percentile
-Set median percentile. Default value is @code{0.5}.
-Default value of @code{0.5} will pick always median values, while @code{0} will pick
-minimum values, and @code{1} maximum values.
+@item matrixin, min
+Set the input colorspace matrix.
+
+Possible values are:
+@table @var
+@item input
+@item 709
+@item unspecified
+@item 470bg
+@item 170m
+@item 2020_ncl
+@item 2020_cl
 @end table
 
-@subsection Commands
+@item chromal, c
+Set the output chroma location.
 
-This filter supports all above options as @ref{commands}, excluding option @code{inputs}.
+Possible values are:
+@table @var
+@item input
+@item left
+@item center
+@item topleft
+@item top
+@item bottomleft
+@item bottom
+@end table
 
-@anchor{xpsnr}
-@section xpsnr
+@item chromalin, cin
+Set the input chroma location.
 
-Obtain the average (across all input frames) and minimum (across all color plane averages)
-eXtended Perceptually weighted peak Signal-to-Noise Ratio (XPSNR) between two input videos.
+Possible values are:
+@table @var
+@item input
+@item left
+@item center
+@item topleft
+@item top
+@item bottomleft
+@item bottom
+@end table
 
-The XPSNR is a low-complexity psychovisually motivated distortion measurement algorithm for
-assessing the difference between two video streams or images. This is especially useful for
-objectively quantifying the distortions caused by video and image codecs, as an alternative
-to a formal subjective test. The logarithmic XPSNR output values are in a similar range as
-those of traditional @ref{psnr} assessments but better reflect human impressions of visual
-coding quality. More details on the XPSNR measure, which essentially represents a blockwise
-weighted variant of the PSNR measure, can be found in the following freely available papers:
+@item npl
+Set the nominal peak luminance.
 
-@itemize
-@item
-C. R. Helmrich, M. Siekmann, S. Becker, S. Bosse, D. Marpe, and T. Wiegand, "XPSNR: A
-Low-Complexity Extension of the Perceptually Weighted Peak Signal-to-Noise Ratio for
-High-Resolution Video Quality Assessment," in Proc. IEEE Int. Conf. Acoustics, Speech,
-Sig. Process. (ICASSP), virt./online, May 2020. @url{www.ecodis.de/xpsnr.htm}
+@item param_a
+Parameter A for scaling filters. Parameter "b" for bicubic, and the number of
+filter taps for lanczos.
 
-@item
-C. R. Helmrich, S. Bosse, H. Schwarz, D. Marpe, and T. Wiegand, "A Study of the
-Extended Perceptually Weighted Peak Signal-to-Noise Ratio (XPSNR) for Video Compression
-with Different Resolutions and Bit Depths," ITU Journal: ICT Discoveries, vol. 3, no.
-1, pp. 65 - 72, May 2020. @url{http://handle.itu.int/11.1002/pub/8153d78b-en}
-@end itemize
+@item param_b
+Parameter B for scaling filters. Parameter "c" for bicubic.
+@end table
 
-When publishing the results of XPSNR assessments obtained using, e.g., this FFmpeg filter, a
-reference to the above papers as a means of documentation is strongly encouraged. The filter
-requires two input videos. The first input is considered a (usually not distorted) reference
-source and is passed unchanged to the output, whereas the second input is a (distorted) test
-signal. Except for the bit depth, these two video inputs must have the same pixel format. In
-addition, for best performance, both compared input videos should be in YCbCr color format.
+The values of the @option{w} and @option{h} options are expressions
+containing the following constants:
 
-The obtained overall XPSNR values mentioned above are printed through the logging system. In
-case of input with multiple color planes, we suggest reporting of the minimum XPSNR average.
+@table @var
+@item in_w
+@item in_h
+The input width and height
 
-The following parameter, which behaves like the one for the @ref{psnr} filter, is accepted:
+@item iw
+@item ih
+These are the same as @var{in_w} and @var{in_h}.
 
-@table @option
-@item stats_file, f
-If specified, the filter will use the named file to save the XPSNR value of each individual
-frame and color plane. When the file name equals "-", that data is sent to standard output.
-@end table
+@item out_w
+@item out_h
+The output (scaled) width and height
 
-This filter also supports the @ref{framesync} options.
+@item ow
+@item oh
+These are the same as @var{out_w} and @var{out_h}
 
-@subsection Examples
-@itemize
-@item
-XPSNR analysis of two 1080p HD videos, ref_source.yuv and test_video.yuv, both at 24 frames
-per second, with color format 4:2:0, bit depth 8, and output of a logfile named "xpsnr.log":
-@example
-ffmpeg -s 1920x1080 -framerate 24 -pix_fmt yuv420p -i ref_source.yuv -s 1920x1080 -framerate
-24 -pix_fmt yuv420p -i test_video.yuv -lavfi xpsnr="stats_file=xpsnr.log" -f null -
-@end example
+@item a
+The same as @var{iw} / @var{ih}
 
-@item
-XPSNR analysis of two 2160p UHD videos, ref_source.yuv with bit depth 8 and test_video.yuv
-with bit depth 10, both at 60 frames per second with color format 4:2:0, no logfile output:
-@example
-ffmpeg -s 3840x2160 -framerate 60 -pix_fmt yuv420p -i ref_source.yuv -s 3840x2160 -framerate
-60 -pix_fmt yuv420p10le -i test_video.yuv -lavfi xpsnr="stats_file=-" -f null -
-@end example
-@end itemize
+@item sar
+input sample aspect ratio
 
-@anchor{xstack}
-@section xstack
-Stack video inputs into custom layout.
+@item dar
+The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
 
-All streams must be of same pixel format.
+@item hsub
+@item vsub
+horizontal and vertical input chroma subsample values. For example for the
+pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
 
-The filter accepts the following options:
+@item ohsub
+@item ovsub
+horizontal and vertical output chroma subsample values. For example for the
+pixel format "yuv422p" @var{ohsub} is 2 and @var{ovsub} is 1.
+@end table
+
+@subsection Commands
 
+This filter supports the following commands:
 @table @option
-@item inputs
-Set number of input streams. Default is 2.
+@item width, w
+@item height, h
+Set the output video dimension expression.
+The command accepts the same syntax of the corresponding option.
 
-@item layout
-Specify layout of inputs.
-This option requires the desired layout configuration to be explicitly set by the user.
-This sets position of each video input in output. Each input
-is separated by '|'.
-The first number represents the column, and the second number represents the row.
-Numbers start at 0 and are separated by '_'. Optionally one can use wX and hX,
-where X is video input from which to take width or height.
-Multiple values can be used when separated by '+'. In such
-case values are summed together.
+If the specified expression is not valid, it is kept at its current
+value.
+@end table
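+
+@subsection Examples
+
+@itemize
+@item
+An illustrative sketch (parameter values are hypothetical): scale to 1080
+lines while preserving aspect ratio, tagging the output as BT.709 primaries,
+transfer and matrix, with a Lanczos resampler:
+@example
+zscale=w=-2:h=1080:p=709:t=709:m=709:f=lanczos
+@end example
+@end itemize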
 
-Note that if inputs are of different sizes gaps may appear, as not all of
-the output video frame will be filled. Similarly, videos can overlap each
-other if their position doesn't leave enough space for the full frame of
-adjoining videos.
+@c man end VIDEO FILTERS
 
-For 2 inputs, a default layout of @code{0_0|w0_0} (equivalent to
-@code{grid=2x1}) is set. In all other cases, a layout or a grid must be set by
-the user. Either @code{grid} or @code{layout} can be specified at a time.
-Specifying both will result in an error.
+@chapter CUDA Video Filters
+@c man begin CUDA Video Filters
 
-@item grid
-Specify a fixed size grid of inputs.
-This option is used to create a fixed size grid of the input streams. Set the
-grid size in the form @code{COLUMNSxROWS}. There must be @code{ROWS * COLUMNS}
-input streams and they will be arranged as a grid with @code{ROWS} rows and
-@code{COLUMNS} columns. When using this option, each input stream within a row
-must have the same height and all the rows must have the same width.
+To enable CUDA and/or NPP filters, please refer to the configuration guidelines for @ref{CUDA} and @ref{CUDA NPP} filters.
 
-If @code{grid} is set, then @code{inputs} option is ignored and is implicitly
-set to @code{ROWS * COLUMNS}.
+Running CUDA filters requires you to initialize a hardware device and to pass that device to all filters in any filter graph.
+@table @option
 
-For 2 inputs, a default grid of @code{2x1} (equivalent to
-@code{layout=0_0|w0_0}) is set. In all other cases, a layout or a grid must be
-set by the user. Either @code{grid} or @code{layout} can be specified at a time.
-Specifying both will result in an error.
+@item -init_hw_device cuda[=@var{name}][:@var{device}[,@var{key=value}...]]
+Initialise a new hardware device of type @var{cuda} called @var{name}, using the
+given device parameters.
 
-@item shortest
-If set to 1, force the output to terminate when the shortest input
-terminates. Default value is 0.
+@item -filter_hw_device @var{name}
+Pass the hardware device called @var{name} to all filters in any filter graph.
 
-@item fill
-If set to valid color, all unused pixels will be filled with that color.
-By default fill is set to none, so it is disabled.
 @end table
 
-@subsection Examples
+For more detailed information, see @url{https://www.ffmpeg.org/ffmpeg.html#Advanced-Video-options}.
 
 @itemize
 @item
-Display 4 inputs into 2x2 grid.
-
-Layout:
+Example of initializing the second CUDA device on the system and running the scale_cuda and bilateral_cuda filters:
 @example
-input1(0, 0)  | input3(w0, 0)
-input2(0, h0) | input4(w0, h0)
+./ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 -init_hw_device cuda:1 -filter_complex \
+"[0:v]scale_cuda=format=yuv444p[scaled_video];[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
+-an -sn -c:v h264_nvenc -cq 20 out.mp4
 @end example
+@end itemize
 
-@example
-xstack=inputs=4:layout=0_0|0_h0|w0_0|w0_h0
-@end example
+Since CUDA filters operate exclusively on GPU memory, frame data must sometimes
+be uploaded (@ref{hwupload}) to hardware surfaces associated with the
+appropriate CUDA device before processing, and downloaded (@ref{hwdownload})
+back to normal memory afterward, if required. Whether @ref{hwupload} or
+@ref{hwdownload} is necessary depends on the specific workflow:
 
-Note that if inputs are of different sizes, gaps or overlaps may occur.
+@itemize
+@item If the input frames are already in GPU memory (e.g., when using
+@code{-hwaccel cuda} or @code{-hwaccel_output_format cuda}), explicit use of
+@ref{hwupload} is not needed, as the data is already in the appropriate memory
+space.
+@item If the input frames are in CPU memory (e.g., software-decoded frames or
+frames processed by CPU-based filters), it is necessary to use @ref{hwupload}
+to transfer the data to GPU memory for CUDA processing.
+@item If the output of the CUDA filters needs to be further processed by
+software-based filters or saved in a format not supported by GPU-based
+encoders, @ref{hwdownload} is required to transfer the data back to CPU memory.
+@end itemize
+Note that @ref{hwupload} uploads data to a surface with the same layout as the
+software frame, so it may be necessary to add a @ref{format} filter immediately
+before @ref{hwupload} to ensure the input is in the correct format. Similarly,
+@ref{hwdownload} may not support all output formats, so an additional
+@ref{format} filter may need to be inserted immediately after @ref{hwdownload}
+in the filter graph to ensure compatibility.
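+
+For example, a CPU-decoded stream can be processed on the GPU and brought back
+to system memory like this (an illustrative sketch; the scale_cuda step stands
+in for any chain of CUDA filters):
+
+@example
+ffmpeg -init_hw_device cuda=cu -filter_hw_device cu -i input.mp4 \
+-vf "format=yuv420p,hwupload,scale_cuda=1280:720,hwdownload,format=yuv420p" \
+output.mp4
+@end example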
 
-@item
-Display 4 inputs into 1x4 grid.
+@anchor{CUDA}
+@section CUDA
+Below is a description of the currently available Nvidia CUDA video filters.
 
-Layout:
-@example
-input1(0, 0)
-input2(0, h0)
-input3(0, h0+h1)
-input4(0, h0+h1+h2)
-@end example
+Prerequisites:
+@itemize
+@item Install Nvidia CUDA Toolkit
+@end itemize
 
-@example
-xstack=inputs=4:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2
-@end example
+Note: If FFmpeg detects the Nvidia CUDA Toolkit during configuration, it will
+enable CUDA filters automatically without requiring any additional flags. If
+you want to explicitly enable them, use the following options:
 
-Note that if inputs are of different widths, unused space will appear.
+@itemize
+@item Configure FFmpeg with @code{--enable-cuda-nvcc --enable-nonfree}.
+@item Configure FFmpeg with @code{--enable-cuda-llvm}. Additional requirement:
+the @code{llvm} library must be installed.
+@end itemize
 
-@item
-Display 9 inputs into 3x3 grid.
+@subsection bilateral_cuda
+CUDA accelerated bilateral filter, an edge preserving filter.
+This filter is mathematically accurate thanks to the use of GPU acceleration.
+For best output quality, use one to one chroma subsampling, i.e. yuv444p format.
 
-Layout:
-@example
-input1(0, 0)       | input4(w0, 0)      | input7(w0+w3, 0)
-input2(0, h0)      | input5(w0, h0)     | input8(w0+w3, h0)
-input3(0, h0+h1)   | input6(w0, h0+h1)  | input9(w0+w3, h0+h1)
-@end example
+The filter accepts the following options:
+@table @option
+@item sigmaS
+Set the sigma of the Gaussian function used to calculate the spatial weight, also called sigma space.
+Allowed range is 0.1 to 512. Default is 0.1.
 
-@example
-xstack=inputs=9:layout=0_0|0_h0|0_h0+h1|w0_0|w0_h0|w0_h0+h1|w0+w3_0|w0+w3_h0|w0+w3_h0+h1
-@end example
+@item sigmaR
+Set the sigma of the Gaussian function used to calculate the color range weight, also called sigma color.
+Allowed range is 0.1 to 512. Default is 0.1.
 
-Note that if inputs are of different sizes, gaps or overlaps may occur.
+@item window_size
+Set the window size of the bilateral function, which determines the number of neighbours to loop over.
+If the number entered is even, one will be added automatically.
+Allowed range is 1 to 255. Default is 1.
+@end table
+@subsubsection Examples
 
+@itemize
 @item
-Display 16 inputs into 4x4 grid.
-
-Layout:
-@example
-input1(0, 0)       | input5(w0, 0)       | input9 (w0+w4, 0)       | input13(w0+w4+w8, 0)
-input2(0, h0)      | input6(w0, h0)      | input10(w0+w4, h0)      | input14(w0+w4+w8, h0)
-input3(0, h0+h1)   | input7(w0, h0+h1)   | input11(w0+w4, h0+h1)   | input15(w0+w4+w8, h0+h1)
-input4(0, h0+h1+h2)| input8(w0, h0+h1+h2)| input12(w0+w4, h0+h1+h2)| input16(w0+w4+w8, h0+h1+h2)
-@end example
+Apply the bilateral filter on a video.
 
 @example
-xstack=inputs=16:layout=0_0|0_h0|0_h0+h1|0_h0+h1+h2|w0_0|w0_h0|w0_h0+h1|w0_h0+h1+h2|w0+w4_0|
-w0+w4_h0|w0+w4_h0+h1|w0+w4_h0+h1+h2|w0+w4+w8_0|w0+w4+w8_h0|w0+w4+w8_h0+h1|w0+w4+w8_h0+h1+h2
+./ffmpeg -v verbose \
+-hwaccel cuda -hwaccel_output_format cuda -i input.mp4  \
+-init_hw_device cuda \
+-filter_complex \
+" \
+[0:v]scale_cuda=format=yuv444p[scaled_video];
+[scaled_video]bilateral_cuda=window_size=9:sigmaS=3.0:sigmaR=50.0" \
+-an -sn -c:v h264_nvenc -cq 20 out.mp4
 @end example
 
-Note that if inputs are of different sizes, gaps or overlaps may occur.
-
 @end itemize
 
-@anchor{yadif}
-@section yadif
+@subsection bwdif_cuda
 
-Deinterlace the input video ("yadif" means "yet another deinterlacing
-filter").
+Deinterlace the input video using the @ref{bwdif} algorithm, but implemented
+in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
+and/or nvenc.
 
 It accepts the following parameters:
 
-
 @table @option
-
 @item mode
 The interlacing mode to adopt. It accepts one of the following values:
 
@@ -26654,13 +26642,9 @@ The interlacing mode to adopt. It accepts one of the following values:
 Output one frame for each frame.
 @item 1, send_field
 Output one frame for each field.
-@item 2, send_frame_nospatial
-Like @code{send_frame}, but it skips the spatial interlacing check.
-@item 3, send_field_nospatial
-Like @code{send_field}, but it skips the spatial interlacing check.
 @end table
 
-The default value is @code{send_frame}.
+The default value is @code{send_field}.
 
 @item parity
 The picture field parity assumed for the input interlaced video. It accepts one
@@ -26693,428 +26677,413 @@ Only deinterlace frames marked as interlaced.
 The default value is @code{all}.
 @end table
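+
+@subsubsection Examples
+
+@itemize
+@item
+A minimal sketch of a fully GPU-accelerated deinterlacing pipeline (file names
+are illustrative):
+@example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i interlaced.mp4 \
+-vf bwdif_cuda=mode=send_field -c:v h264_nvenc -cq 20 out.mp4
+@end example
+@end itemize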
 
-@section yadif_cuda
+@subsection chromakey_cuda
+CUDA accelerated YUV colorspace color/chroma keying.
 
-Deinterlace the input video using the @ref{yadif} algorithm, but implemented
-in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
-and/or nvenc.
+This filter works like the normal chromakey filter but operates on CUDA frames.
+For more details and parameters, see @ref{chromakey}.
 
-It accepts the following parameters:
+@subsubsection Examples
 
+@itemize
+@item
+Make all the green pixels in the input video transparent and use it as an overlay for another video:
 
-@table @option
+@example
+./ffmpeg \
+    -hwaccel cuda -hwaccel_output_format cuda -i input_green.mp4  \
+    -hwaccel cuda -hwaccel_output_format cuda -i base_video.mp4 \
+    -init_hw_device cuda \
+    -filter_complex \
+    " \
+        [0:v]chromakey_cuda=0x25302D:0.1:0.12:1[overlay_video]; \
+        [1:v]scale_cuda=format=yuv420p[base]; \
+        [base][overlay_video]overlay_cuda" \
+    -an -sn -c:v h264_nvenc -cq 20 output.mp4
+@end example
 
-@item mode
-The interlacing mode to adopt. It accepts one of the following values:
+@item
+Process two software sources, explicitly uploading the frames:
 
-@table @option
-@item 0, send_frame
-Output one frame for each frame.
-@item 1, send_field
-Output one frame for each field.
-@item 2, send_frame_nospatial
-Like @code{send_frame}, but it skips the spatial interlacing check.
-@item 3, send_field_nospatial
-Like @code{send_field}, but it skips the spatial interlacing check.
-@end table
+@example
+./ffmpeg -init_hw_device cuda=cuda -filter_hw_device cuda \
+    -f lavfi -i color=size=800x600:color=white,format=yuv420p \
+    -f lavfi -i yuvtestsrc=size=200x200,format=yuv420p \
+    -filter_complex \
+    " \
+        [0]hwupload[under]; \
+        [1]hwupload,chromakey_cuda=green:0.1:0.12[over]; \
+        [under][over]overlay_cuda" \
+    -c:v hevc_nvenc -cq 18 -preset slow output.mp4
+@end example
 
-The default value is @code{send_frame}.
+@end itemize
 
-@item parity
-The picture field parity assumed for the input interlaced video. It accepts one
-of the following values:
+@subsection colorspace_cuda
 
-@table @option
-@item 0, tff
-Assume the top field is first.
-@item 1, bff
-Assume the bottom field is first.
-@item -1, auto
-Enable automatic detection of field parity.
-@end table
+CUDA accelerated implementation of the colorspace filter.
 
-The default value is @code{auto}.
-If the interlacing is unknown or the decoder does not export this information,
-top field first will be assumed.
+It is by no means feature complete compared to the software colorspace filter,
+and at the current time only supports color range conversion between jpeg/full
+and mpeg/limited range.
 
-@item deint
-Specify which frames to deinterlace. Accepts one of the following
-values:
+The filter accepts the following options:
 
 @table @option
-@item 0, all
-Deinterlace all frames.
-@item 1, interlaced
-Only deinterlace frames marked as interlaced.
+@item range
+Specify output color range.
+
+The accepted values are:
+@table @samp
+@item tv
+TV (restricted) range
+
+@item mpeg
+MPEG (restricted) range
+
+@item pc
+PC (full) range
+
+@item jpeg
+JPEG (full) range
+
 @end table
 
-The default value is @code{all}.
 @end table
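+
+@subsubsection Examples
+
+@itemize
+@item
+An illustrative sketch: convert a full-range stream to limited (TV) range
+entirely on the GPU:
+@example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
+-vf colorspace_cuda=range=tv -c:v h264_nvenc -cq 20 out.mp4
+@end example
+@end itemize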
 
-@section yaepblur
+@anchor{overlay_cuda}
+@subsection overlay_cuda
 
-Apply blur filter while preserving edges ("yaepblur" means "yet another edge preserving blur filter").
-The algorithm is described in
-"J. S. Lee, Digital image enhancement and noise filtering by use of local statistics, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-2, 1980."
+Overlay one video on top of another.
+
+This is the CUDA variant of the @ref{overlay} filter.
+It only accepts CUDA frames. The underlying input pixel formats have to match.
+
+It takes two inputs and has one output. The first input is the "main"
+video on which the second input is overlaid.
 
 It accepts the following parameters:
 
 @table @option
-@item radius, r
-Set the window radius. Default value is 3.
+@item x
+@item y
+Set expressions for the x and y coordinates of the overlaid video
+on the main video.
 
-@item planes, p
-Set which planes to filter. Default is only the first plane.
+They can contain the following parameters:
 
-@item sigma, s
-Set blur strength. Default value is 128.
-@end table
+@table @option
 
-@subsection Commands
-This filter supports same @ref{commands} as options.
+@item main_w, W
+@item main_h, H
+The main input width and height.
 
-@section zoompan
+@item overlay_w, w
+@item overlay_h, h
+The overlay input width and height.
 
-Apply Zoom & Pan effect.
+@item x
+@item y
+The computed values for @var{x} and @var{y}. They are evaluated for
+each new frame.
 
-This filter accepts the following options:
+@item n
+The ordinal index of the main input frame, starting from 0.
+
+@item pos
+The byte offset position in the file of the main input frame, NAN if unknown.
+Deprecated, do not use.
+
+@item t
+The timestamp of the main input frame, expressed in seconds, NAN if unknown.
+
+@end table
+
+Default value is "0" for both expressions.
 
+@item eval
+Set when the expressions for @option{x} and @option{y} are evaluated.
+
+It accepts the following values:
 @table @option
-@item zoom, z
-Set the zoom expression. Range is 1-10. Default is 1.
+@item init
+Evaluate expressions once during filter initialization or
+when a command is processed.
 
-@item x
-@item y
-Set the x and y expression. Default is 0.
+@item frame
+Evaluate expressions for each incoming frame.
+@end table
 
-@item d
-Set the duration expression in number of frames.
-This sets for how many number of frames effect will last for
-single input image. Default is 90.
+Default value is @option{frame}.
 
-@item s
-Set the output image size, default is 'hd720'.
+@item eof_action
+See @ref{framesync}.
+
+@item shortest
+See @ref{framesync}.
+
+@item repeatlast
+See @ref{framesync}.
 
-@item fps
-Set the output frame rate, default is '25'.
 @end table
 
-Each expression can contain the following constants:
+This filter also supports the @ref{framesync} options.
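+
+@subsubsection Examples
+
+@itemize
+@item
+A minimal sketch: overlay a picture-in-picture stream ten pixels from the
+top-right corner of the main video, with both inputs decoded on the GPU
+(file names are illustrative):
+@example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i main.mp4 \
+-hwaccel cuda -hwaccel_output_format cuda -i pip.mp4 \
+-filter_complex "[1:v]scale_cuda=iw/4:ih/4[pip];[0:v][pip]overlay_cuda=x=W-w-10:y=10" \
+-c:v h264_nvenc -cq 20 out.mp4
+@end example
+@end itemize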
+
+@anchor{scale_cuda}
+@subsection scale_cuda
 
+Scale (resize) and convert (pixel format) the input video, using accelerated CUDA kernels.
+Setting the output width and height works in the same way as for the @ref{scale} filter.
+
+The filter accepts the following options:
 @table @option
-@item in_w, iw
-Input width.
+@item w
+@item h
+Set the output video dimension expression. Default value is the input dimension.
 
-@item in_h, ih
-Input height.
+Allows for the same expressions as the @ref{scale} filter.
 
-@item out_w, ow
-Output width.
+@item interp_algo
+Sets the algorithm used for scaling:
 
-@item out_h, oh
-Output height.
+@table @var
+@item nearest
+Nearest neighbour
 
-@item in
-Input frame count.
+Used by default if input parameters match the desired output.
 
-@item on
-Output frame count.
+@item bilinear
+Bilinear
 
-@item in_time, it
-The input timestamp expressed in seconds. It's NAN if the input timestamp is unknown.
+@item bicubic
+Bicubic
 
-@item out_time, time, ot
-The output timestamp expressed in seconds.
+This is the default.
 
-@item x
-@item y
-Last calculated 'x' and 'y' position from 'x' and 'y' expression
-for current input frame.
+@item lanczos
+Lanczos
 
-@item px
-@item py
-'x' and 'y' of last output frame of previous input frame or 0 when there was
-not yet such frame (first input frame).
+@end table
 
-@item zoom
-Last calculated zoom from 'z' expression for current input frame.
+@item format
+Controls the output pixel format. By default, or if none is specified, the input
+pixel format is used.
 
-@item pzoom
-Last calculated zoom of last output frame of previous input frame.
+The filter does not support converting between YUV and RGB pixel formats.
 
-@item duration
-Number of output frames for current input frame. Calculated from 'd' expression
-for each input frame.
+@item passthrough
+If set to 0, every frame is processed, even if no conversion is necessary.
+This mode can be useful to use the filter as a buffer for a downstream
+frame-consumer that exhausts the limited decoder frame pool.
 
-@item pduration
-number of output frames created for previous input frame
+If set to 1, frames are passed through as-is if they match the desired output
+parameters. This is the default behaviour.
 
-@item a
-Rational number: input width / input height
+@item param
+Algorithm-specific parameter.
 
-@item sar
-sample aspect ratio
+Affects the curves of the bicubic algorithm.
 
-@item dar
-display aspect ratio
+@item force_original_aspect_ratio
+@item force_divisible_by
+Work the same as the identical @ref{scale} filter options.
+
+@item reset_sar
+Works the same as the identical @ref{scale} filter option.
 
 @end table
 
-@subsection Examples
+@subsubsection Examples
 
 @itemize
 @item
-Zoom in up to 1.5x and pan at same time to some spot near center of picture:
-@example
-zoompan=z='min(zoom+0.0015,1.5)':d=700:x='if(gte(zoom,1.5),x,x+1/a)':y='if(gte(zoom,1.5),y,y+1)':s=640x360
-@end example
-
-@item
-Zoom in up to 1.5x and pan always at center of picture:
+Scale input to 720p, keeping aspect ratio and ensuring the output is yuv420p.
 @example
-zoompan=z='min(zoom+0.0015,1.5)':d=700:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+scale_cuda=-2:720:format=yuv420p
 @end example
 
 @item
-Same as above but without pausing:
+Upscale to 4K using nearest neighbour algorithm.
 @example
-zoompan=z='min(max(zoom,pzoom)+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+scale_cuda=4096:2160:interp_algo=nearest
 @end example
 
 @item
-Zoom in 2x into center of picture only for the first second of the input video:
+Don't do any conversion or scaling, but copy all input frames into newly
+allocated ones. This can be useful to deal with a filter and encode chain
+that otherwise exhausts the decoder's frame pool.
 @example
-zoompan=z='if(between(in_time,0,1),2,1)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'
+scale_cuda=passthrough=0
 @end example
-
 @end itemize
 
-@anchor{zscale}
-@section zscale
-Scale (resize) the input video, using the z.lib library:
-@url{https://github.com/sekrit-twc/zimg}. To enable compilation of this
-filter, you need to configure FFmpeg with @code{--enable-libzimg}.
+@subsection yadif_cuda
 
-The zscale filter forces the output display aspect ratio to be the same
-as the input, by changing the output sample aspect ratio.
+Deinterlace the input video using the @ref{yadif} algorithm, but implemented
+in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec
+and/or nvenc.
 
-If the input image format is different from the format requested by
-the next filter, the zscale filter will convert the input to the
-requested format.
+It accepts the following parameters:
 
-@subsection Options
-The filter accepts the following options.
 
 @table @option
-@item width, w
-@item height, h
-Set the output video dimension expression. Default value is the input
-dimension.
-
-If the @var{width} or @var{w} value is 0, the input width is used for
-the output. If the @var{height} or @var{h} value is 0, the input height
-is used for the output.
-
-If one and only one of the values is -n with n >= 1, the zscale filter
-will use a value that maintains the aspect ratio of the input image,
-calculated from the other specified dimension. After that it will,
-however, make sure that the calculated dimension is divisible by n and
-adjust the value if necessary.
 
-If both values are -n with n >= 1, the behavior will be identical to
-both values being set to 0 as previously detailed.
+@item mode
+The interlacing mode to adopt. It accepts one of the following values:
 
-See below for the list of accepted constants for use in the dimension
-expression.
+@table @option
+@item 0, send_frame
+Output one frame for each frame.
+@item 1, send_field
+Output one frame for each field.
+@item 2, send_frame_nospatial
+Like @code{send_frame}, but it skips the spatial interlacing check.
+@item 3, send_field_nospatial
+Like @code{send_field}, but it skips the spatial interlacing check.
+@end table
 
-@item size, s
-Set the video size. For the syntax of this option, check the
-@ref{video size syntax,,"Video size" section in the ffmpeg-utils manual,ffmpeg-utils}.
+The default value is @code{send_frame}.
 
-@item dither, d
-Set the dither type.
+@item parity
+The picture field parity assumed for the input interlaced video. It accepts one
+of the following values:
 
-Possible values are:
-@table @var
-@item none
-@item ordered
-@item random
-@item error_diffusion
+@table @option
+@item 0, tff
+Assume the top field is first.
+@item 1, bff
+Assume the bottom field is first.
+@item -1, auto
+Enable automatic detection of field parity.
 @end table
 
-Default is none.
+The default value is @code{auto}.
+If the interlacing is unknown or the decoder does not export this information,
+top field first will be assumed.
 
-@item filter, f
-Set the resize filter type.
+@item deint
+Specify which frames to deinterlace. Accepts one of the following
+values:
 
-Possible values are:
-@table @var
-@item point
-@item bilinear
-@item bicubic
-@item spline16
-@item spline36
-@item lanczos
+@table @option
+@item 0, all
+Deinterlace all frames.
+@item 1, interlaced
+Only deinterlace frames marked as interlaced.
 @end table
 
-Default is bilinear.
+The default value is @code{all}.
+@end table
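+
+@subsubsection Examples
+
+@itemize
+@item
+A minimal GPU-only deinterlacing pipeline might look like the following
+sketch; the input file and the @code{h264_nvenc} encoder are assumptions
+and depend on your build:
+@example
+ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mp4 \
+-vf yadif_cuda=mode=send_field:deint=interlaced -c:v h264_nvenc output.mp4
+@end example
+@end itemize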
 
-@item range, r
-Set the color range.
+@anchor{CUDA NPP}
+@section CUDA NPP
+Below is a description of the currently available NVIDIA Performance Primitives (libnpp) video filters.
 
-Possible values are:
-@table @var
-@item input
-@item limited
-@item full
-@end table
+Prerequisites:
+@itemize
+@item Install Nvidia CUDA Toolkit
+@item Install libnpp
+@end itemize
 
-Default is same as input.
+To enable CUDA NPP filters:
 
-@item primaries, p
-Set the color primaries.
+@itemize
+@item Configure FFmpeg with @code{--enable-nonfree --enable-libnpp}.
+@end itemize
 
-Possible values are:
-@table @var
-@item input
-@item 709
-@item unspecified
-@item 170m
-@item 240m
-@item 2020
-@end table
 
-Default is same as input.
+@anchor{scale_npp}
+@subsection scale_npp
 
-@item transfer, t
-Set the transfer characteristics.
+Use the NVIDIA Performance Primitives (libnpp) to perform scaling and/or pixel
+format conversion on CUDA video frames. Setting the output width and height
+works in the same way as for the @var{scale} filter.
 
-Possible values are:
-@table @var
-@item input
-@item 709
-@item unspecified
-@item 601
-@item linear
-@item 2020_10
-@item 2020_12
-@item smpte2084
-@item iec61966-2-1
-@item arib-std-b67
-@end table
+The following additional options are accepted:
+@table @option
+@item format
+The pixel format of the output CUDA frames. If set to the string "same" (the
+default), the input format will be kept. Note that automatic format negotiation
+and conversion is not yet supported for hardware frames.
 
-Default is same as input.
+@item interp_algo
+The interpolation algorithm used for resizing. One of the following:
+@table @option
+@item nn
+Nearest neighbour.
 
-@item matrix, m
-Set the colorspace matrix.
+@item linear
+@item cubic
+@item cubic2p_bspline
+2-parameter cubic (B=1, C=0)
 
-Possible value are:
-@table @var
-@item input
-@item 709
-@item unspecified
-@item 470bg
-@item 170m
-@item 2020_ncl
-@item 2020_cl
-@end table
+@item cubic2p_catmullrom
+2-parameter cubic (B=0, C=1/2)
 
-Default is same as input.
+@item cubic2p_b05c03
+2-parameter cubic (B=1/2, C=3/10)
 
-@item rangein, rin
-Set the input color range.
+@item super
+Supersampling
 
-Possible values are:
-@table @var
-@item input
-@item limited
-@item full
+@item lanczos
 @end table
 
-Default is same as input.
-
-@item primariesin, pin
-Set the input color primaries.
+@item force_original_aspect_ratio
+Enable decreasing or increasing output video width or height if necessary to
+keep the original aspect ratio. Possible values:
 
-Possible values are:
-@table @var
-@item input
-@item 709
-@item unspecified
-@item 170m
-@item 240m
-@item 2020
-@end table
+@table @samp
+@item disable
+Scale the video as specified and disable this feature.
 
-Default is same as input.
+@item decrease
+The output video dimensions will automatically be decreased if needed.
 
-@item transferin, tin
-Set the input transfer characteristics.
+@item increase
+The output video dimensions will automatically be increased if needed.
 
-Possible values are:
-@table @var
-@item input
-@item 709
-@item unspecified
-@item 601
-@item linear
-@item 2020_10
-@item 2020_12
 @end table
 
-Default is same as input.
+This option is useful when you know a specific device's maximum allowed
+resolution and want to limit the output video to it while retaining the
+aspect ratio. For example, if device A allows 1280x720 playback and your
+video is 1920x800, setting this option to decrease and specifying 1280x720
+on the command line makes the output 1280x533.
 
-@item matrixin, min
-Set the input colorspace matrix.
+Please note that this is different from specifying -1 for @option{w}
+or @option{h}; you still need to specify the output resolution for this
+option to take effect.
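+
+For instance, the device-limit case above can be sketched as follows (the
+target size is an assumption for illustration):
+@example
+scale_npp=w=1280:h=720:force_original_aspect_ratio=decrease
+@end example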
 
-Possible value are:
-@table @var
-@item input
-@item 709
-@item unspecified
-@item 470bg
-@item 170m
-@item 2020_ncl
-@item 2020_cl
-@end table
+@item force_divisible_by
+Ensures that both the output dimensions, width and height, are divisible by the
+given integer when used together with @option{force_original_aspect_ratio}. This
+works similarly to using @code{-n} in the @option{w} and @option{h} options.
 
-@item chromal, c
-Set the output chroma location.
+This option respects the value set for @option{force_original_aspect_ratio},
+increasing or decreasing the resolution accordingly. The video's aspect ratio
+may be slightly modified.
 
-Possible values are:
-@table @var
-@item input
-@item left
-@item center
-@item topleft
-@item top
-@item bottomleft
-@item bottom
-@end table
+This option can be handy if you need to have a video fit within or exceed
+a defined resolution using @option{force_original_aspect_ratio} but also have
+encoder restrictions on width or height divisibility.
 
-@item chromalin, cin
-Set the input chroma location.
+@item reset_sar
+Works the same as the identical @ref{scale} filter option.
 
-Possible values are:
-@table @var
-@item input
-@item left
-@item center
-@item topleft
-@item top
-@item bottomleft
-@item bottom
-@end table
+@item eval
+Specify when to evaluate the @var{width} and @var{height} expressions. It accepts the following values:
 
-@item npl
-Set the nominal peak luminance.
+@table @samp
+@item init
+Only evaluate expressions once during the filter initialization or when a command is processed.
 
-@item param_a
-Parameter A for scaling filters. Parameter "b" for bicubic, and the number of
-filter taps for lanczos.
+@item frame
+Evaluate expressions for each incoming frame.
+
+@end table
 
-@item param_b
-Parameter B for scaling filters. Parameter "c" for bicubic.
 @end table
 
 The values of the @option{w} and @option{h} options are expressions
@@ -27146,31 +27115,135 @@ input sample aspect ratio
 @item dar
 The input display aspect ratio. Calculated from @code{(iw / ih) * sar}.
 
-@item hsub
-@item vsub
-horizontal and vertical input chroma subsample values. For example for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+@item n
+The (sequential) number of the input frame, starting from 0.
+Only available with @code{eval=frame}.
 
-@item ohsub
-@item ovsub
-horizontal and vertical output chroma subsample values. For example for the
-pixel format "yuv422p" @var{hsub} is 2 and @var{vsub} is 1.
+@item t
+The presentation timestamp of the input frame, expressed as a number of
+seconds. Only available with @code{eval=frame}.
+
+@item pos
+The position (byte offset) of the frame in the input stream, or NaN if
+this information is unavailable and/or meaningless (for example in case of synthetic video).
+Only available with @code{eval=frame}.
+Deprecated, do not use.
 @end table
 
-@subsection Commands
+@subsection scale2ref_npp
 
-This filter supports the following commands:
+Use the NVIDIA Performance Primitives (libnpp) to scale (resize) the input
+video, based on a reference video.
+
+See the @ref{scale_npp} filter for available options; scale2ref_npp supports the same
+options but uses the reference video instead of the main input as basis. scale2ref_npp
+also supports the following additional constants for the @option{w} and
+@option{h} options:
+
+@table @var
+@item main_w
+@item main_h
+The main input video's width and height
+
+@item main_a
+The same as @var{main_w} / @var{main_h}
+
+@item main_sar
+The main input video's sample aspect ratio
+
+@item main_dar, mdar
+The main input video's display aspect ratio. Calculated from
+@code{(main_w / main_h) * main_sar}.
+
+@item main_n
+The (sequential) number of the main input frame, starting from 0.
+Only available with @code{eval=frame}.
+
+@item main_t
+The presentation timestamp of the main input frame, expressed as a number of
+seconds. Only available with @code{eval=frame}.
+
+@item main_pos
+The position (byte offset) of the frame in the main input stream, or NaN if
+this information is unavailable and/or meaningless (for example in case of synthetic video).
+Only available with @code{eval=frame}.
+@end table
+
+@subsubsection Examples
+
+@itemize
+@item
+Scale a subtitle stream (b) to match the main video (a) in size before overlaying.
+@example
+'scale2ref_npp[b][a];[a][b]overlay_cuda'
+@end example
+
+@item
+Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
+@example
+[logo-in][video-in]scale2ref_npp=w=oh*mdar:h=ih/10[logo-out][video-out]
+@end example
+@end itemize
+
+@subsection sharpen_npp
+Use the NVIDIA Performance Primitives (libnpp) to perform image sharpening with
+border control.
+
+The following additional options are accepted:
 @table @option
-@item width, w
-@item height, h
-Set the output video dimension expression.
-The command accepts the same syntax of the corresponding option.
 
-If the specified expression is not valid, it is kept at its current
-value.
+@item border_type
+Type of sampling to be used at frame borders. One of the following:
+@table @option
+
+@item replicate
+Replicate pixel values.
+
+@end table
 @end table
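+
+@subsubsection Examples
+
+@itemize
+@item
+Apply sharpening with replicated border sampling; a minimal sketch, since
+@option{border_type} is the only option listed above:
+@example
+sharpen_npp=border_type=replicate
+@end example
+@end itemize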
 
-@c man end VIDEO FILTERS
+@subsection transpose_npp
+
+Transpose rows with columns in the input video and optionally flip it.
+For more in-depth examples see the @ref{transpose} video filter, which shares mostly the same options.
+
+It accepts the following parameters:
+
+@table @option
+
+@item dir
+Specify the transposition direction.
+
+Can assume the following values:
+@table @samp
+@item cclock_flip
+Rotate by 90 degrees counterclockwise and vertically flip. (default)
+
+@item clock
+Rotate by 90 degrees clockwise.
+
+@item cclock
+Rotate by 90 degrees counterclockwise.
+
+@item clock_flip
+Rotate by 90 degrees clockwise and vertically flip.
+@end table
+
+@item passthrough
+Do not apply the transposition if the input geometry matches the one
+specified by the value. It accepts the following values:
+@table @samp
+@item none
+Always apply transposition. (default)
+@item portrait
+Preserve portrait geometry (when @var{height} >= @var{width}).
+@item landscape
+Preserve landscape geometry (when @var{width} >= @var{height}).
+@end table
+
+@end table
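+
+@subsubsection Examples
+
+@itemize
+@item
+Rotate CUDA frames by 90 degrees clockwise, a sketch mirroring the usage of
+the @ref{transpose} filter:
+@example
+transpose_npp=dir=clock
+@end example
+@end itemize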
+
+@c man end CUDA Video Filters
 
 @chapter OpenCL Video Filters
 @c man begin OPENCL VIDEO FILTERS
