Re: [FFmpeg-devel] [PATCH 2/2] configure: instruct MSVC 2015 to properly process UTF-8 string literals
On 4 February 2017 at 21:23, Hendrik Leppkes wrote: > On Sat, Feb 4, 2017 at 10:29 AM, Carl Eugen Hoyos > wrote: > > 2017-02-04 10:25 GMT+01:00 Hendrik Leppkes : > > > >> Another MSVC workaround is an undocumented compiler pragma, which > >> works in 2013, but not in 2012 (#pragma > >> execution_character_set("utf-8")), but adding pragmas to files is also > >> not something you can just easily hide away without it becoming ugly. > > > > Is there a real disadvantage adding the pragma after your patch > > gets applied? > > > > Adding pragmas is ugly because you can't hide them away, but it's > independent of this patch either way, since it targets a different > compiler version. > > Applied this patch in the meantime, as well as 1/2 from the set. > Well, given that the pragma doesn't work in 2012 and we are still claiming support for those older MSVC versions, it wouldn't really be a suitable fix anyway. As far as I can see, the only way to cover all currently supported MSVC versions would be to convert the UTF-8 chars directly to binary. Which is a bit ugly, granted, but I'm not sure there is any way around it. MSVC appears to output a warning about characters that don't fit in the current code page, and at the moment only ccaption_dec.c is mentioned, so it appears that luckily only one file needs to be changed. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
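For illustration, a rough sketch of the two workarounds discussed above; the pragma is the undocumented MSVC feature mentioned by Hendrik, and the byte values are simply the standard UTF-8 encoding of "é", not code taken from ccaption_dec.c:

/* Workaround 1: undocumented MSVC pragma (reported to work in VS2013,
 * but not in VS2012) asking for string literals to be emitted as UTF-8. */
#pragma execution_character_set("utf-8")
static const char *accented = "é";

/* Workaround 2: spell out the UTF-8 code units as byte escapes, which is
 * independent of the compiler's execution character set; 0xC3 0xA9 is "é". */
static const char *accented_escaped = "\xc3\xa9";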
Re: [FFmpeg-devel] [PATCH] avfilter/af_pan: fix null pointer dereference on empty token
Le septidi 17 pluviôse, an CCXXV, Marton Balint a écrit : > Fixes Coverity CID 1396254. > > Signed-off-by: Marton Balint > --- > libavfilter/af_pan.c | 5 + > 1 file changed, 5 insertions(+) > > diff --git a/libavfilter/af_pan.c b/libavfilter/af_pan.c > index 94f1587..00eef2b 100644 > --- a/libavfilter/af_pan.c > +++ b/libavfilter/af_pan.c > @@ -115,6 +115,11 @@ static av_cold int init(AVFilterContext *ctx) > if (!args) > return AVERROR(ENOMEM); > arg = av_strtok(args, "|", &tokenizer); > +if (!arg) { > +av_log(ctx, AV_LOG_ERROR, "Cannot tokenize argument\n"); > +ret = AVERROR(EINVAL); > +goto fail; > +} Thanks for catching this. The fix seems correct. The error message, on the other hand, is not good: it is meant for users but does not tell them anything. If I read the code correctly, this can only be triggered if the argument to pan contains only the delimiter character. Something like "channel layout not specified" would be more useful. > ret = ff_parse_channel_layout(&pan->out_channel_layout, >&pan->nb_output_channels, arg, ctx); > if (ret < 0) Regards, -- Nicolas George signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
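As a hypothetical illustration of the case Nicolas describes, an argument consisting only of the delimiter should hit this path, since av_strtok() then returns NULL for the first token:

ffmpeg -i input.wav -af pan="|" output.wav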
[FFmpeg-devel] How to set avcodec_open2 to set ((MpegEncContext *)(avctx->priv_data))->fixed_qscale = 1?
Hi, I'm trying to push a stream to a server with ffmpeg. I debugged the ffmpeg code and found that when avcodec_open2 returned, ((MpegEncContext *)(avctx->priv_data))->fixed_qscale was 1. That is to say, when the video stream was handled, a certain offset of the avctx->priv_data pointer (the fixed_qscale field) was set to 1. I tried to find which parameter would lead to this result, but failed. If fixed_qscale was not set to 1, the program would not run into the 'Error evaluating rc_eq ""' problem. Can someone give some advice? Thanks for the help. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder
On Fri, Feb 03, 2017 at 08:21:37PM +0100, wm4 wrote: > With your special use-case (special as in does not fit into the API > conventions of libavcodec), you might be better off with creating your > own standalone cinepak decoder. That's not a bad thing; there's plenty > of multimedia software that does not use libavcodec. Part of the reason > is that one library can't make everyone happy and can't fit all > use-cases. To make it a bit more clear, my use case is - various devices and videobuffers - different applications which are not feasible to patch/teach/replace Replacing ffmpeg with something else or modifying the applications unfortunately does not fit there, that's also why get_format() does not help. Fortunately there is a solution for controlling the libraries out-of-band even if it is not exactly elegant (I am more than well aware of the implications of using environment variables). If anything better existed I'd use it. This matter belongs otherwise to system- and deployment administration, off-topic to discuss here, but you can safely take my word. This does not have to be "supported" by ffmpeg but it is useful, not harmful, to have the corresponding code at hand, at least as a reference (disabled as it is). An improvement would be namespacing the environment variables to be used together with ffmpeg libraries, but as long as such use is strongly discouraged there is no possibility to allocate a namespace. Otherwise, you see, ffmpeg fits very well for the needs in my environment, it _does_ make people happy. > Back to being "constructive" - the only way your code could get > accepted is by implementing the get_format callback. There's a Sure. Indeed, activating get_format() can become useful for someone, I should not have omitted it. This is a tiny change. I am now posting an amended and additionally commented patch. Regards, Rune ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH] Cinepak: speed up decoding several-fold, depending on the scenario, by supporting multiple output pixel formats.
Hello, Here comes an amended patch, I think all the relevant points in the discussion have been addressed: - maintainability and code duplication: straightforward code templating to reduce duplication would hardly improve its quality, robustness and maintainability; a proper style improvement is akin to a rewrite of the concerned functions instead of the reuse of the previous well tested code; if to be done, this should be done separately * left as-is (further rewrite, outside the scope of the patch) - there is a sound reason for doing pixel format conversion inside the decoder, both specifically because of the Cinepak design/colorspace and also generally because of the design of VQ codecs * left as-is (the main point of the patch) - there is no framework, nor an apparent reason to explicitly synchronize the color conversion constants with some other place in the codebase, the constants by their nature are not likely to need an adjustment * left as-is (easy to correct if/when a need arises) - use of environment variables to influence the behaviour of the libraries in ffmpeg is strongly discouraged * left disabled, as a reference/comment, being in certain situations (like those which motivated the optimizations) the only feasible solution - get_format() functionality was not activated * activated, it is very reasonable to have it working (even though it is not relevant for the needs which motivated the patch itself, it was substandard to omit it) Also added some comments with rationales. I thank everyone for the feedback and hope this code can find its way into upstream. The code grew but hardly the complexity (it has been split into slightly simpler and less interdependent pieces). The efficiency gain looks worth the growth. Regards, Rune From da5625bd7f71da65c7e3da6604fa997e5dc6 Mon Sep 17 00:00:00 2001 From: Rl Date: Sun, 5 Feb 2017 12:18:27 +0100 Subject: [PATCH] Cinepak: speed up decoding several-fold, depending on the scenario, by supporting multiple output pixel formats. In a simple decoding test on a randomly chosen i686: Decoding to rgb565 is more than 10 times faster than before via the default bicubic swscaler, about 3 times faster than before via the fast bilinear or nearest neighbor swscaler (similar or better numbers for the other new formats) Also decoding to rgb24 is about 3% faster than before decoding to pal8 is about 6% faster than before The added output formats: decoding to rgb32 is about 11% faster than to rgb24 decoding to rgb565 is about 12% faster than to rgb24 (yuv420 is approximated by the Cinepak colorspace) decoding to yuv420p is about 6% faster than to rgb24 pal8 input can be used with any output format, pal8 output is still limited to pal8 input The output format can be chosen at runtime by an option. The format can also be chosen by setting the environment variable CINEPAK_DECODE_PIXFMT_OVERRIDE, if this is enabled at build time. 
--- libavcodec/cinepak.c | 1004 -- 1 file changed, 892 insertions(+), 112 deletions(-) diff --git a/libavcodec/cinepak.c b/libavcodec/cinepak.c index d657e9c0c1..386ce98299 100644 --- a/libavcodec/cinepak.c +++ b/libavcodec/cinepak.c @@ -31,6 +31,8 @@ * * Cinepak colorspace support (c) 2013 Rl, Aetey Global Technologies AB * @author Cinepak colorspace, Rl, Aetey Global Technologies AB + * Extra output formats / optimizations (c) 2017 Rl, Aetey Global Technologies AB + * @author Extra output formats / optimizations, Rl, Aetey Global Technologies AB */ #include @@ -39,23 +41,54 @@ #include "libavutil/common.h" #include "libavutil/intreadwrite.h" +#include "libavutil/opt.h" +/* #include "libavutil/avassert.h" */ #include "avcodec.h" #include "internal.h" +/* rounding to nearest; truncation would be slightly faster + * but it noticeably affects the picture quality; + * unless we become extremely desperate to use every single cycle + * we do not bother implementing a choice -- rl */ +#define PACK_RGB_RGB565(r,g,b) (((av_clip_uint8((r)+4)>>3)<<11)|((av_clip_uint8((g)+2)>>2)<<5)|(av_clip_uint8((b)+4)>>3)) -typedef uint8_t cvid_codebook[12]; +/* + * more "desperate/ultimate" optimization possibilites: + * - possibly (hardly?) spare a cycle or two by not ensuring to stay + * inside the frame at vector decoding (the frame is allocated with + * a margin for us as an extra precaution, we can as well use this) + * - skip filling in opacity when it is not needed by the data consumer, + * in many cases rgb32 is almost as fast as rgb565, with full quality, + * improving its speed can make sense + * + * possible style improvements: + * - rewrite the codebook/vector chunk parsing to isolate + * the flags logic (the variable length coding) from the rest, + * so that it could be templated safely - a tiny piece but + * it is being repeated many ti
Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder
On 05/02/17 10:02, u-9...@aetey.se wrote: > On Fri, Feb 03, 2017 at 08:21:37PM +0100, wm4 wrote: >> With your special use-case (special as in does not fit into the API >> conventions of libavcodec), you might be better off with creating your >> own standalone cinepak decoder. That's not a bad thing; there's plenty >> of multimedia software that does not use libavcodec. Part of the reason >> is that one library can't make everyone happy and can't fit all >> use-cases. > > To make it a bit more clear, my use case is > > - various devices and videobuffers > - different applications which are not feasible to patch/teach/replace > > Replacing ffmpeg with something else or modifying the applications > unfortunately does not fit there, that's also why get_format() does not help. Even if you need to support such a use-case, doing it via the get_format() callback is still the right thing to do. Once you've done that, all normal applications (including ffmpeg.c itself) can use the standard API as it already exists to set the output format. For your decoding-into-framebuffer case the calling application must already be fully aware of the state of the framebuffer (after all, it has to be able to make a suitable AVFrame to pass to get_buffer2() so that you can avoid the extra copy), so adding get_format() support to also communicate the format is not onerous. Then, if you have a proprietary application which cannot be modified because you don't have the source, you could make a shim layer like: static enum AVPixelFormat get_format_by_env_var(pix_fmt_list) { requested_pix_fmt = getenv(SOMETHING); if (requested_pix_fmt in pix_fmt_list) return requested_pix_fmt; else error; } int avcodec_open2(AVCodecContext *avctx) { avctx->get_format = &get_format_by_env_var; return real_avcodec_open2(avctx); } and LD_PRELOAD it into the application to achieve the same result (insofar as that is possible - it seems unlikely that it will be able to use get_buffer() appropriately, so there are probably going to be more redundant copies in the application and you would need to patch it directly to eliminate them). - Mark ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
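To make the suggestion above concrete, a minimal caller-side sketch (not from the original mail) of a get_format() callback; AV_PIX_FMT_RGB565 here is just a stand-in for whatever format the framebuffer actually uses:

#include <libavcodec/avcodec.h>

/* Return the decoder output format matching the framebuffer, if offered. */
static enum AVPixelFormat fb_get_format(AVCodecContext *avctx,
                                        const enum AVPixelFormat *fmts)
{
    const enum AVPixelFormat wanted = AV_PIX_FMT_RGB565; /* assumed fb format */
    int i;
    for (i = 0; fmts[i] != AV_PIX_FMT_NONE; i++)
        if (fmts[i] == wanted)
            return wanted;
    return fmts[0]; /* fall back to the decoder's preferred format */
}

/* ... during decoder setup: dec_ctx->get_format = fb_get_format; ... */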
Re: [FFmpeg-devel] How to set avcodec_open2 to set ((MpegEncContext *)(avctx->priv_data))->fixed_qscale = 1?
On Sun, Feb 05, 2017 at 05:29:33PM +0800, sea wrote: > Hi , > > > I'm trying to push a stream to a server with ffmpeg. I debugged the ffmpeg > code, and found that when avcode_open2 returned, the ((MpegEncContext > *)(avctx->priv_data))->fixed_qscale = 1. That to say, when video stream was > handled, certain offset of avctx->priv_data pointer (just fixed_qscale > field) was set to 1. > I tried to find which parameter would lead to this result, but failed. If > fixed_qscale was not set to 1, program will not run to "Error evaluating > rc_eq """ problem. Can someone give an advice? Have you looked at doc/examples? It explains how to use both encoders and decoders. I don't really understand what you are talking about or what you are trying to do, but I am guessing you are trying to use a decoder or encoder and you use it wrong, which may or may not be the case; you provide too little information. [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB In a rich man's house there is no place to spit but his face. -- Diogenes of Sinope signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
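For what it's worth, a minimal sketch of the usual way to request a fixed quantizer from the mpegvideo-family encoders; as far as I can tell this is the path that ends up setting fixed_qscale, but treat that as an assumption and see doc/examples for a complete setup (codec choice and values below are made up):

AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
AVCodecContext *enc = avcodec_alloc_context3(codec);

enc->width     = 1280;
enc->height    = 720;
enc->time_base = (AVRational){ 1, 25 };
enc->pix_fmt   = AV_PIX_FMT_YUV420P;

/* Constant-quantizer mode instead of rate control (rc_eq etc.). */
enc->flags         |= AV_CODEC_FLAG_QSCALE;
enc->global_quality = FF_QP2LAMBDA * 5;   /* e.g. qscale 5 */

if (avcodec_open2(enc, codec, NULL) < 0) {
    /* handle the error */
}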
Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder
Hello Mark, On Sun, Feb 05, 2017 at 12:12:37PM +, Mark Thompson wrote: > On 05/02/17 10:02, u-9...@aetey.se wrote: > > To make it a bit more clear, my use case is > > > > - various devices and videobuffers > > - different applications which are not feasible to patch/teach/replace > > > > Replacing ffmpeg with something else or modifying the applications > > unfortunately does not fit there, that's also why get_format() does not > > help. > Even if you need to support such a use-case, doing it via the get_format() > callback is still the right thing to do. Once you've done that, all normal > applications (including ffmpeg.c itself) can use the standard API as it > already exists to set the output format. For your decoding-into-framebuffer > case the calling application must already be fully aware of the state of the > framebuffer (after all, it has to be able to make a suitable AVFrame to pass > to get_buffer2() so that you can avoid the extra copy), so adding > get_format() support to also communicate the format is not onerous. Note that it is generally impossible to let the application decide what to choose. Sometimes this may work, but the applications lack the relevant needed knowledge, which is not guessable from "what the layers before and after me report as supported formats in a certain order". > Then, if you have a proprietary application which cannot be modified because > you don't have the source, you could make a shim layer like: Proprietary or not, I can hardly modify them. A shim layer is certainly a possible workaround, but from my perspective it would be "moving from a mildly inelegant to a basically broken and unreliable technology". The problem is the technology of LD_*, an old and really bad design by itself. Compared to a _library_specific_ envvar it is a long way farther from being usable as a general solution. Note that an LD_* variable affects _linking_ (which is very intrusive) for _all_ programs, related or not, in all child processes. Note also that some applications do play tricks with LD_* on their own. Have I said enough? :( > static enum AVPixelFormat get_format_by_env_var(pix_fmt_list) > { > requested_pix_fmt = getenv(SOMETHING); > if (requested_pix_fmt in pix_fmt_list) > return requested_pix_fmt; > else > error; > } Exactly, but in my situation it is much more robust and easier to enable the corresponding code in the decoder (or even add it there on my own in the worst case) than play with binary patching on the fly, which dynamic linking basically is. So instead of forcing any fellow sysadmins in a similar situation to patch, it would be nice to just let them build libavcodec with this slightly non-standard (and pretty safe) behaviour. > and LD_PRELOAD it into the application to achieve the same result (insofar as > that is possible - it seems unlikely that it will be able to use get_buffer() > appropriately, so there are probably going to be more redundant copies in the > application and you would need to patch it directly to eliminate them). You see. In any case, thanks for understanding the original problem. Regards, Rune ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v3] avformat/hlsenc: add hls_flag option to write segments to temporary file until complete
2017-02-05 4:38 GMT+08:00 Bodecs Bela : > > > On 2017.02.04. 20:45, Hendrik Leppkes wrote: > >> On Sat, Feb 4, 2017 at 6:55 PM, Aman Gupta wrote: >> >>> From: Aman Gupta >>> >>> Adds a `-hls_flags +temp_file` which will write segment data to >>> filename.tmp, and then rename to filename when the segment is complete. >>> >>> This patch is similar in spirit to one used in Plex's ffmpeg fork, and >>> allows a transcoding webserver to ensure incomplete segment files are >>> never served up accidentally. >>> >> Wouldn't a segment only show up in the playlist when it's finished? How >> can they be served accidentally? >> > the segment file names are predictable. > In a live streaming environment somebody predicting the segment name may see > the program some seconds earlier than any other viewer by downloading > continuously the unpublished segment (one segment duration > earlier) > if the currently written segment name is constant you may prohibit access to > it in the web server config > > - Hendrik >> ___ >> ffmpeg-devel mailing list >> ffmpeg-devel@ffmpeg.org >> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel >> > > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel > will push after 24 hours ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
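For reference, a hypothetical invocation using the flag added by this patch (input, codecs and paths are made up; hls_time and hls_flags are existing muxer options):

ffmpeg -i input.mp4 -c:v libx264 -c:a aac -f hls -hls_time 6 -hls_flags temp_file /var/www/live/stream.m3u8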
[FFmpeg-devel] [PATCH] omx: Add support for specifying H.264 profile
This adds "-profile[:v] profile_name"-style option IOW. "constrained," "baseline," "main" and "high" will work well on Raspbian Jessie / RasPi3B. The others aren't checked due to unsupported. --- libavcodec/omx.c | 91 1 file changed, 91 insertions(+) diff --git a/libavcodec/omx.c b/libavcodec/omx.c index 16df50e..31a6627 100644 --- a/libavcodec/omx.c +++ b/libavcodec/omx.c @@ -44,6 +44,20 @@ #include "h264.h" #include "internal.h" +enum { +H264_OMX_PROFILE_CONSTRAINED = 0, +H264_OMX_PROFILE_BASELINE, +H264_OMX_PROFILE_MAIN, +H264_OMX_PROFILE_EXTENDED, +H264_OMX_PROFILE_HIGH, +H264_OMX_PROFILE_HIGH10, +H264_OMX_PROFILE_HIGH10INTRA, +H264_OMX_PROFILE_HIGH422, +H264_OMX_PROFILE_HIGH422INTRA, +H264_OMX_PROFILE_HIGH444, +H264_OMX_PROFILE_HIGH444INTRA, +}; + #ifdef OMX_SKIP64BIT static OMX_TICKS to_omx_ticks(int64_t value) { @@ -226,6 +240,7 @@ typedef struct OMXCodecContext { int output_buf_size; int input_zerocopy; +int profile; } OMXCodecContext; static void append_buffer(pthread_mutex_t *mutex, pthread_cond_t *cond, @@ -523,7 +538,71 @@ static av_cold int omx_component_init(AVCodecContext *avctx, const char *role) CHECK(err); avc.nBFrames = 0; avc.nPFrames = avctx->gop_size - 1; +switch (s->profile) { +case H264_OMX_PROFILE_CONSTRAINED: +avctx->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE; +avc.eProfile = OMX_VIDEO_AVCProfileBaseline; +break; +case H264_OMX_PROFILE_BASELINE: +avctx->profile = FF_PROFILE_H264_BASELINE; +avc.eProfile = OMX_VIDEO_AVCProfileBaseline; +avc.bEnableFMO = 1; +avc.bEnableASO = 1; +avc.bEnableRS = 1; +break; +case H264_OMX_PROFILE_MAIN: +avctx->profile = FF_PROFILE_H264_MAIN; +avc.eProfile = OMX_VIDEO_AVCProfileMain; +break; +case H264_OMX_PROFILE_EXTENDED: +avctx->profile = FF_PROFILE_H264_EXTENDED; +avc.eProfile = OMX_VIDEO_AVCProfileExtended; +avc.bEnableFMO = 1; +avc.bEnableASO = 1; +avc.bEnableRS = 1; +break; +case H264_OMX_PROFILE_HIGH: +avctx->profile = FF_PROFILE_H264_HIGH; +avc.eProfile = OMX_VIDEO_AVCProfileHigh; +break; +case H264_OMX_PROFILE_HIGH10: +avctx->profile = FF_PROFILE_H264_HIGH_10; +avc.eProfile = OMX_VIDEO_AVCProfileHigh10; +break; +case H264_OMX_PROFILE_HIGH10INTRA: +avctx->profile = FF_PROFILE_H264_HIGH_10_INTRA; +avc.eProfile = OMX_VIDEO_AVCProfileHigh10; +avc.bconstIpred = 1; +break; +case H264_OMX_PROFILE_HIGH422: +avctx->profile = FF_PROFILE_H264_HIGH_422; +avc.eProfile = OMX_VIDEO_AVCProfileHigh422; +break; +case H264_OMX_PROFILE_HIGH422INTRA: +avctx->profile = FF_PROFILE_H264_HIGH_422_INTRA; +avc.eProfile = OMX_VIDEO_AVCProfileHigh422; +avc.bconstIpred = 1; +break; +case H264_OMX_PROFILE_HIGH444: +avctx->profile = FF_PROFILE_H264_HIGH_444; +avc.eProfile = OMX_VIDEO_AVCProfileHigh444; +break; +case H264_OMX_PROFILE_HIGH444INTRA: +avctx->profile = FF_PROFILE_H264_HIGH_444_INTRA; +avc.eProfile = OMX_VIDEO_AVCProfileHigh444; +avc.bconstIpred = 1; +break; +default: +av_log(avctx, AV_LOG_ERROR, "Unknown profile %d\n", s->profile); +return AVERROR_UNKNOWN; +} err = OMX_SetParameter(s->handle, OMX_IndexParamVideoAvc, &avc); +switch (err) { +case OMX_ErrorUnsupportedSetting: +case OMX_ErrorUnsupportedIndex: +av_log(avctx, AV_LOG_ERROR, "Unsupported profile %d\n", s->profile); +return AVERROR_UNKNOWN; +} CHECK(err); } @@ -884,6 +963,18 @@ static const AVOption options[] = { { "omx_libname", "OpenMAX library name", OFFSET(libname), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VDE }, { "omx_libprefix", "OpenMAX library prefix", OFFSET(libprefix), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VDE }, { "zerocopy", "Try to avoid copying input frames if possible", 
OFFSET(input_zerocopy), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, VE }, +{ "profile", "Set the encoding profile", OFFSET(profile), AV_OPT_TYPE_INT, { .i64 = H264_OMX_PROFILE_HIGH }, H264_OMX_PROFILE_CONSTRAINED, H264_OMX_PROFILE_HIGH444INTRA, VE, "profile" }, +{ "constrained", "", 0, AV_OPT_TYPE_CONST, { .i64 = H264_OMX_PROFILE_CONSTRAINED }, 0, 0, VE, "profile" }, +{ "baseline", "", 0, AV_OPT_TYPE_CONST, { .i64 = H264_OMX_PROFILE_BASELINE }, 0, 0, VE, "profile" }, +
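For reference, a hypothetical invocation using the option this patch proposes (file names and bitrate are made up; h264_omx is the existing OpenMAX encoder):

ffmpeg -i input.mp4 -c:v h264_omx -profile:v high -b:v 2M output.mp4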
Re: [FFmpeg-devel] [PATCH] omx: Add support for specifying H.264 profile
On 04/02/17 20:26, Takayuki 'January June' Suwa wrote: > This adds "-profile[:v] profile_name"-style option IOW. > "constrained," "baseline," "main" and "high" will work well on Raspbian > Jessie / RasPi3B. > The others aren't checked due to unsupported. The idea of this patch looks sensible to me, but I'm not sure about some of the profile details. > --- > libavcodec/omx.c | 91 > > 1 file changed, 91 insertions(+) > > diff --git a/libavcodec/omx.c b/libavcodec/omx.c > index 16df50e..31a6627 100644 > --- a/libavcodec/omx.c > +++ b/libavcodec/omx.c > @@ -44,6 +44,20 @@ > #include "h264.h" > #include "internal.h" > > +enum { > +H264_OMX_PROFILE_CONSTRAINED = 0, > +H264_OMX_PROFILE_BASELINE, > +H264_OMX_PROFILE_MAIN, > +H264_OMX_PROFILE_EXTENDED, > +H264_OMX_PROFILE_HIGH, > +H264_OMX_PROFILE_HIGH10, > +H264_OMX_PROFILE_HIGH10INTRA, > +H264_OMX_PROFILE_HIGH422, > +H264_OMX_PROFILE_HIGH422INTRA, > +H264_OMX_PROFILE_HIGH444, > +H264_OMX_PROFILE_HIGH444INTRA, > +}; You don't need to make a new enum here - FF_PROFILE_* already contains the values you want. > + > #ifdef OMX_SKIP64BIT > static OMX_TICKS to_omx_ticks(int64_t value) > { > @@ -226,6 +240,7 @@ typedef struct OMXCodecContext { > int output_buf_size; > > int input_zerocopy; > +int profile; > } OMXCodecContext; > > static void append_buffer(pthread_mutex_t *mutex, pthread_cond_t *cond, > @@ -523,7 +538,71 @@ static av_cold int omx_component_init(AVCodecContext > *avctx, const char *role) > CHECK(err); > avc.nBFrames = 0; > avc.nPFrames = avctx->gop_size - 1; > +switch (s->profile) { > +case H264_OMX_PROFILE_CONSTRAINED: > +avctx->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE; > +avc.eProfile = OMX_VIDEO_AVCProfileBaseline; How does OpenMAX divine that constraint_set1_flag should be set in this case? > +break; > +case H264_OMX_PROFILE_BASELINE: > +avctx->profile = FF_PROFILE_H264_BASELINE; > +avc.eProfile = OMX_VIDEO_AVCProfileBaseline; > +avc.bEnableFMO = 1; > +avc.bEnableASO = 1; > +avc.bEnableRS = 1; None of these options make any sense without additional configuration. Since noone uses these features anyway, it is probably easier to just not bother with baseline profile at all since it can only cause confusion. (I'm assuming that using baseline profile on your device actually produces a stream conforming to constrained baseline profile such that not setting constraint_set1_flag would just be unhelpful - indeed, maybe it always sets it. If this is wrong (e.g. if ffmpeg is unable to decode the output, since it doesn't support baseline profile) then please do say so, and also share samples of the output.) > +break; > +case H264_OMX_PROFILE_MAIN: > +avctx->profile = FF_PROFILE_H264_MAIN; > +avc.eProfile = OMX_VIDEO_AVCProfileMain; > +break; > +case H264_OMX_PROFILE_EXTENDED: > +avctx->profile = FF_PROFILE_H264_EXTENDED; > +avc.eProfile = OMX_VIDEO_AVCProfileExtended; > +avc.bEnableFMO = 1; > +avc.bEnableASO = 1; > +avc.bEnableRS = 1; As for baseline, I don't think this is useful. > +break; > +case H264_OMX_PROFILE_HIGH: > +avctx->profile = FF_PROFILE_H264_HIGH; > +avc.eProfile = OMX_VIDEO_AVCProfileHigh; > +break; > +case H264_OMX_PROFILE_HIGH10: > +avctx->profile = FF_PROFILE_H264_HIGH_10; > +avc.eProfile = OMX_VIDEO_AVCProfileHigh10; > +break; Isn't this going to want a different pixel format on input? Only YUV420P is currently in the list of usable pixfmts. Similarly for all of the other non-8-bit-4:2:0 profiles. 
> +case H264_OMX_PROFILE_HIGH10INTRA: > +avctx->profile = FF_PROFILE_H264_HIGH_10_INTRA; > +avc.eProfile = OMX_VIDEO_AVCProfileHigh10; > +avc.bconstIpred = 1; The OpenMAX IL 1.1.2 specification says: "bconstIpred is a Boolean value that enables or disables intra-prediction." - I would hope that intra prediction would always be enabled, not doing so would be madness. From the name it sounds more like it might control constrained_intra_pred_flag, though it would be useful to have some clarification before setting it. In any case, it doesn't appear to be relevant to the profile. So, since the eProfile value is the same as for high 10, how will OpenMAX divine that constraint_set3_flag should be set? Similarly for the other intra profiles below. > +break; > +case H264_OMX_PROFILE_HIGH422: > +avctx->profile = FF_PROFILE_H264_HIGH_422; > +avc.eProfile = OMX_VIDEO_AVCProfileHigh422; > +break; > +case H264_OMX_PROF
Re: [FFmpeg-devel] [PATCH 2/2] avformat/hls: Add subtitle support
Sorry for the test failure, I didn't realise there were tests for HLS demuxer because the tests don't have HLS in their names. I am sending new patches which don't break the tests. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v2 1/2] avformat/webvttdec: Add support for HLS WebVTT MPEG timestamp map
WebVTT subtitle files in HLS streams contain a header to synchronize the subtitles with the MPEG TS timestamps of the HLS stream. Add an AVOption to prefer MPEG TS style timestamps generated according to the mapping header, if available. Signed-off-by: Franklin Phillips --- libavformat/webvttdec.c | 45 - 1 file changed, 44 insertions(+), 1 deletion(-) diff --git a/libavformat/webvttdec.c b/libavformat/webvttdec.c index 0aeb8a6..329b25e 100644 --- a/libavformat/webvttdec.c +++ b/libavformat/webvttdec.c @@ -35,6 +35,7 @@ typedef struct { const AVClass *class; FFDemuxSubtitlesQueue q; int kind; +int prefer_hls_mpegts_pts; } WebVTTContext; static int webvtt_probe(AVProbeData *p) @@ -57,12 +58,19 @@ static int64_t read_ts(const char *s) return AV_NOPTS_VALUE; } +static int64_t convert_to_hls_mpegts_ts(int64_t timestamp, int64_t offset) +{ + return (timestamp * 90 + offset) & ((1LL << 33) - 1); +} + static int webvtt_read_header(AVFormatContext *s) { WebVTTContext *webvtt = s->priv_data; AVBPrint header, cue; int res = 0; AVStream *st = avformat_new_stream(s, NULL); +int has_hls_timestamp_map = 0; +int64_t hls_ts_offset; if (!st) return AVERROR(ENOMEM); @@ -93,8 +101,36 @@ static int webvtt_read_header(AVFormatContext *s) /* ignore header chunk */ if (!strncmp(p, "\xEF\xBB\xBFWEBVTT", 9) || !strncmp(p, "WEBVTT", 6) || -!strncmp(p, "NOTE", 4)) +!strncmp(p, "NOTE", 4)) { + +if (webvtt->prefer_hls_mpegts_pts) { +/* +* WebVTT files in HLS streams contain a timestamp offset for +* syncing with the main stream: +* +* X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:90 +* (LOCAL and MPEGTS can be reversed even though HLS spec +* does not say so) +*/ + +char *hls_timestamp_map = strstr(p, "\nX-TIMESTAMP-MAP="); +if (hls_timestamp_map) { +char *native_str = strstr(hls_timestamp_map, "LOCAL:"); +char *mpegts_str = strstr(hls_timestamp_map, "MPEGTS:"); +if (native_str && mpegts_str) { +int64_t native_ts = read_ts(native_str + 6); +int64_t mpegts_ts = strtoll(mpegts_str + 7, NULL, 10); + +if (native_ts != AV_NOPTS_VALUE) { +hls_ts_offset = mpegts_ts - native_ts * 90; +has_hls_timestamp_map = 1; +avpriv_set_pts_info(st, 33, 1, 9); +} +} +} +} continue; +} /* optional cue identifier (can be a number like in SRT or some kind of * chaptering id) */ @@ -125,6 +161,12 @@ static int webvtt_read_header(AVFormatContext *s) if ((ts_end = read_ts(p)) == AV_NOPTS_VALUE) break; +if (has_hls_timestamp_map) { +/* convert and truncate to MPEG TS timestamps */ +ts_start = convert_to_hls_mpegts_ts(ts_start, hls_ts_offset); +ts_end = convert_to_hls_mpegts_ts(ts_end, hls_ts_offset); +} + /* optional cue settings */ p += strcspn(p, "\n\t "); while (*p == '\t' || *p == ' ') @@ -200,6 +242,7 @@ static const AVOption options[] = { { "captions", "WebVTT captions kind", 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_CAPTIONS }, INT_MIN, INT_MAX, 0, "webvtt_kind" }, { "descriptions", "WebVTT descriptions kind", 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_DESCRIPTIONS }, INT_MIN, INT_MAX, 0, "webvtt_kind" }, { "metadata", "WebVTT metadata kind", 0, AV_OPT_TYPE_CONST, { .i64 = AV_DISPOSITION_METADATA }, INT_MIN, INT_MAX, 0, "webvtt_kind" }, +{ "prefer_hls_mpegts_pts", "Use WebVTT embedded HLS MPEGTS timestamps if available.", OFFSET(prefer_hls_mpegts_pts), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 1, AV_OPT_FLAG_SUBTITLE_PARAM | AV_OPT_FLAG_DECODING_PARAM }, { NULL } }; -- 2.1.4 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
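To illustrate the mapping implemented above with made-up numbers: given a header "X-TIMESTAMP-MAP=LOCAL:00:00:00.000,MPEGTS:900000", the offset is 900000 - 0*90 = 900000 ticks, so a cue starting at 00:00:10.000 (10000 ms) becomes 10000*90 + 900000 = 1800000 in the 90 kHz MPEG-TS timebase (i.e. 20 seconds), truncated modulo 2^33 as in convert_to_hls_mpegts_ts().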
[FFmpeg-devel] [PATCH v2 2/2] avformat/hls: Add subtitle support
Each subtile segment is a WebVTT file and needs to be demuxed separately. These segments also contain a header to synchronize their timing with the MPEG TS stream so those timestamps are requested from the WebVTT demuxer through an AVOption. Signed-off-by: Franklin Phillips --- libavformat/hls.c | 195 -- 1 file changed, 161 insertions(+), 34 deletions(-) diff --git a/libavformat/hls.c b/libavformat/hls.c index 3ae3c7c..fe6fec4 100644 --- a/libavformat/hls.c +++ b/libavformat/hls.c @@ -153,6 +153,8 @@ struct playlist { * playlist, if any. */ int n_init_sections; struct segment **init_sections; + +int is_subtitle; /* Indicates if it's a subtitle playlist */ }; /* @@ -203,6 +205,7 @@ typedef struct HLSContext { char *headers; ///< holds HTTP headers set as an AVOption to the HTTP protocol context char *http_proxy;///< holds the address of the HTTP proxy server AVDictionary *avio_opts; +AVDictionary *demuxer_opts; int strict_std_compliance; } HLSContext; @@ -309,6 +312,8 @@ static struct playlist *new_playlist(HLSContext *c, const char *url, ff_make_absolute_url(pls->url, sizeof(pls->url), base, url); pls->seek_timestamp = AV_NOPTS_VALUE; +pls->is_subtitle = 0; + pls->is_id3_timestamped = -1; pls->id3_mpegts_timestamp = AV_NOPTS_VALUE; @@ -482,11 +487,6 @@ static struct rendition *new_rendition(HLSContext *c, struct rendition_info *inf if (type == AVMEDIA_TYPE_SUBTITLE && !info->uri[0]) return NULL; -/* TODO: handle subtitles (each segment has to parsed separately) */ -if (c->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL) -if (type == AVMEDIA_TYPE_SUBTITLE) -return NULL; - rend = av_mallocz(sizeof(struct rendition)); if (!rend) return NULL; @@ -501,9 +501,14 @@ static struct rendition *new_rendition(HLSContext *c, struct rendition_info *inf /* add the playlist if this is an external rendition */ if (info->uri[0]) { rend->playlist = new_playlist(c, info->uri, url_base); -if (rend->playlist) +if (rend->playlist) { dynarray_add(&rend->playlist->renditions, &rend->playlist->n_renditions, rend); +if (type == AVMEDIA_TYPE_SUBTITLE) { +rend->playlist->is_subtitle = 1; +rend->playlist->is_id3_timestamped = 0; +} +} } if (info->assoc_language[0]) { @@ -1237,24 +1242,20 @@ static int64_t default_reload_interval(struct playlist *pls) pls->target_duration; } -static int read_data(void *opaque, uint8_t *buf, int buf_size) +static int reload_playlist(struct playlist *v, HLSContext *c) { -struct playlist *v = opaque; -HLSContext *c = v->parent->priv_data; -int ret, i; -int just_opened = 0; +int ret = 0; -restart: if (!v->needed) return AVERROR_EOF; if (!v->input) { int64_t reload_interval; -struct segment *seg; /* Check that the playlist is still needed before opening a new * segment. 
*/ -if (v->ctx && v->ctx->nb_streams) { +if ((v->ctx && v->ctx->nb_streams) || v->is_subtitle) { +int i; v->needed = 0; for (i = 0; i < v->n_main_streams; i++) { if (v->main_streams[i]->discard < AVDISCARD_ALL) { @@ -1293,7 +1294,7 @@ reload: v->cur_seq_no = v->start_seq_no; } if (v->cur_seq_no >= v->start_seq_no + v->n_segments) { -if (v->finished) +if (v->finished || v->is_subtitle) return AVERROR_EOF; while (av_gettime_relative() - v->last_load_time < reload_interval) { if (ff_check_interrupt(c->interrupt_callback)) @@ -1303,12 +1304,28 @@ reload: /* Enough time has elapsed since the last reload */ goto reload; } +} -seg = current_segment(v); +return ret; +} +static int read_data_continuous(void *opaque, uint8_t *buf, int buf_size) +{ +struct playlist *v = opaque; +HLSContext *c = v->parent->priv_data; +int ret, just_opened = 0; +struct segment *seg; + +restart: +ret = reload_playlist(v, c); +if (ret < 0) +return ret; +seg = current_segment(v); + +if (!v->input) { /* load/update Media Initialization Section, if any */ ret = update_init_section(v, seg); -if (ret) +if (ret < 0) return ret; ret = open_input(c, v, seg); @@ -1318,7 +1335,7 @@ reload: av_log(v->parent, AV_LOG_WARNING, "Failed to open segment of playlist %d\n", v->index); v->cur_seq_no += 1; -goto reload; +goto restart; } just_opened = 1; } @@ -1331,7 +1348,7 @@ re
Re: [FFmpeg-devel] [PATCH] avfilter/af_pan: fix null pointer dereference on empty token
On Sun, 5 Feb 2017, Nicolas George wrote: Le septidi 17 pluviôse, an CCXXV, Marton Balint a écrit : Fixes Coverity CID 1396254. Signed-off-by: Marton Balint --- libavfilter/af_pan.c | 5 + 1 file changed, 5 insertions(+) diff --git a/libavfilter/af_pan.c b/libavfilter/af_pan.c index 94f1587..00eef2b 100644 --- a/libavfilter/af_pan.c +++ b/libavfilter/af_pan.c @@ -115,6 +115,11 @@ static av_cold int init(AVFilterContext *ctx) if (!args) return AVERROR(ENOMEM); arg = av_strtok(args, "|", &tokenizer); +if (!arg) { +av_log(ctx, AV_LOG_ERROR, "Cannot tokenize argument\n"); +ret = AVERROR(EINVAL); +goto fail; +} Thanks for catching this. The fix seems correct. The error message, on the other hand, is not good: it is meant for users but does not tell them anything. If I read the code correctly, this can only be triggered if the argument to pan contains only the delimiter character. Something like "channel layout not specified" would be more useful. Well, Coverity found it, I only fixed it :) Pushed with the proposed error message. Thanks, Marton ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [rfc] tasks for new ffmpeg developers
People new to the project join #ffmpeg-devel and ask us for help on how to start contributing to FFmpeg. Usually none of us have any cohesive answers; mostly the suggestions are to review bugs, patches or code. Nothing really concise or organized. I've reviewed a few bugs and think these may be good starting points for new developers to try. They might also be useful for qualification tasks in the future? I don't know. I'm not sure how to organize these, maybe a trac keyword? For now, I'll just post them here. If you have comments, suggestions, or just want to tell me that my idea is bad, please speak up. These have specs / open source implementations: https://trac.ffmpeg.org/ticket/5285 Missing subtitle format. ISMT (XML-based) https://trac.ffmpeg.org/ticket/4728 Support DICOM format (medical imaging format) https://trac.ffmpeg.org/ticket/1959 Support codec2 (open-source audio codec) https://trac.ffmpeg.org/ticket/2377 Support lossless mp3HD (abandoned? lossless audio codec) ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] Cinepak: speed up decoding several-fold, depending on the scenario, by supporting multiple output pixel formats.
On Sun, 5 Feb 2017 12:24:30 +0100 u-9...@aetey.se wrote: > CinepakContext *s = avctx->priv_data; > +#ifdef CINEPAK_DECODE_PIXFMT_OVERRIDE > +char *out_fmt_override = getenv("CINEPAK_DECODE_PIXFMT_OVERRIDE"); > +#endif No. This has been broadly rejected by multiple developers, including myself. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] Efficiently support several output pixel formats in Cinepak decoder
On Sun, 5 Feb 2017 14:56:26 +0100 u-9...@aetey.se wrote: > Hello Mark, > > On Sun, Feb 05, 2017 at 12:12:37PM +, Mark Thompson wrote: > > On 05/02/17 10:02, u-9...@aetey.se wrote: > > > To make it a bit more clear, my use case is > > > > > > - various devices and videobuffers > > > - different applications which are not feasible to patch/teach/replace > > > > > > Replacing ffmpeg with something else or modifying the applications > > > unfortunately does not fit there, that's also why get_format() does not > > > help. > > > Even if you need to support such a use-case, doing it via the get_format() > > callback is still the right thing to do. Once you've done that, all normal > > applications (including ffmpeg.c itself) can use the standard API as it > > already exists to set the output format. For your > > decoding-into-framebuffer case the calling application must already be > > fully aware of the state of the framebuffer (after all, it has to be able > > to make a suitable AVFrame to pass to get_buffer2() so that you can avoid > > the extra copy), so adding get_format() support to also communicate the > > format is not onerous. > > Note that it is generally impossible to let the application decide > what to choose. Sometimes this may work, but the applications lack > the relevant needed knowledge, which is not guessable from > "what the layers before and after me report as supported formats > in a certain order". > > > Then, if you have a proprietary application which cannot be modified > > because you don't have the source, you could make a shim layer like: > > Proprietary or not, I can hardly modify them. > > A shim layer is certainly a possible workaround, but from my > perspective it would be "moving from a mildly inelegant > to a basically broken and unreliable technology". > > The problem is > > the technology of LD_*, an old and really bad design by itself. > Compared to a _library_specific_ envvar it is a long way father > from being usable as a general solution. > Note that an LD_* variable affects _linking_ (which is very intrusive) > for _all_ programs, related or not, in all child processes. > Note also that some applications do play tricks with LD_* on their own. > Have I said enough? :( > > > > static enum AVPixelFormat get_format_by_env_var(pix_fmt_list) > > { > > requested_pix_fmt = getenv(SOMETHING); > > if (requested_pix_fmt in pix_fmt_list) > > return requested_pix_fmt; > > else > > error; > > } > > Exactly, but in my situation it is much more robust and easier to > enable the corresponding code in the decoder (or even add it there on > my own in the worst case) than play with binary patching on the fly, > which dynamic linking basically is. > > So instead of forcing the possible fellow sysadmins in a similar situation > to patch, it would we nice to just let them build lilbavcodec with > this slightly non-standard (and pretty safe) behaviour. > > > and LD_PRELOAD it into the application to achieve the same result (insofar > > as that is possible - it seems unlikely that it will be able to use > > get_buffer() appropriately, so there are probably going to be more > > redundant copies in the application and you would need to patch it directly > > to eliminate them). > > You see. > > In any case, thanks for understanding the original problem. I think you don't get it. There will be no such environment variable use in the libraries, ever. The discussion about this was over a long time ago. 
___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] Cinepak: speed up decoding several-fold, depending on the scenario, by supporting multiple output pixel formats.
On Sun, Feb 05, 2017 at 12:24:30PM +0100, u-9...@aetey.se wrote: > Hello, > > Here comes an amended patch, I think all the relevant points > in the discussion have been addressed: > > - maintainability and code duplication: > straightforward code templating to reduce duplication > would hardly improve its quality, robustness and maintainability; > a proper style improvement is aking to a rewrite of the concerned > functions instead of the reuse of the previous well tested code; > if to be done, this should be done separately > > * left as-is (further rewrite, outside the scope of the patch) > No, code quality is not outside the scope of your patch. [...] > - use of environment variables to influence the behaviour of the > libraries in ffmpeg is strongly discouraged > > * left disabled, as a reference/comment, being in certain situations > (like those which motivated the optimizations) the only feasible > solution > The use of the environment variable is not tolerable, this is a blocker. [...] > Also added some comments with rationales. > > I thank everyone for the feedback and hope this code can find its way > into upstream. > I'm sorry but there is no way it will reach upstream in this form. Regards, -- Clément B. signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH] matroska: demux BluRay text subtitles
--- libavformat/matroska.c | 1 + 1 file changed, 1 insertion(+) diff --git a/libavformat/matroska.c b/libavformat/matroska.c index c8e5341..fda96fb 100644 --- a/libavformat/matroska.c +++ b/libavformat/matroska.c @@ -74,6 +74,7 @@ const CodecTags ff_mkv_codec_tags[]={ {"S_VOBSUB" , AV_CODEC_ID_DVD_SUBTITLE}, {"S_DVBSUB" , AV_CODEC_ID_DVB_SUBTITLE}, {"S_HDMV/PGS" , AV_CODEC_ID_HDMV_PGS_SUBTITLE}, +{"S_HDMV/TEXTST", AV_CODEC_ID_HDMV_TEXT_SUBTITLE}, {"V_DIRAC" , AV_CODEC_ID_DIRAC}, {"V_MJPEG" , AV_CODEC_ID_MJPEG}, -- 2.9.3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel