Re: [FFmpeg-devel] size=0, but av_malloc(1)
On Tue, Mar 22, 2016 at 11:43:50PM -0700, Chris Cunningham wrote: > Hey Group, > > I'm seeing an interesting pattern [0][1] where we allocate 1 byte in places > where I would expect no allocation to be necessary. Why is this being done? Well, what else would you do? None of the alternatives really work any better, so that is the best solution to the problem of sending empty metadata. If you're wondering why a specific alternative method isn't used, please say which one and we can answer more specifically why it won't work (well). ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] Refund request for FFmpeg at CLT 2016
On date Tuesday 2016-03-22 02:51:33 +0100, Michael Niedermayer encoded: > On Mon, Mar 21, 2016 at 09:49:54PM +0100, Thilo Borgmann wrote: > > On 21.03.16 at 20:42, Michael Niedermayer wrote: > > > On Mon, Mar 21, 2016 at 12:14:40PM +0100, Thilo Borgmann wrote: > > >> Hi, > > >> > > >> last weekend, the Chemnitzer Linux Tage in Germany took place and we had > > >> quite a > > >> good experience and good contact with our end-users. > > >> > > > >> Here is my refund request for the gas spent taking Carl Eugen and myself to the > > >> venue. > > >> > > >> I'll send the invoice to Stefano. It is 64,43€. > > LGTM Approved, I will forward the request to SPI, thanks. -- FFmpeg = Fundamental and Fabulous Maxi Powerful Erudite Gorilla ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] Implement hdcd filtering
On 3/23/16, Benjamin St wrote: > Isn't this needed by AVFILTER_DEFINE_CLASS ? You don't need it if the filter doesn't have options to set. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] size=0, but av_malloc(1)
Chris Cunningham chromium.org> writes: > I'm seeing an interesting pattern [0][1] where we allocate 1 byte in places > where I would expect no allocation to be necessary. Why is this being done? > > [0] https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/mem.c#L136 Iirc, alloc(0) crashes (or crashed) on osx. Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] size=0, but av_malloc(1)
On Wed, Mar 23, 2016 at 12:25:07PM +, Carl Eugen Hoyos wrote: > Chris Cunningham chromium.org> writes: > > > I'm seeing an interesting pattern [0][1] where we allocate 1 byte in places > > where I would expect no allocation to be necessary. Why is this being done? > > > > [0] https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/mem.c#L136 > > Iirc, alloc(0) crashes (or crashed) on osx. > Wasn't it the free(posix_memalign(0)) because the pointer was invalid or something? -- Clément B. signature.asc Description: PGP signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] size=0, but av_malloc(1)
Hi, On Wed, Mar 23, 2016 at 2:43 AM, Chris Cunningham wrote: > Hey Group, > > I'm seeing an interesting pattern [0][1] where we allocate 1 byte in places > where I would expect no allocation to be necessary. Why is this being done? > > [0] https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/mem.c#L136 > [1] > > https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/oggparsevorbis.c#L286 On certain versions of Mac, posix_memalign() with align=32 returns corrupted addresses [1]. This is a workaround for that bug. Ronald [1] http://lists.apple.com/archives/darwin-kernel/2011/Apr/msg00017.html ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] size=0, but av_malloc(1)
Hi, On Wed, Mar 23, 2016 at 8:32 AM, Ronald S. Bultje wrote: > Hi, > > On Wed, Mar 23, 2016 at 2:43 AM, Chris Cunningham < > chcunning...@chromium.org> wrote: > >> Hey Group, >> >> I'm seeing an interesting pattern [0][1] where we allocate 1 byte in >> places >> where I would expect no allocation to be necessary. Why is this being >> done? >> >> [0] https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/mem.c#L136 >> [1] >> >> https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/oggparsevorbis.c#L286 > > > On certain versions of Mac, posix_memalign() with align=32 returns > corrupted addresses [1]. This is a workaround for that bug. > Crap, sorry, I meant align=32 and size=0. Ronald ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
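To make the workaround concrete, the pattern amounts to something like the following (a minimal sketch of the idea, not the actual mem.c code; the real av_malloc() handles more cases and a configurable alignment):

#include <stdlib.h>

#define ALIGN 32

/* Sketch: never hand size == 0 to the aligned allocator.  On the affected
 * OS X versions, posix_memalign(&p, 32, 0) returns a pointer that is unsafe
 * to pass to free(), so a dummy 1-byte allocation is made instead. */
static void *aligned_malloc_nonzero(size_t size)
{
    void *ptr = NULL;

    if (size == 0)
        size = 1;
    if (posix_memalign(&ptr, ALIGN, size))
        return NULL;    /* allocation failure */
    return ptr;         /* non-NULL on success, always safe to free() */
}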
[FFmpeg-devel] [PATCH v2 2/8] lavc: factor apply_param_change() AV_EF_EXPLODE handling
Remove the duplicated code for handling failure of apply_param_change(). --- libavcodec/utils.c | 34 +++--- 1 file changed, 19 insertions(+), 15 deletions(-) diff --git a/libavcodec/utils.c b/libavcodec/utils.c index c625bbc..2436f16 100644 --- a/libavcodec/utils.c +++ b/libavcodec/utils.c @@ -2032,7 +2032,8 @@ static int apply_param_change(AVCodecContext *avctx, AVPacket *avpkt) if (!(avctx->codec->capabilities & AV_CODEC_CAP_PARAM_CHANGE)) { av_log(avctx, AV_LOG_ERROR, "This decoder does not support parameter " "changes, but PARAM_CHANGE side data was sent to it.\n"); -return AVERROR(EINVAL); +ret = AVERROR(EINVAL); +goto fail2; } if (size < 4) @@ -2047,7 +2048,8 @@ static int apply_param_change(AVCodecContext *avctx, AVPacket *avpkt) val = bytestream_get_le32(&data); if (val <= 0 || val > INT_MAX) { av_log(avctx, AV_LOG_ERROR, "Invalid channel count"); -return AVERROR_INVALIDDATA; +ret = AVERROR_INVALIDDATA; +goto fail2; } avctx->channels = val; size -= 4; @@ -2064,7 +2066,8 @@ static int apply_param_change(AVCodecContext *avctx, AVPacket *avpkt) val = bytestream_get_le32(&data); if (val <= 0 || val > INT_MAX) { av_log(avctx, AV_LOG_ERROR, "Invalid sample rate"); -return AVERROR_INVALIDDATA; +ret = AVERROR_INVALIDDATA; +goto fail2; } avctx->sample_rate = val; size -= 4; @@ -2077,13 +2080,20 @@ static int apply_param_change(AVCodecContext *avctx, AVPacket *avpkt) size -= 8; ret = ff_set_dimensions(avctx, avctx->width, avctx->height); if (ret < 0) -return ret; +goto fail2; } return 0; fail: av_log(avctx, AV_LOG_ERROR, "PARAM_CHANGE side data too small.\n"); -return AVERROR_INVALIDDATA; +ret = AVERROR_INVALIDDATA; +fail2: +if (ret < 0) { +av_log(avctx, AV_LOG_ERROR, "Error applying parameter changes.\n"); +if (avctx->err_recognition & AV_EF_EXPLODE) +return ret; +} +return 0; } static int unrefcount_frame(AVCodecInternal *avci, AVFrame *frame) @@ -2158,11 +2168,8 @@ int attribute_align_arg avcodec_decode_video2(AVCodecContext *avctx, AVFrame *pi (avctx->active_thread_type & FF_THREAD_FRAME)) { int did_split = av_packet_split_side_data(&tmp); ret = apply_param_change(avctx, &tmp); -if (ret < 0) { -av_log(avctx, AV_LOG_ERROR, "Error applying parameter changes.\n"); -if (avctx->err_recognition & AV_EF_EXPLODE) -goto fail; -} +if (ret < 0) +goto fail; avctx->internal->pkt = &tmp; if (HAVE_THREADS && avctx->active_thread_type & FF_THREAD_FRAME) @@ -2259,11 +2266,8 @@ int attribute_align_arg avcodec_decode_audio4(AVCodecContext *avctx, AVPacket tmp = *avpkt; int did_split = av_packet_split_side_data(&tmp); ret = apply_param_change(avctx, &tmp); -if (ret < 0) { -av_log(avctx, AV_LOG_ERROR, "Error applying parameter changes.\n"); -if (avctx->err_recognition & AV_EF_EXPLODE) -goto fail; -} +if (ret < 0) +goto fail; avctx->internal->pkt = &tmp; if (HAVE_THREADS && avctx->active_thread_type & FF_THREAD_FRAME) -- 2.8.0.rc3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v2 3/8] lavc: introduce a new decoding/encoding API with decoupled input/output
Until now, the decoding API was restricted to outputting 0 or 1 frames per input packet. It also enforces a somewhat rigid dataflow in general. This new API seeks to relax these restrictions by decoupling input and output. Instead of doing a single call on each decode step, which may consume the packet and may produce output, the new API requires the user to send input first, and then ask for output. For now, there are no codecs supporting this API. The API can work with codecs using the old API, and most code added here it to make them interoperate. The reverse is not possible, although for audio it might. --- libavcodec/avcodec.h | 226 +++ libavcodec/internal.h | 13 +++ libavcodec/utils.c| 285 +- 3 files changed, 522 insertions(+), 2 deletions(-) diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h index 637984b..f654bd2 100644 --- a/libavcodec/avcodec.h +++ b/libavcodec/avcodec.h @@ -73,6 +73,94 @@ */ /** + * @ingroup libavc + * @defgroup lavc_encdec send/receive encoding and decoding API overview + * @{ + * + * The avcodec_send_packet()/avcodec_receive_frame()/avcodec_send_frame()/ + * avcodec_receive_packet() provide a encode/decode API, which decouples input + * and output. + * + * The API is very similar for encoding/decoding and audio/video, and works as + * follows: + * - Setup and open the AVCodecContext as usual. + * - Send valid input: + * - For decoding, call avcodec_send_packet() to give the decoder raw + * compressed data in an AVPacket. + * - For encoding, call avcodec_send_frame() to give the decoder an AVFrame + * containing uncompressed audio or video. + * In both cases, it's recommended that AVPackets and AVFrames are refcounted, + * or libavcodec might have to copy the input data. (libavformat always + * returns refcounted AVPackets, and av_frame_get_buffer() allocates + * refcounted AVFrames.) + * - Receive output in a loop. Periodically call one of the avcodec_receive_*() + * functions and process their output: + * - For decoding, call avcodec_receive_frame(). On success, it will return + * a AVFrame containing uncompressed audio or video data. + * - For encoding, call avcodec_receive_packet(). On success, it will return + * an AVPacket with a compressed frame. + * Repeat this call until it returns AVERROR(EAGAIN) or an error. The + * AVERROR(EAGAIN) return value means that new input data is required to + * return new output. In this case, continue with sending input. For each + * input frame/packet, the codec will typically return 1 output frame/packet, + * but it can also be 0 or more than 1. + * + * At the beginning of decoding or encoding, the codec might accept multiple + * input frames/packets without returning a frame, until its internal buffers + * are filled. This situation is handled transparently if you follow the steps + * outlined above. + * + * End of stream situations. These require "flushing" (aka draining) the codec, + * as the codec might buffer multiple frames or packets internally for + * performance or out of necessity (consider B-frames). + * This is handled as follows: + * - Instead of valid input, send NULL to the avcodec_send_packet() (decoding) + * or avcodec_send_frame() (encoding) functions. This will enter draining + * mode. + * - Call avcodec_receive_frame() (decoding) or avcodec_receive_packet() + * (encoding) in a loop until AVERROR_EOF is returned. The functions will + * not return AVERROR(EAGAIN), unless you forgot to enter draining mode. 
+ * - Before decoding can be resumed again, the codec has to be reset with + * avcodec_flush_buffers(). + * + * Using the API as outlined above is highly recommended. But it's also possible + * to call functions outside of this rigid schema. For example, you can call + * avcodec_send_packet() repeatedly without calling avcodec_receive_frame(). In + * this case, avcodec_send_packet() will succeed until the codec's internal + * buffer has been filled up (which is typically of size 1 per output frame, + * after initial input), and then reject input with AVERROR(EAGAIN). Once it + * starts rejecting input, you have no choice but to read at least some output. + * + * Not all codecs will follow a rigid and predictable dataflow; the only + * guarantee is that an AVERROR(EAGAIN) return value on a send/receive call on + * one end implies that a receive/send call on the other end will succeed. In + * general, no codec will permit unlimited buffering of input or output. + * + * This API replaces the following legacy functions: + * - avcodec_decode_video2() and avcodec_decode_audio4(): + * Use avcodec_send_packet() to feed input to the decoder, then use + * avcodec_receive_frame() to receive decoded frames after each packet. + * Unlike with the old video decoding API, multiple frames might result from + * a packet. For audio, splitting the input packet into fra
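To illustrate the dataflow the documentation above describes, a minimal decode loop against the proposed API would look roughly like this (a sketch only, error handling trimmed; process_frame() is a hypothetical stand-in for whatever the caller does with the output):

#include <libavcodec/avcodec.h>

/* Sketch of the send/receive decode pattern described in the patch:
 * feed one packet (or NULL to start draining), then read out every
 * frame the decoder currently has ready. */
static int decode_packet(AVCodecContext *dec, const AVPacket *pkt,
                         AVFrame *frame, void (*process_frame)(AVFrame *f))
{
    int ret = avcodec_send_packet(dec, pkt);  /* pkt == NULL enters draining mode */
    if (ret < 0)
        return ret;

    while (1) {
        ret = avcodec_receive_frame(dec, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;        /* needs more input, or fully drained */
        if (ret < 0)
            return ret;      /* actual decoding error */
        process_frame(frame);
        av_frame_unref(frame);
    }
}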
[FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
It's not practical to keep this with the new decode API. --- ffmpeg.c | 7 --- ffmpeg.h | 1 - 2 files changed, 8 deletions(-) diff --git a/ffmpeg.c b/ffmpeg.c index 9a14294..bdb0e5e 100644 --- a/ffmpeg.c +++ b/ffmpeg.c @@ -2316,13 +2316,6 @@ static int process_input_packet(InputStream *ist, const AVPacket *pkt, int no_eo ist->pts = ist->next_pts; ist->dts = ist->next_dts; -if (avpkt.size && avpkt.size != pkt->size && -!(ist->dec->capabilities & AV_CODEC_CAP_SUBFRAMES)) { -av_log(NULL, ist->showed_multi_packet_warning ? AV_LOG_VERBOSE : AV_LOG_WARNING, - "Multiple frames in a packet from stream %d\n", pkt->stream_index); -ist->showed_multi_packet_warning = 1; -} - switch (ist->dec_ctx->codec_type) { case AVMEDIA_TYPE_AUDIO: ret = decode_audio(ist, &avpkt, &got_output); diff --git a/ffmpeg.h b/ffmpeg.h index 403b098..377822c 100644 --- a/ffmpeg.h +++ b/ffmpeg.h @@ -283,7 +283,6 @@ typedef struct InputStream { double ts_scale; int saw_first_ts; -int showed_multi_packet_warning; AVDictionary *decoder_opts; AVRational framerate; /* framerate forced with -r */ int top_field_first; -- 2.8.0.rc3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v2 6/8] ffmpeg: use new encode API
--- Not so sure about the frame duplication logic etc. --- ffmpeg.c | 71 +--- 1 file changed, 46 insertions(+), 25 deletions(-) diff --git a/ffmpeg.c b/ffmpeg.c index 1f1de8e..a7b07fb 100644 --- a/ffmpeg.c +++ b/ffmpeg.c @@ -788,7 +788,7 @@ static void do_audio_out(AVFormatContext *s, OutputStream *ost, { AVCodecContext *enc = ost->enc_ctx; AVPacket pkt; -int got_packet = 0; +int ret; av_init_packet(&pkt); pkt.data = NULL; @@ -812,13 +812,19 @@ static void do_audio_out(AVFormatContext *s, OutputStream *ost, enc->time_base.num, enc->time_base.den); } -if (avcodec_encode_audio2(enc, &pkt, frame, &got_packet) < 0) { -av_log(NULL, AV_LOG_FATAL, "Audio encoding failed (avcodec_encode_audio2)\n"); -exit_program(1); -} -update_benchmark("encode_audio %d.%d", ost->file_index, ost->index); +ret = avcodec_send_frame(enc, frame); +if (ret < 0) +goto error; + +while (1) { +ret = avcodec_receive_packet(enc, &pkt); +if (ret == AVERROR(EAGAIN)) +break; +if (ret < 0) +goto error; + +update_benchmark("encode_audio %d.%d", ost->file_index, ost->index); -if (got_packet) { av_packet_rescale_ts(&pkt, enc->time_base, ost->st->time_base); if (debug_ts) { @@ -830,6 +836,11 @@ static void do_audio_out(AVFormatContext *s, OutputStream *ost, write_frame(s, &pkt, ost); } + +return; +error: +av_log(NULL, AV_LOG_FATAL, "Audio encoding failed\n"); +exit_program(1); } static void do_subtitle_out(AVFormatContext *s, @@ -1097,7 +1108,7 @@ static void do_video_out(AVFormatContext *s, } else #endif { -int got_packet, forced_keyframe = 0; +int forced_keyframe = 0; double pts_time; if (enc->flags & (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME) && @@ -1164,14 +1175,18 @@ static void do_video_out(AVFormatContext *s, ost->frames_encoded++; -ret = avcodec_encode_video2(enc, &pkt, in_picture, &got_packet); -update_benchmark("encode_video %d.%d", ost->file_index, ost->index); -if (ret < 0) { -av_log(NULL, AV_LOG_FATAL, "Video encoding failed\n"); -exit_program(1); -} +ret = avcodec_send_frame(enc, in_picture); +if (ret < 0) +goto error; + +while (1) { +ret = avcodec_receive_packet(enc, &pkt); +update_benchmark("encode_video %d.%d", ost->file_index, ost->index); +if (ret == AVERROR(EAGAIN)) +break; +if (ret < 0) +goto error; -if (got_packet) { if (debug_ts) { av_log(NULL, AV_LOG_INFO, "encoder -> type:video " "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n", @@ -1219,6 +1234,11 @@ static void do_video_out(AVFormatContext *s, av_frame_ref(ost->last_frame, next_picture); else av_frame_free(&ost->last_frame); + +return; +error: +av_log(NULL, AV_LOG_FATAL, "Video encoding failed\n"); +exit_program(1); } static double psnr(double d) @@ -1707,35 +1727,36 @@ static void flush_encoders(void) continue; #endif +if (enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO) +continue; + +avcodec_send_frame(enc, NULL); + for (;;) { -int (*encode)(AVCodecContext*, AVPacket*, const AVFrame*, int*) = NULL; -const char *desc; +const char *desc = NULL; switch (enc->codec_type) { case AVMEDIA_TYPE_AUDIO: -encode = avcodec_encode_audio2; desc = "audio"; break; case AVMEDIA_TYPE_VIDEO: -encode = avcodec_encode_video2; desc = "video"; break; default: -stop_encoding = 1; +av_assert0(0); } -if (encode) { +if (1) { AVPacket pkt; int pkt_size; -int got_packet; av_init_packet(&pkt); pkt.data = NULL; pkt.size = 0; update_benchmark(NULL); -ret = encode(enc, &pkt, NULL, &got_packet); +ret = avcodec_receive_packet(enc, &pkt); update_benchmark("flush_%s %d.%d", desc, ost->file_index, ost->index); -if (ret < 0) { +if 
(ret < 0 && ret != AVERROR_EOF) { av_log(NULL, AV_LOG_FATAL, "%s encoding failed: %s\n", desc, av_err2str(ret)); @@ -1744,7 +1765,7 @@ static void flush_encoders(void) if (ost->logfile && enc->stats_out) {
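The encode side used by do_audio_out()/do_video_out() above follows the mirrored pattern; a condensed sketch (output_packet() is a hypothetical callback standing in for write_frame() in ffmpeg.c):

#include <libavcodec/avcodec.h>

/* Sketch: send one frame (or NULL to flush), then drain all packets the
 * encoder has produced. */
static int encode_frame(AVCodecContext *enc, const AVFrame *frame,
                        int (*output_packet)(AVPacket *pkt))
{
    int ret = avcodec_send_frame(enc, frame);   /* frame == NULL starts draining */
    if (ret < 0)
        return ret;

    while (1) {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;

        ret = avcodec_receive_packet(enc, &pkt);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;          /* wants more input, or fully flushed */
        if (ret < 0)
            return ret;
        ret = output_packet(&pkt);              /* expected to unref the packet */
        if (ret < 0)
            return ret;
    }
}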
[FFmpeg-devel] [PATCH v2 0/8] Add decoding/encoding API with decoupled input/output
wm4 (8): lavu: improve documentation of some AVFrame functions lavc: factor apply_param_change() AV_EF_EXPLODE handling lavc: introduce a new decoding/encoding API with decoupled input/output ffmpeg: remove sub-frame warning ffmpeg: use new decode API ffmpeg: use new encode API lavf: use new decode API lavc: add async decoding/encoding API ffmpeg.c | 204 + ffmpeg.h | 1 - libavcodec/avcodec.h | 340 +++- libavcodec/internal.h | 13 ++ libavcodec/utils.c| 349 +++--- libavformat/utils.c | 29 ++--- libavutil/frame.h | 12 ++ tests/ref/fate/cavs | 1 - 8 files changed, 829 insertions(+), 120 deletions(-) -- 2.8.0.rc3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v2 1/8] lavu: improve documentation of some AVFrame functions
--- libavutil/frame.h | 12 1 file changed, 12 insertions(+) diff --git a/libavutil/frame.h b/libavutil/frame.h index 76a8123..2d6299b 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -591,6 +591,10 @@ void av_frame_free(AVFrame **frame); * If src is not reference counted, new buffers are allocated and the data is * copied. * + * @warning: dst MUST have been either unreferenced with av_frame_unref(dst), + * or newly allocated with av_frame_alloc() before calling this + * function, or undefined behavior will occur. + * * @return 0 on success, a negative AVERROR on error */ int av_frame_ref(AVFrame *dst, const AVFrame *src); @@ -611,6 +615,10 @@ void av_frame_unref(AVFrame *frame); /** * Move everything contained in src to dst and reset src. + * + * @warning: dst is not unreferenced, but directly overwritten without reading + * or deallocating its contents. Call av_frame_unref(dst) manually + * before calling this function to ensure that no memory is leaked. */ void av_frame_move_ref(AVFrame *dst, AVFrame *src); @@ -626,6 +634,10 @@ void av_frame_move_ref(AVFrame *dst, AVFrame *src); * necessary, allocate and fill AVFrame.extended_data and AVFrame.extended_buf. * For planar formats, one buffer will be allocated for each plane. * + * @warning: if frame already has been allocated, calling this function will + * leak memory. In addition, undefined behavior can occur in certain + * cases. + * * @param frame frame in which to store the new buffers. * @param align required buffer size alignment * -- 2.8.0.rc3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
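A minimal sketch of the calling pattern these warnings imply (assuming dst was obtained from av_frame_alloc() at some point):

#include <libavutil/frame.h>

/* Re-point dst at the data referenced by src, per the rules documented above:
 * dst must be clean (freshly allocated or unreferenced) before av_frame_ref(). */
static int replace_frame_ref(AVFrame *dst, const AVFrame *src)
{
    av_frame_unref(dst);              /* no-op on a freshly allocated frame */
    return av_frame_ref(dst, src);    /* now safe: no leak, no undefined behavior */
}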
[FFmpeg-devel] [PATCH v2 7/8] lavf: use new decode API
--- libavformat/utils.c | 29 - 1 file changed, 12 insertions(+), 17 deletions(-) diff --git a/libavformat/utils.c b/libavformat/utils.c index 67d4d1b..c7b7969 100644 --- a/libavformat/utils.c +++ b/libavformat/utils.c @@ -2814,27 +2814,22 @@ static int try_decode_frame(AVFormatContext *s, AVStream *st, AVPacket *avpkt, (!st->codec_info_nb_frames && (st->codec->codec->capabilities & AV_CODEC_CAP_CHANNEL_CONF { got_picture = 0; -switch (st->codec->codec_type) { -case AVMEDIA_TYPE_VIDEO: -ret = avcodec_decode_video2(st->codec, frame, -&got_picture, &pkt); -break; -case AVMEDIA_TYPE_AUDIO: -ret = avcodec_decode_audio4(st->codec, frame, &got_picture, &pkt); -break; -case AVMEDIA_TYPE_SUBTITLE: -ret = avcodec_decode_subtitle2(st->codec, &subtitle, - &got_picture, &pkt); -ret = pkt.size; -break; -default: -break; +if (st->codec->codec_type == AVMEDIA_TYPE_VIDEO || +st->codec->codec_type == AVMEDIA_TYPE_AUDIO) { +ret = avcodec_send_packet(st->codec, &pkt); +if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) +break; +if (ret >= 0) +pkt.size = 0; +ret = avcodec_receive_frame(st->codec, frame); +if (ret >= 0) +got_picture = 1; +if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) +ret = 0; } if (ret >= 0) { if (got_picture) st->nb_decoded_frames++; -pkt.data += ret; -pkt.size -= ret; ret = got_picture; } } -- 2.8.0.rc3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v2 8/8] lavc: add async decoding/encoding API
This needs to be explicitly supported by the AVCodec. Async mode can be enabled for AVCodecs without explicit support too, which is treated as if the codec performs all work instantly. --- libavcodec/avcodec.h | 114 ++- libavcodec/utils.c | 30 ++ 2 files changed, 143 insertions(+), 1 deletion(-) diff --git a/libavcodec/avcodec.h b/libavcodec/avcodec.h index f654bd2..437b473 100644 --- a/libavcodec/avcodec.h +++ b/libavcodec/avcodec.h @@ -160,6 +160,39 @@ * @} */ + +/** + * @ingroup libavc + * @defgroup lavc_async send/receive asynchronous mode API overview + * @{ + * + * Asynchronous mode can be enabled by setting AV_CODEC_FLAG2_ASYNC_MODE in + * AVCodecContext.flags2 before opening the decoder. You must set the + * AVCodecContext.async_notification callback as well. + * + * Decoding and encoding work like in synchronous mode, except that: + * - avcodec_send_packet()/avcodec_send_frame()/avcodec_receive_packet()/ + * avcodec_receive_frame() potentially never block, and may return + * AVERROR(EAGAIN) without making any real progress. This breaks the + * guarantee that if e.g. avcodec_send_packet() return AVERROR(EAGAIN), + * that avcodec_receive_frame() will definitely return a frame. + * - If these functions all return AVERROR(EAGAIN), then the codec is doing + * decoding or encoding on a worker thread, and the API user has to wait + * until data is returned. The async_notification callback will be invoked + * by the decoder as soon as there is a change to be expected. + * - To avoid race conditions, the function avcodec_check_async_progress() + * exists. Before going to sleep until async_notification is called, this + * function must be called to determine whether there is actually more + * required input or available output. + * + * Not all codecs support true asynchronous operation. Those which do are + * marked with AV_CODEC_CAP_ASYNC. While other codecs will likely just block + * the caller and do work on the caller's thread, the asynchronous mode API + * will (strictly speaking) still work and fulfill the guarantees given by + * it. + * @} + */ + /** * @defgroup lavc_core Core functions/structures. * @ingroup libavc @@ -910,6 +943,12 @@ typedef struct RcOverride{ * Discard cropping information from SPS. */ #define AV_CODEC_FLAG2_IGNORE_CROP(1 << 16) +/** + * Enable asynchronous encoding/decoding mode. This depends on codec support, + * and will be effectively ignored in most cases. See also AV_CODEC_CAP_ASYNC. + * See asynchronous mode section for details. + */ +#define AV_CODEC_FLAG2_ASYNC_MODE (1 << 17) /** * Show all frames before the first keyframe @@ -1025,6 +1064,17 @@ typedef struct RcOverride{ */ #define AV_CODEC_CAP_VARIABLE_FRAME_SIZE (1 << 16) /** + * This codec has full support for AV_CODEC_FLAG2_ASYNC_MODE. This means it will + * never block the decoder (opening/flushing/closing it might still block, but + * not sending/receiving packets or frames). See asynchronous mode section for + * details. + */ +#define AV_CODEC_CAP_ASYNC (1 << 17) +/** + * This codec requires AV_CODEC_FLAG2_ASYNC_MODE to be set. + */ +#define AV_CODEC_CAP_ASYNC_ONLY (1 << 18) +/** * Codec is intra only. 
*/ #define AV_CODEC_CAP_INTRA_ONLY 0x4000 @@ -1033,7 +1083,6 @@ typedef struct RcOverride{ */ #define AV_CODEC_CAP_LOSSLESS 0x8000 - #if FF_API_WITHOUT_PREFIX /** * Allow decoders to produce frames with data planes that are not aligned @@ -3491,6 +3540,15 @@ typedef struct AVCodecContext { #define FF_SUB_TEXT_FMT_ASS_WITH_TIMINGS 1 #endif +/** + * Notification callback for asynchronous decoding mode. Must be set if + * asynchronous mode is enabled. See asynchronous mode section for details. + * + * The callback can access the avcodec_send_/_receive_* functions, as long + * as the caller guarantees that only one thread accesses the AVCodecContext + * concurrently. Other accesses must be done outside of the callback. + */ +int (*async_notification)(struct AVCodecContext *s); } AVCodecContext; AVRational av_codec_get_pkt_timebase (const AVCodecContext *avctx); @@ -3626,6 +3684,10 @@ typedef struct AVCodec { int (*receive_frame)(AVCodecContext *avctx, AVFrame *frame); int (*receive_packet)(AVCodecContext *avctx, AVPacket *avpkt); /** + * Required for AV_CODEC_CAP_ASYNC. Behaves like avcodec_check_async_progress(). + */ +int (*check_async_progress)(AVCodecContext *avctx); +/** * Flush buffers. * Will be called when seeking */ @@ -4676,6 +4738,56 @@ int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame); */ int avcodec_receive_packet(AVCodecContext *avctx, AVPacket *avpkt); +/** + * Determine whether there is still work to do (sending input or receiving + * output). See asynchronous mode section for an overview. + * + * If t
[FFmpeg-devel] [PATCH v2 5/8] ffmpeg: use new decode API
This is a bit messy, mainly due to timestamp handling. decode_video() relied on the fact that it could set dts on a flush/drain packet. This is not possible with the old API, and won't be. (I think doing this was very questionable with the old API. Flush packets should not contain any information; they just cause a FIFO to be emptied.) This is replaced with checking the best_effort_timestamp for AV_NOPTS_VALUE, and using the suggested DTS in the drain case. The fate-cavs test still fails due to dropping the last frame. This happens because the timestamp of the last frame goes backwards (ffprobe -show_frames shows the same thing). I suspect that this "worked" due to the best effort timestamp logic picking the DTS over the decreasing PTS. Since this logic is in libavcodec (where it probably shouldn't be), this can't be easily fixed. The timestamps of the cavs samples are weird anyway, so I chose not to fix it. Another strange thing is the timestamp handling in the video path of process_input_packet (after the decode_video() call). It looks like the code to increase next_dts and next_pts should be run every time a frame is decoded - but it's needed even if output is skipped. --- ffmpeg.c| 126 +++- tests/ref/fate/cavs | 1 - 2 files changed, 75 insertions(+), 52 deletions(-) diff --git a/ffmpeg.c b/ffmpeg.c index bdb0e5e..1f1de8e 100644 --- a/ffmpeg.c +++ b/ffmpeg.c @@ -1935,6 +1935,33 @@ static void check_decode_result(InputStream *ist, int *got_output, int ret) } } +// This does not quite work like avcodec_decode_audio4/avcodec_decode_video2. +// There is the following difference: if you got a frame, you must call +// it again with pkt=NULL. pkt==NULL is treated differently from pkt.size==0 +// (pkt==NULL means get more output, pkt.size==0 is a flush/drain packet) +static int decode(AVCodecContext *avctx, AVFrame *frame, int *got_frame, AVPacket *pkt) +{ +int ret; + +*got_frame = 0; + +if (pkt) { +ret = avcodec_send_packet(avctx, pkt); +// In particular, we don't expect AVERROR(EAGAIN), because we read all +// decoded frames with avcodec_receive_frame() until done. +if (ret < 0) +return ret == AVERROR_EOF ? 
0 : ret; +} + +ret = avcodec_receive_frame(avctx, frame); +if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) +return ret; +if (ret >= 0) +*got_frame = 1; + +return 0; +} + static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output) { AVFrame *decoded_frame, *f; @@ -1949,7 +1976,7 @@ static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output) decoded_frame = ist->decoded_frame; update_benchmark(NULL); -ret = avcodec_decode_audio4(avctx, decoded_frame, got_output, pkt); +ret = decode(avctx, decoded_frame, got_output, pkt); update_benchmark("decode_audio %d.%d", ist->file_index, ist->st->index); if (ret >= 0 && avctx->sample_rate <= 0) { @@ -2025,14 +2052,13 @@ static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output) } else if (decoded_frame->pkt_pts != AV_NOPTS_VALUE) { decoded_frame->pts = decoded_frame->pkt_pts; decoded_frame_tb = ist->st->time_base; -} else if (pkt->pts != AV_NOPTS_VALUE) { +} else if (pkt && pkt->pts != AV_NOPTS_VALUE) { decoded_frame->pts = pkt->pts; decoded_frame_tb = ist->st->time_base; }else { decoded_frame->pts = ist->dts; decoded_frame_tb = AV_TIME_BASE_Q; } -pkt->pts = AV_NOPTS_VALUE; if (decoded_frame->pts != AV_NOPTS_VALUE) decoded_frame->pts = av_rescale_delta(decoded_frame_tb, decoded_frame->pts, (AVRational){1, avctx->sample_rate}, decoded_frame->nb_samples, &ist->filter_in_rescale_delta_last, @@ -2060,23 +2086,28 @@ static int decode_audio(InputStream *ist, AVPacket *pkt, int *got_output) return err < 0 ? err : ret; } -static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output) +static int decode_video(InputStream *ist, AVPacket *pkt, int *got_output, int eof) { AVFrame *decoded_frame, *f; int i, ret = 0, err = 0, resample_changed; int64_t best_effort_timestamp; +int64_t dts; AVRational *frame_sample_aspect; +AVPacket avpkt; if (!ist->decoded_frame && !(ist->decoded_frame = av_frame_alloc())) return AVERROR(ENOMEM); if (!ist->filter_frame && !(ist->filter_frame = av_frame_alloc())) return AVERROR(ENOMEM); decoded_frame = ist->decoded_frame; -pkt->dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ist->st->time_base); +dts = av_rescale_q(ist->dts, AV_TIME_BASE_Q, ist->st->time_base); +if (pkt) { +avpkt = *pkt; +avpkt.dts = dts; +} update_benchmark(NULL); -ret = avcodec_decode_video2(ist->dec_ctx, -de
Re: [FFmpeg-devel] [PATCH v2 1/8] lavu: improve documentation of some AVFrame functions
wm4 googlemail.com> writes: > + * warning: if frame already has been allocated, calling this function will > + * leak memory. In addition, undefined behavior can occur in certain > + * cases. If this is correct, I believe the following is a slightly better wording: if frame already has been allocated undefined behaviour including memory leaks can occur. Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 3/8] lavc: introduce a new decoding/encoding API with decoupled input/output
wm4 googlemail.com> writes: > Until now, the decoding API was restricted to outputting 0 or 1 frames > per input packet. It also enforces a somewhat rigid dataflow in general. > > This new API seeks to relax these restrictions by decoupling input and > output. Instead of doing a single call on each decode step, which may > consume the packet and may produce output, the new API requires the user > to send input first, and then ask for output. Is the new API supposed to have a performance impact? Or is it just simpler than the existing one? (Is it simpler?) Thank you, Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 3/8] lavc: introduce a new decoding/encoding API with decoupled input/output
On Wed, Mar 23, 2016 at 2:45 PM, Carl Eugen Hoyos wrote: > wm4 googlemail.com> writes: > >> Until now, the decoding API was restricted to outputting 0 or 1 frames >> per input packet. It also enforces a somewhat rigid dataflow in general. >> >> This new API seeks to relax these restrictions by decoupling input and >> output. Instead of doing a single call on each decode step, which may >> consume the packet and may produce output, the new API requires the user >> to send input first, and then ask for output. > > Is the new API supposed to have a performance impact? It can allow decoders to run faster, but it will not make things faster by itself - decoders need to be changed to take advantage of it, but it's a first step. - Hendrik ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH] lavf/segment: support automatic bitstream filtering
Most useful for MPEG-TS. Works by having the underlying muxer configure the bitstream filters, then moving them to our own AVStreams. --- libavformat/segment.c | 43 ++- 1 file changed, 38 insertions(+), 5 deletions(-) diff --git a/libavformat/segment.c b/libavformat/segment.c index 960a438..d912692 100644 --- a/libavformat/segment.c +++ b/libavformat/segment.c @@ -628,13 +628,10 @@ static void seg_free_context(SegmentContext *seg) seg->avf = NULL; } -static int seg_write_header(AVFormatContext *s) +static int seg_init(AVFormatContext *s) { SegmentContext *seg = s->priv_data; -AVFormatContext *oc = NULL; -AVDictionary *options = NULL; int ret; -int i; seg->segment_count = 0; if (!seg->write_header_trailer) @@ -702,6 +699,7 @@ static int seg_write_header(AVFormatContext *s) seg->use_rename = proto && !strcmp(proto, "file"); } } + if (seg->list_type == LIST_TYPE_EXT) av_log(s, AV_LOG_WARNING, "'ext' list type option is deprecated in favor of 'csv'\n"); @@ -726,11 +724,25 @@ static int seg_write_header(AVFormatContext *s) if ((ret = segment_mux_init(s)) < 0) goto fail; -oc = seg->avf; if ((ret = set_segment_filename(s)) < 0) goto fail; +fail: +if (ret < 0) +seg_free_context(seg); + +return ret; +} + +static int seg_write_header(AVFormatContext *s) +{ +SegmentContext *seg = s->priv_data; +AVFormatContext *oc = seg->avf; +AVDictionary *options = NULL; +int ret; +int i; + if (seg->write_header_trailer) { if ((ret = s->io_open(s, &oc->pb, seg->header_filename ? seg->header_filename : oc->filename, @@ -944,6 +956,23 @@ fail: return ret; } +static int seg_check_bitstream(struct AVFormatContext *s, const AVPacket *pkt) +{ +SegmentContext *seg = s->priv_data; +AVFormatContext *oc = seg->avf; +if (oc->oformat->check_bitstream) { +int ret = oc->oformat->check_bitstream(oc, pkt); +if (ret == 1) { +AVStream *st = s->streams[pkt->stream_index]; +AVStream *ost = oc->streams[pkt->stream_index]; +st->internal->bsfc = ost->internal->bsfc; +ost->internal->bsfc = NULL; +} +return ret; +} +return 1; +} + #define OFFSET(x) offsetof(SegmentContext, x) #define E AV_OPT_FLAG_ENCODING_PARAM static const AVOption options[] = { @@ -1001,9 +1030,11 @@ AVOutputFormat ff_segment_muxer = { .long_name = NULL_IF_CONFIG_SMALL("segment"), .priv_data_size = sizeof(SegmentContext), .flags = AVFMT_NOFILE|AVFMT_GLOBALHEADER, +.init = seg_init, .write_header = seg_write_header, .write_packet = seg_write_packet, .write_trailer = seg_write_trailer, +.check_bitstream = seg_check_bitstream, .priv_class = &seg_class, }; @@ -1019,8 +1050,10 @@ AVOutputFormat ff_stream_segment_muxer = { .long_name = NULL_IF_CONFIG_SMALL("streaming segment muxer"), .priv_data_size = sizeof(SegmentContext), .flags = AVFMT_NOFILE, +.init = seg_init, .write_header = seg_write_header, .write_packet = seg_write_packet, .write_trailer = seg_write_trailer, +.check_bitstream = seg_check_bitstream, .priv_class = &sseg_class, }; -- 2.7.3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] size=0, but av_malloc(1)
On Tue, Mar 22, 2016 at 11:43:50PM -0700, Chris Cunningham wrote: > Hey Group, > > I'm seeing an interesting pattern [0][1] where we allocate 1 byte in places > where I would expect no allocation to be necessary. Why is this being done? > > [0] https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/mem.c#L136 > [1] > https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/oggparsevorbis.c#L286 To add another reason why malloc(0) _can_ be a problem: malloc(0) can return NULL or non-NULL, whichever way the libc prefers. This makes reproducing bug reports harder if the developer and the user have differing libcs. Also, error checks become more complex if NULL can be a non-error return value. [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Dictatorship: All citizens are under surveillance, all their steps and actions recorded, for the politicians to enforce control. Democracy: All politicians are under surveillance, all their steps and actions recorded, for the citizens to enforce control. signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
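To make the error-check point concrete, consider a hypothetical helper that copies possibly-empty metadata; with malloc(0) allowed to return either NULL or a unique pointer, the usual "NULL means failure" idiom becomes ambiguous on exactly the libcs that return NULL:

#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: len may legitimately be 0 (empty metadata).
 * On libcs where malloc(0) returns NULL this reports a spurious failure,
 * on others it does not - the reproducibility problem described above.
 * Allocating at least 1 byte, as av_malloc() effectively does, avoids this. */
static char *dup_metadata(const char *src, size_t len)
{
    char *buf = malloc(len);
    if (!buf)
        return NULL;            /* out of memory... or just len == 0? */
    memcpy(buf, src, len);
    return buf;
}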
Re: [FFmpeg-devel] [PATCH] Add new test for libavutil/mastering_display_metadata
If I understand well you want me to do a test that given a multimedia file test if it loads works ok. If so how can I get a simple multimedia file? I see this tests more like a unitary tests. In my opinion, test if loading/storing to AVFrame need to be done in AVFrame. Thanks :) From: ffmpeg-devel on behalf of Michael Niedermayer Sent: Tuesday, March 22, 2016 4:17 PM To: FFmpeg development discussions and patches Subject: Re: [FFmpeg-devel] [PATCH] Add new test for libavutil/mastering_display_metadata On Tue, Mar 22, 2016 at 01:30:11PM +, Petru Rares Sincraian wrote: > > Hi there, > > I added a set of tests for libavutil/mastering_display_metadata module. I > attached the patch in this message. > > > Thanks, > Petru Rares. > libavutil/Makefile|1 > libavutil/mastering_display_metadata.c| 68 > ++ > libavutil/mastering_display_metadata.h|1 > tests/fate/libavutil.mak |4 + > tests/ref/fate/mastering_display_metadata |1 > 5 files changed, 74 insertions(+), 1 deletion(-) > 3125db1eb98ac3ad3393e88613a90af79ae812b1 > 0001-Add-selftest-to-libavutil-mastering_display_metadata.patch > From 1e502305f098c9aef852e19e91ddee831cc5ebaf Mon Sep 17 00:00:00 2001 > From: Petru Rares Sincraian > Date: Tue, 22 Mar 2016 11:39:08 +0100 > Subject: [PATCH] Add selftest to libavutil/mastering_display_metadata > > This commit adds tests for functions of libavutil/mastering_display_metadata.c > --- > libavutil/Makefile| 1 + > libavutil/mastering_display_metadata.c| 68 > +++ > libavutil/mastering_display_metadata.h| 1 + > tests/fate/libavutil.mak | 4 ++ > tests/ref/fate/mastering_display_metadata | 0 > 5 files changed, 74 insertions(+) > create mode 100644 tests/ref/fate/mastering_display_metadata > > diff --git a/libavutil/Makefile b/libavutil/Makefile > index 58df75a..3d89335 100644 > --- a/libavutil/Makefile > +++ b/libavutil/Makefile > @@ -198,6 +198,7 @@ TESTPROGS = adler32 > \ > parseutils \ > pixdesc \ > pixelutils \ > +mastering_display_metadata \ > random_seed \ > rational\ > ripemd \ > diff --git a/libavutil/mastering_display_metadata.c > b/libavutil/mastering_display_metadata.c > index e1683e5..8c264a2 100644 > --- a/libavutil/mastering_display_metadata.c > +++ b/libavutil/mastering_display_metadata.c > @@ -41,3 +41,71 @@ AVMasteringDisplayMetadata > *av_mastering_display_metadata_create_side_data(AVFra > > return (AVMasteringDisplayMetadata *)side_data->data; > } > + > +#ifdef TEST > + > +static int check_alloc(void) > +{ > +int result = 0; > +AVMasteringDisplayMetadata *original = > av_mastering_display_metadata_alloc(); > + > +if (original == NULL) { > +printf("Failed to allocate display metadata\n"); > +result = 1; > +} > + > +if (original) > +av_freep(original); > + > +return result; > +} > + > +static int check_create_side_data(void) > +{ > +int result = 0; > +AVFrame *frame = av_frame_alloc(); > +AVMasteringDisplayMetadata *metadata; > +AVFrameSideData *side_data; > + > +if (frame == NULL) { > +printf("Failed to allocate frame"); > +result = 1; > +goto end; > +} > + > +metadata = av_mastering_display_metadata_create_side_data(frame); > +if (metadata == NULL) { > +printf("Failed to create display metadata frame side data"); > +result = 1; > +goto end; > +} > + > +side_data = av_frame_get_side_data(frame, > AV_FRAME_DATA_MASTERING_DISPLAY_METADATA); > +if (side_data == NULL) { > +printf("Failed to get frame side data"); > +result = 1; > +goto end; > +} i think to test side data handling the code should extract and display it based on a sample multimedia file, like using ffprobe. 
That's also more generic and can work with any side data case; it would also test whether storing and loading side data to an AVFrame works. This test here seems very specific, testing adding one specific type of side data to a struct and retrieving it again. If side data store/load were to be tested, all types should likely be tested, but then that should already be covered by testing some media file. [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB If a bugfix only changes things apparently unrelated to the bug with no further explanati
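Concretely, such a test could boil down to running something along these lines over a small sample and comparing against a reference output (hypothetical sample name; this assumes ffprobe prints the frame side data in question):

ffprobe -show_frames -select_streams v:0 hdr_sample.mkv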
Re: [FFmpeg-devel] [PATCH] Implement hdcd filtering
On Wed, Mar 23, 2016 at 12:39:30PM +0100, Benjamin St wrote: > > > > const? > > > Fixed > > missing new line. > > Fixed > > > > Here and in every other function, { must be on own, separate line like > > in every other filter. > > > Fixed > > Please use FFMIN() > > Fixed > > > +} HDCDContext; > > > + > > > + > > > +static const AVOption hdcd_options[] = { > > > + {NULL} > > > > Please remove this if its not going to be used. > > > Isn't this needed by AVFILTER_DEFINE_CLASS ? > > This will crash if there is >2 channels > > Either limit filter to stereo and mono or allocate this differently. > > > > Also, consider dropping the entire CHANNEL_NUM define, HDCDs will always > > carry a stereo signal, so there's never going to be a need to change > > CHANNEL_NUM. > > > Modified to only accept stereo. > > This is wrong. Either use av_calloc or modify query formats to accepts > > only mono/stereo channel layout. > > Modified to only accept stereo. > > What is hdcd? Could you put it into this long description? > > Should be better know > > Alphabetical order. > > Fixed > > This is getting into #define INCREMENT(x) (x++) territory, could you remove > > it and just use sample *= gain everywhere? > > > Removed it > > No need to specify exactly how many entries the array has when you define > > it, just leave the brackets empty []. It doesn't matter that much, but it > > makes it easier to extend the array later on if you need to. > > Ok, removed it. > > Duplicated ;; > > Removed > > > Thanks for review, Benjamin > Makefile |1 > af_hdcd.c| 1264 > +++ > allfilters.c |1 > 3 files changed, 1266 insertions(+) > 76d7406d2f7732c615981ab9c0689294a37eec72 > 0001-Implement-high-definition-audio-cd-filtering.patch > From 1bc760b84592eff4bb923a23a1415f7d42a4aaf2 Mon Sep 17 00:00:00 2001 > From: Benjamin Steffes > Date: Mon, 21 Mar 2016 23:52:48 +0100 > Subject: [PATCH] Implement high definition audio cd filtering. > > Signed-off-by: Benjamin Steffes > --- > libavfilter/Makefile |1 + > libavfilter/af_hdcd.c| 1264 > ++ > libavfilter/allfilters.c |1 + > 3 files changed, 1266 insertions(+) > create mode 100644 libavfilter/af_hdcd.c > > diff --git a/libavfilter/Makefile b/libavfilter/Makefile > index be4b3c1..2b9bc84 100644 > --- a/libavfilter/Makefile > +++ b/libavfilter/Makefile > @@ -103,6 +103,7 @@ OBJS-$(CONFIG_TREMOLO_FILTER)+= > af_tremolo.o > OBJS-$(CONFIG_VIBRATO_FILTER)+= af_vibrato.o > generate_wave_table.o > OBJS-$(CONFIG_VOLUME_FILTER) += af_volume.o > OBJS-$(CONFIG_VOLUMEDETECT_FILTER) += af_volumedetect.o > +OBJS-$(CONFIG_HDCD_FILTER) += af_hdcd.o > > OBJS-$(CONFIG_AEVALSRC_FILTER) += aeval.o > OBJS-$(CONFIG_ANOISESRC_FILTER) += asrc_anoisesrc.o > diff --git a/libavfilter/af_hdcd.c b/libavfilter/af_hdcd.c > new file mode 100644 > index 000..63ad309 > --- /dev/null > +++ b/libavfilter/af_hdcd.c > @@ -0,0 +1,1264 @@ > +/* > + Copyright (C) 2010, Chris Moeller, > + All rights reserved. > + Optimizations by Gumboot > + Redistribution and use in source and binary forms, with or without > + modification, are permitted provided that the following conditions > + are met: > + 1. Redistributions of source code must retain the above copyright > +notice, this list of conditions and the following disclaimer. > + 2. Redistributions in binary form must reproduce the above copyright > +notice, this list of conditions and the following disclaimer in the > +documentation and/or other materials provided with the distribution. > + 3. 
The names of its contributors may not be used to endorse or promote > +products derived from this software without specific prior written > +permission. > + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS > + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT > + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR > + A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT > OWNER OR > + CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, > + EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, > + PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR > + PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF > + LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING > + NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS > + SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. > +*/ > + > +/* > + Original code reverse engineered from HDCD decoder library by Christopher > Key, > + which was likely reverse engineered from Windows Media Player. Patents, > + trademarks,
Re: [FFmpeg-devel] [PATCH] added support for hardware assist H264 video encoding for the Raspberry Pi
On Tue, Mar 22, 2016 at 06:40:27PM -0700, Amancio Hasty wrote: [...] > +static int vc264_init(AVCodecContext *avctx) { > + > + > + > + OMX_ERRORTYPE r; > + int error; > + > + > + > + VC264Context *vc = avctx->priv_data; > + > + vc->width = avctx->width; > + vc->height = avctx->height; > + vc->bit_rate = avctx->bit_rate; > +#if FF_API_CODED_FRAME > +FF_DISABLE_DEPRECATION_WARNINGS > + > + avctx->coded_frame = av_frame_alloc(); > + avctx->coded_frame->pict_type = AV_PICTURE_TYPE_I; > +FF_ENABLE_DEPRECATION_WARNINGS > +#endif > + > + > + memset(&vc->list, 0, sizeof(vc->list)); > + bcm_host_init(); > + if ((vc->client = ilclient_init()) == NULL) { > + return -3; > + } > + error = OMX_Init(); > + > + if (error != OMX_ErrorNone) { > + ilclient_destroy(vc->client); > + av_log(avctx,AV_LOG_ERROR,"in vc264_init OMX_Init failed "); > + return -4; > +} > + > + // create video_encode > + r = ilclient_create_component(vc->client, &vc->video_encode, (char *) > "video_encode", > + ILCLIENT_DISABLE_ALL_PORTS | > + ILCLIENT_ENABLE_INPUT_BUFFERS | > + ILCLIENT_ENABLE_OUTPUT_BUFFERS); > + > + if (r != 0) { > +av_log(avctx,AV_LOG_ERROR,"ilclient_create_component() for > video_encode failed with %x!", > + r); > + exit(1); a library cannot call exit() also indention looks rather odd and random [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The educated differ from the uneducated as much as the living from the dead. -- Aristotle signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
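For illustration, the failing branch inside vc264_init() would need to report the error through the return value instead, along these lines (a sketch only; AVERROR_EXTERNAL is one reasonable choice for "an external library call failed"):

    if (r != 0) {
        av_log(avctx, AV_LOG_ERROR,
               "ilclient_create_component() for video_encode failed with %x!\n", r);
        /* clean up whatever was initialized earlier (ilclient, OMX) here */
        return AVERROR_EXTERNAL;   /* report failure instead of calling exit(1) */
    }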
[FFmpeg-devel] [PATCH] sws/aarch64: add ff_hscale_8_to_15_neon
From: Clément Bœsch ./ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null - before: t:0.489726 avg:0.489883 max:0.491852 min:0.489482 after: t:0.259438 avg:0.257707 max:0.260125 min:0.255893 --- I don't really like the roundtrip back to the general purpose register before writing, but well... --- libswscale/aarch64/Makefile | 6 +++-- libswscale/aarch64/hscale.S | 60 +++ libswscale/aarch64/swscale.c | 37 ++ libswscale/swscale.c | 2 ++ libswscale/swscale_internal.h | 1 + libswscale/utils.c| 4 ++- 6 files changed, 107 insertions(+), 3 deletions(-) create mode 100644 libswscale/aarch64/hscale.S create mode 100644 libswscale/aarch64/swscale.c diff --git a/libswscale/aarch64/Makefile b/libswscale/aarch64/Makefile index 823806e..51bff08 100644 --- a/libswscale/aarch64/Makefile +++ b/libswscale/aarch64/Makefile @@ -1,3 +1,5 @@ -OBJS+= aarch64/swscale_unscaled.o +OBJS+= aarch64/swscale.o\ + aarch64/swscale_unscaled.o \ -NEON-OBJS += aarch64/yuv2rgb_neon.o +NEON-OBJS += aarch64/hscale.o \ + aarch64/yuv2rgb_neon.o \ diff --git a/libswscale/aarch64/hscale.S b/libswscale/aarch64/hscale.S new file mode 100644 index 000..89c3fce --- /dev/null +++ b/libswscale/aarch64/hscale.S @@ -0,0 +1,60 @@ +/* + * Copyright (c) 2016 Clément Bœsch + * + * This file is part of FFmpeg. + * + * FFmpeg is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * FFmpeg is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with FFmpeg; if not, write to the Free Software + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + +#include "libavutil/aarch64/asm.S" + +function ff_hscale_8_to_15_neon, export=1 +add x10, x4, w6, UXTW #1// filter2 = filter + filterSize*2 (x2 because int16) +1: ldr w8, [x5], #4// filterPos[0] +ldr w9, [x5], #4// filterPos[1] +moviv4.4S, #0 // val sum part 1 (for dst[0]) +moviv5.4S, #0 // val sum part 2 (for dst[1]) +mov w7, w6 // filterSize counter +mov x13, x3 // srcp = src +2: add x11, x13, w8, UXTW // srcp + filterPos[0] +add x12, x13, w9, UXTW // srcp + filterPos[1] +ld1 {v0.8B}, [x11] // srcp[filterPos[0] + {0..7}] +ld1 {v1.8B}, [x12] // srcp[filterPos[1] + {0..7}] +uxtlv0.8H, v0.8B// unpack part 1 to 16-bit +uxtlv1.8H, v1.8B// unpack part 2 to 16-bit +ld1 {v2.8H}, [x4], #16 // load 8x16-bit filter values, part 1 +ld1 {v3.8H}, [x10], #16 // ditto at filter+filterSize for part 2 +smull v6.4S, v0.4H, v2.4H // v6.i32{0..3} = part 1 of: srcp[filterPos[0] + {0..7}] * filter[{0..7}] +smull v8.4S, v1.4H, v3.4H // v8.i32{0..3} = part 1 of: srcp[filterPos[1] + {0..7}] * filter[{0..7}] +smull2 v7.4S, v0.8H, v2.8H // v7.i32{0..3} = part 2 of: srcp[filterPos[0] + {0..7}] * filter[{0..7}] +smull2 v9.4S, v1.8H, v3.8H // v9.i32{0..3} = part 2 of: srcp[filterPos[1] + {0..7}] * filter[{0..7}] +addpv6.4S, v6.4S, v7.4S // horizontal pair adding of the 8x32-bit multiplied values for part 1 into 4x32-bit +addpv8.4S, v8.4S, v9.4S // horizontal pair adding of the 8x32-bit multiplied values for part 2 into 4x32-bit +add v4.4S, v4.4S, v6.4S // update val accumulator for part 1 +add v5.4S, v5.4S, v8.4S // update val accumulator for part 2 +add x13, x13, #8// srcp += 8 +subsw7, w7, #8 // processed 8/filterSize +b.gt2b // inner loop if filterSize not consumed completely +mov x4, x10 // filter = filter2
[FFmpeg-devel] [PATCH 1/2] Refactor libavutil/parseutils.c
All tests were in the main method which produces a long main. Now, each test is in his own method. I think this produces a more clear code and follows more with the main priority of FFmpeg "simplicity and small code size" --- libavutil/parseutils.c | 338 + 1 file changed, 175 insertions(+), 163 deletions(-) diff --git a/libavutil/parseutils.c b/libavutil/parseutils.c index 0097bec..43bd4eb 100644 --- a/libavutil/parseutils.c +++ b/libavutil/parseutils.c @@ -749,180 +749,192 @@ static uint32_t av_get_random_seed_deterministic(void) return randomv = randomv * 1664525 + 1013904223; } -int main(void) +static void test_av_parse_video_rate(void) { -printf("Testing av_parse_video_rate()\n"); -{ -int i; -static const char *const rates[] = { -"-inf", -"inf", -"nan", -"123/0", -"-123 / 0", -"", -"/", -" 123 / 321", -"foo/foo", -"foo/1", -"1/foo", -"0/0", -"/0", -"1/", -"1", -"0", -"-123/123", -"-foo", -"123.23", -".23", -"-.23", -"-0.234", -"-0.001", -" 21332.2324 ", -" -21332.2324 ", -}; - -for (i = 0; i < FF_ARRAY_ELEMS(rates); i++) { -int ret; -AVRational q = { 0, 0 }; -ret = av_parse_video_rate(&q, rates[i]); -printf("'%s' -> %d/%d %s\n", - rates[i], q.num, q.den, ret ? "ERROR" : "OK"); -} +int i; +static const char *const rates[] = { +"-inf", +"inf", +"nan", +"123/0", +"-123 / 0", +"", +"/", +" 123 / 321", +"foo/foo", +"foo/1", +"1/foo", +"0/0", +"/0", +"1/", +"1", +"0", +"-123/123", +"-foo", +"123.23", +".23", +"-.23", +"-0.234", +"-0.001", +" 21332.2324 ", +" -21332.2324 ", +}; + +for (i = 0; i < FF_ARRAY_ELEMS(rates); i++) { +int ret; +AVRational q = { 0, 0 }; +ret = av_parse_video_rate(&q, rates[i]); +printf("'%s' -> %d/%d %s\n", + rates[i], q.num, q.den, ret ? "ERROR" : "OK"); } +} -printf("\nTesting av_parse_color()\n"); -{ -int i; -uint8_t rgba[4]; -static const char *const color_names[] = { -"bikeshed", -"RaNdOm", -"foo", -"red", -"Red ", -"RED", -"Violet", -"Yellow", -"Red", -"0x00", -"0x000", -"0xff00", -"0x3e34ff", -"0x3e34ffaa", -"0xffXXee", -"0xfoobar", -"0x", -"#ff", -"#ffXX00", -"ff", -"ffXX00", -"red@foo", -"random@10", -"0xff@1.0", -"red@", -"red@0xfff", -"red@0xf", -"red@2", -"red@0.1", -"red@-1", -"red@0.5", -"red@1.0", -"red@256", -"red@10foo", -"red@-1.0", -"red@-0.0", -}; - -av_log_set_level(AV_LOG_DEBUG); - -for (i = 0; i < FF_ARRAY_ELEMS(color_names); i++) { -if (av_parse_color(rgba, color_names[i], -1, NULL) >= 0) -printf("%s -> R(%d) G(%d) B(%d) A(%d)\n", - color_names[i], rgba[0], rgba[1], rgba[2], rgba[3]); -else -printf("%s -> error\n", color_names[i]); -} +static void test_av_parse_color(void) +{ +int i; +uint8_t rgba[4]; +static const char *const color_names[] = { +"bikeshed", +"RaNdOm", +"foo", +"red", +"Red ", +"RED", +"Violet", +"Yellow", +"Red", +"0x00", +"0x000", +"0xff00", +"0x3e34ff", +"0x3e34ffaa", +"0xffXXee", +"0xfoobar", +"0x", +"#ff", +"#ffXX00", +"ff", +"ffXX00", +"red@foo", +"random@10", +"0xff@1.0", +"red@", +"red@0xfff", +"red@0xf", +"red@2", +"red@0.1", +"red@-1", +"red@0.5", +"red@1.0", +"red@256", +"red@10foo", +"red@-1.0", +"red@-0.0", +}; + +av_log_set_level(AV_LOG_DEBUG); + +for (i = 0; i < FF_ARRAY_ELEMS(color_names); i++) { +if (av_parse_color(rgba, color_names[i], -1, NULL) >= 0) +printf("%s -> R(%d) G(%d) B(%d) A(%d)\n", + co
Re: [FFmpeg-devel] [PATCH v2 2/8] lavc: factor apply_param_change() AV_EF_EXPLODE handling
On Wed, Mar 23, 2016 at 02:02:09PM +0100, wm4 wrote: > Remove the duplicated code for handling failure of apply_param_change(). > --- > libavcodec/utils.c | 34 +++--- > 1 file changed, 19 insertions(+), 15 deletions(-) still LGTM thx [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The misfortune of the wise is better than the prosperity of the fool. -- Epicurus signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH 2/2] Added more tests to libavutil/parseutils.c
- Added tests for av_find_info_tag(). - Added test for av_get_known_color_name() --- libavutil/parseutils.c| 37 tests/ref/fate/parseutils | 151 ++ 2 files changed, 188 insertions(+) diff --git a/libavutil/parseutils.c b/libavutil/parseutils.c index 43bd4eb..a782ef6 100644 --- a/libavutil/parseutils.c +++ b/libavutil/parseutils.c @@ -922,6 +922,38 @@ static void test_av_parse_time(void) } } +static void test_av_get_known_color_name(void) +{ +int i; +const uint8_t *rgba; +const char *color; + +for (i = 0; i < FF_ARRAY_ELEMS(color_table); ++i) { +color = av_get_known_color_name(i, &rgba); +if (color) { +printf("%s -> R(%d) G(%d) B(%d) A(%d)\n", +color, rgba[0], rgba[1], rgba[2], rgba[3]); +} +else +printf("Color ID: %d not found\n", i); +} +} + +static void test_av_find_info_tag(void) +{ +char args[] = "?tag1=val1&tag2=val2&tag3=val3&tag41=value 41&tag42=random1"; +const char *tags[] = {"tag1", "tag2", "tag3", "tag4", "tag41", "41", "random1"}; +char buff[16]; +int i; + +for (i = 0; i < FF_ARRAY_ELEMS(tags); ++i) { +if (av_find_info_tag(buff, sizeof(buff), tags[i], args)) +printf("%d. %s found: %s\n", i, tags[i], buff); +else +printf("%d. %s not found\n", i, tags[i]); +} +} + int main(void) { printf("Testing av_parse_video_rate()\n"); @@ -936,6 +968,11 @@ int main(void) printf("\nTesting av_parse_time()\n"); test_av_parse_time(); +printf("\nTesting av_get_known_color_name()\n"); +test_av_get_known_color_name(); + +printf("\nTesting av_find_info_tag()\n"); +test_av_find_info_tag(); return 0; } diff --git a/tests/ref/fate/parseutils b/tests/ref/fate/parseutils index 3306229..1aad5ec 100644 --- a/tests/ref/fate/parseutils +++ b/tests/ref/fate/parseutils @@ -83,3 +83,154 @@ now -> 1331972053.20 = 2012-03-17T08:14:13Z 42.1729 -> +42172900 -1729.42 -> -172942 12:34->+75400 + +Testing av_get_known_color_name() +AliceBlue -> R(240) G(248) B(255) A(0) +AntiqueWhite -> R(250) G(235) B(215) A(0) +Aqua -> R(0) G(255) B(255) A(0) +Aquamarine -> R(127) G(255) B(212) A(0) +Azure -> R(240) G(255) B(255) A(0) +Beige -> R(245) G(245) B(220) A(0) +Bisque -> R(255) G(228) B(196) A(0) +Black -> R(0) G(0) B(0) A(0) +BlanchedAlmond -> R(255) G(235) B(205) A(0) +Blue -> R(0) G(0) B(255) A(0) +BlueViolet -> R(138) G(43) B(226) A(0) +Brown -> R(165) G(42) B(42) A(0) +BurlyWood -> R(222) G(184) B(135) A(0) +CadetBlue -> R(95) G(158) B(160) A(0) +Chartreuse -> R(127) G(255) B(0) A(0) +Chocolate -> R(210) G(105) B(30) A(0) +Coral -> R(255) G(127) B(80) A(0) +CornflowerBlue -> R(100) G(149) B(237) A(0) +Cornsilk -> R(255) G(248) B(220) A(0) +Crimson -> R(220) G(20) B(60) A(0) +Cyan -> R(0) G(255) B(255) A(0) +DarkBlue -> R(0) G(0) B(139) A(0) +DarkCyan -> R(0) G(139) B(139) A(0) +DarkGoldenRod -> R(184) G(134) B(11) A(0) +DarkGray -> R(169) G(169) B(169) A(0) +DarkGreen -> R(0) G(100) B(0) A(0) +DarkKhaki -> R(189) G(183) B(107) A(0) +DarkMagenta -> R(139) G(0) B(139) A(0) +DarkOliveGreen -> R(85) G(107) B(47) A(0) +Darkorange -> R(255) G(140) B(0) A(0) +DarkOrchid -> R(153) G(50) B(204) A(0) +DarkRed -> R(139) G(0) B(0) A(0) +DarkSalmon -> R(233) G(150) B(122) A(0) +DarkSeaGreen -> R(143) G(188) B(143) A(0) +DarkSlateBlue -> R(72) G(61) B(139) A(0) +DarkSlateGray -> R(47) G(79) B(79) A(0) +DarkTurquoise -> R(0) G(206) B(209) A(0) +DarkViolet -> R(148) G(0) B(211) A(0) +DeepPink -> R(255) G(20) B(147) A(0) +DeepSkyBlue -> R(0) G(191) B(255) A(0) +DimGray -> R(105) G(105) B(105) A(0) +DodgerBlue -> R(30) G(144) B(255) A(0) +FireBrick -> R(178) G(34) B(34) A(0) +FloralWhite -> R(255) G(250) B(240) A(0) +ForestGreen -> R(34) 
G(139) B(34) A(0) +Fuchsia -> R(255) G(0) B(255) A(0) +Gainsboro -> R(220) G(220) B(220) A(0) +GhostWhite -> R(248) G(248) B(255) A(0) +Gold -> R(255) G(215) B(0) A(0) +GoldenRod -> R(218) G(165) B(32) A(0) +Gray -> R(128) G(128) B(128) A(0) +Green -> R(0) G(128) B(0) A(0) +GreenYellow -> R(173) G(255) B(47) A(0) +HoneyDew -> R(240) G(255) B(240) A(0) +HotPink -> R(255) G(105) B(180) A(0) +IndianRed -> R(205) G(92) B(92) A(0) +Indigo -> R(75) G(0) B(130) A(0) +Ivory -> R(255) G(255) B(240) A(0) +Khaki -> R(240) G(230) B(140) A(0) +Lavender -> R(230) G(230) B(250) A(0) +LavenderBlush -> R(255) G(240) B(245) A(0) +LawnGreen -> R(124) G(252) B(0) A(0) +LemonChiffon -> R(255) G(250) B(205) A(0) +LightBlue -> R(173) G(216) B(230) A(0) +LightCoral -> R(240) G(128) B(128) A(0) +LightCyan -> R(224) G(255) B(255) A(0) +LightGoldenRodYellow -> R(250) G(250) B(210) A(0) +LightGreen -> R(144) G(238) B(144) A(0) +LightGrey -> R(211) G(211) B(211) A(0) +LightPink -> R(255) G(182) B(193) A(0) +LightSalmon -> R(255) G(160) B(122) A(0) +
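For readers unfamiliar with the API the new test exercises: av_find_info_tag() (declared in libavutil/parseutils.h) scans a '?'-prefixed, '&'-separated key=value string and copies the value of the requested tag into the caller's buffer, returning 1 if the tag is found and 0 otherwise. A minimal standalone sketch (the info string, tag name and buffer size below are arbitrary, not taken from the patch):

#include <stdio.h>
#include "libavutil/parseutils.h"

int main(void)
{
    const char *info = "?codec=h264&bitrate=500k";
    char buf[32];

    /* copies the value of "bitrate" into buf and returns 1 on success */
    if (av_find_info_tag(buf, sizeof(buf), "bitrate", info))
        printf("bitrate = %s\n", buf);   /* prints: bitrate = 500k */
    else
        printf("bitrate not found\n");
    return 0;
}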
Re: [FFmpeg-devel] [PATCH] added support for hardware assist H264 video encoding for the Raspberry Pi
Is there a pretty formatter for ffmpeg’s coding style and coding style guideline? Amancio > On Mar 23, 2016, at 8:17 AM, Michael Niedermayer > wrote: > > On Tue, Mar 22, 2016 at 06:40:27PM -0700, Amancio Hasty wrote: > [...] > >> +static int vc264_init(AVCodecContext *avctx) { >> + >> + >> + >> + OMX_ERRORTYPE r; >> + int error; >> + >> + >> + >> + VC264Context *vc = avctx->priv_data; >> + >> + vc->width = avctx->width; >> + vc->height = avctx->height; >> + vc->bit_rate = avctx->bit_rate; >> +#if FF_API_CODED_FRAME >> +FF_DISABLE_DEPRECATION_WARNINGS >> + >> + avctx->coded_frame = av_frame_alloc(); >> + avctx->coded_frame->pict_type = AV_PICTURE_TYPE_I; >> +FF_ENABLE_DEPRECATION_WARNINGS >> +#endif >> + >> + >> + memset(&vc->list, 0, sizeof(vc->list)); >> + bcm_host_init(); >> + if ((vc->client = ilclient_init()) == NULL) { >> + return -3; >> + } >> + error = OMX_Init(); >> + >> + if (error != OMX_ErrorNone) { >> + ilclient_destroy(vc->client); >> + av_log(avctx,AV_LOG_ERROR,"in vc264_init OMX_Init failed "); >> + return -4; >> +} >> + >> + // create video_encode >> + r = ilclient_create_component(vc->client, &vc->video_encode, (char *) >> "video_encode", >> + ILCLIENT_DISABLE_ALL_PORTS | >> + ILCLIENT_ENABLE_INPUT_BUFFERS | >> + ILCLIENT_ENABLE_OUTPUT_BUFFERS); >> + >> + if (r != 0) { >> +av_log(avctx,AV_LOG_ERROR,"ilclient_create_component() for >> video_encode failed with %x!", >> + r); > >> + exit(1); > > a library cannot call exit() > > also indention looks rather odd and random > > [...] > > -- > Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB > > The educated differ from the uneducated as much as the living from the > dead. -- Aristotle ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] added support for hardware assist H264 video encoding for the Raspberry Pi
Hi, On Wed, Mar 23, 2016 at 11:41 AM, Amancio Hasty wrote: > Is there a pretty formatter for ffmpeg’s coding style and > coding style guideline? https://ffmpeg.org/developer.html#Coding-Rules-1 Please don't top-post. Ronald ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH]Addition of MLP encoder
This is the modified diff with few changes in mlp_encode_frame function. -Disha On Tue, Mar 22, 2016 at 1:59 AM, Disha Singh wrote: > Qualification task for project TrueHD encoder. > There are two patches. > One has the changes made to other files to support mlpenc.c and the other > only has mlpenc.c. > Also attached is the diff file of mlpenc.c. > (Modified the file : > https://github.com/ramiropolla/soc/blob/master/mlp/mlpenc.c) > diff --git a/mlpenc.c b/ffmpeg_sources/ffmpeg/libavcodec/mlpenc.c old mode 100644 new mode 100755 index 70cb7d8..7035ad9 --- a/mlpenc.c +++ b/ffmpeg_sources/ffmpeg/libavcodec/mlpenc.c @@ -24,9 +24,15 @@ #include "libavutil/crc.h" #include "libavutil/avstring.h" #include "mlp.h" -#include "dsputil.h" #include "lpc.h" +#include "libavutil/internal.h" +#include "libavutil/intreadwrite.h" +#include "libavutil/channel_layout.h" +#include "internal.h" +#include "mlpdsp.h" +#include "get_bits.h" + #define MAJOR_HEADER_INTERVAL 16 #define MLP_MIN_LPC_ORDER 1 @@ -97,6 +103,7 @@ typedef struct BestOffset { #define NUM_CODEBOOKS 4 typedef struct { +const AVClass *class; AVCodecContext *avctx; int num_substreams; ///< Number of substreams contained within this stream. @@ -187,8 +194,9 @@ typedef struct { DecodingParams *seq_decoding_params; unsigned intmax_codebook_search; + -DSPContext dsp; +MLPDSPContext mlp_dsp; } MLPEncodeContext; static ChannelParams restart_channel_params[MAX_CHANNELS]; @@ -368,10 +376,10 @@ static void copy_matrix_params(MatrixParams *dst, MatrixParams *src) static void copy_restart_frame_params(MLPEncodeContext *ctx, unsigned int substr) { -ChannelParams (*seq_cp)[ctx->avctx->channels] = (ChannelParams (*)[ctx->avctx->channels]) ctx->seq_channel_params; -DecodingParams (*seq_dp)[ctx->num_substreams] = (DecodingParams (*)[ctx->num_substreams]) ctx->seq_decoding_params; unsigned int index; - +ChannelParams **seq_cp = &ctx->seq_channel_params; +DecodingParams **seq_dp = &ctx->seq_decoding_params; + for (index = 0; index < ctx->number_of_subblocks; index++) { DecodingParams *dp = &seq_dp[index][substr]; unsigned int channel; @@ -549,12 +557,12 @@ static av_cold int mlp_encode_init(AVCodecContext *avctx) } switch (avctx->sample_fmt) { -case SAMPLE_FMT_S16: +case AV_SAMPLE_FMT_S16: ctx->coded_sample_fmt[0] = BITS_16; ctx->wordlength = 16; break; /* TODO 20 bits: */ -case SAMPLE_FMT_S32: +case AV_SAMPLE_FMT_S32: ctx->coded_sample_fmt[0] = BITS_24; ctx->wordlength = 24; break; @@ -565,7 +573,7 @@ static av_cold int mlp_encode_init(AVCodecContext *avctx) } ctx->coded_sample_fmt[1] = -1 & 0xf; -avctx->coded_frame = avcodec_alloc_frame(); +avctx->coded_frame = av_frame_alloc(); ctx->dts = -avctx->frame_size; @@ -622,8 +630,8 @@ static av_cold int mlp_encode_init(AVCodecContext *avctx) ctx->channel_arrangement = avctx->channels - 1; ctx->num_substreams = 1; ctx->flags = FLAGS_DVDA; -ctx->channel_occupancy = ff_mlp_ch_info[avctx->channels - 1].channel_occupancy; -ctx->summary_info = ff_mlp_ch_info[avctx->channels - 1].summary_info ; +//ctx->channel_occupancy = ff_mlp_ch_info[avctx->channels - 1].channel_occupancy; +//ctx->summary_info = ff_mlp_ch_info[avctx->channels - 1].summary_info ; size = sizeof(unsigned int) * ctx->max_restart_interval; @@ -680,7 +688,7 @@ static av_cold int mlp_encode_init(AVCodecContext *avctx) clear_channel_params(ctx, restart_channel_params); clear_decoding_params(ctx, restart_decoding_params); -dsputil_init(&ctx->dsp, avctx); +ff_mlpdsp_init(&ctx->mlp_dsp); return 0; } @@ -801,11 +809,12 @@ static unsigned int 
bitcount_decoding_params(MLPEncodeContext *ctx, return bitcount; } + / ** Functions that write to the bitstream *** / -/** Writes a major sync header to the bitstream. */ +/* Writes a major sync header to the bitstream. */ static void write_major_sync(MLPEncodeContext *ctx, uint8_t *buf, int buf_size) { PutBitContext pb; @@ -835,14 +844,14 @@ static void write_major_sync(MLPEncodeContext *ctx, uint8_t *buf, int buf_size) put_bits(&pb, 8, ctx->substream_info ); put_bits(&pb, 5, ctx->fs ); put_bits(&pb, 5, ctx->wordlength ); -put_bits(&pb, 6, ctx->channel_occupancy ); +//put_bits(&pb, 6, ctx->channel_occupancy ); put_bits(&pb, 3, 0); /* ignored */ put_bits(&pb, 10, 0); /* speaker_layout */ put_bits(&pb
[FFmpeg-devel] GSoC again
Good day, my previous attempts at GSoC weren't very successful. I talked with Carl Eugen about GSoC. He advised me to choose a project with clear aims that fits clearly into the scope of the framework. Such a thing would be MPEG DASH demuxing [1], so I want to bring it up as a suggestion. DASH is a streaming format for adaptively choosing between various bitrates and formats. The content is therefore split into segments that can be switched. All relevant data is found in an XML-based MPD file (sample [2]). Currently FFmpeg supports DASH muxing, and in the mov demuxer there was some work on fragments (correct me if I'm wrong). Besides that, FFmpeg can handle HLS, which is a similar m3u-based format. My work in GSoC could be: 1. Implement a generic XML parser that is robust to errors. 2. Implement an MPD demuxer that can handle the parsing and the segments. 3. Implement adaptive switching. A note on the 3rd point: I'm absolutely not sure what amount of work this is (maybe you could comment on it) and what the best place would be to implement it (including whether this whole feature is meaningful). I would say this could be either in the demuxer itself or in ffplay (can a muxer decide such a thing, like bandwidth?). Personal benefits: I hope to learn something about (besides DASH as a format): - the protocol handling and the connection between demuxer and protocol - ffplay - obviously the demuxer functions My qualification task could be: 1. Write a simple demuxer (as a draft!) that can handle the mpd file [2], so that the first stream is recognized. The orientation points I would work with are the already existing hls demuxer, the specification [3] and maybe libdash [4] (a C++ library). The biggest problem with this task is a missing mentor :). Maybe someone is excited about this feature and willing to support me. kind regards, Gerion [1] https://trac.ffmpeg.org/ticket/5269 [2] http://dash.edgesuite.net/dash264/TestCases/1a/netflix/exMPD_BIP_TC1.mpd [3] http://standards.iso.org/ittf/PubliclyAvailableStandards/c065274_ISO_IEC_23009-1_2014.zip [4] https://github.com/bitmovin/libdash ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
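To make point 2 a bit more concrete, here is a rough sketch of walking the Representation elements of an MPD with libxml2 (illustrative only: the proposal above suggests a native XML parser, so libxml2, the file name and the printed fields are assumptions; only the element/attribute names come from the DASH spec, and error handling is trimmed):

#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>
#include <libxml/tree.h>

/* Recursively print the bandwidth attribute of every <Representation>. */
static void list_representations(xmlNodePtr node)
{
    for (; node; node = node->next) {
        if (node->type == XML_ELEMENT_NODE &&
            !strcmp((const char *)node->name, "Representation")) {
            xmlChar *bw = xmlGetProp(node, (const xmlChar *)"bandwidth");
            printf("Representation, bandwidth=%s\n", bw ? (char *)bw : "?");
            xmlFree(bw);
        }
        list_representations(node->children);
    }
}

int main(void)
{
    /* build with: cc sketch.c $(xml2-config --cflags --libs) */
    xmlDocPtr doc = xmlReadFile("exMPD_BIP_TC1.mpd", NULL, 0);
    if (!doc)
        return 1;
    list_representations(xmlDocGetRootElement(doc));
    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}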
Re: [FFmpeg-devel] [PATCH] Implement hdcd filtering
Benjamin St gmail.com> writes: > This patch applies filtering/decoding for hdcds (see ticket #4441). > The filter is heavily based on > https://github.com/kode54/foo_hdcd/. > (Is this ok? It seems to me as if the simplest solution is not to change the license but to add the license to our used licenses. > Copyright?) If you added something important, you should of course add your copyright. Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > It's not practical to keep this with the new decode API. > --- > ffmpeg.c | 7 --- > ffmpeg.h | 1 - > 2 files changed, 8 deletions(-) its not practical in ffmpeg.c but libavcodec should be able to easily check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES doesnt decode "subframes" Can you move this check into libavcodec ? i think otherwise nothing would be checking for missing AV_CODEC_CAP_SUBFRAMES anymore [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Good people do not need laws to tell them to act responsibly, while bad people will find a way around the laws. -- Plato signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, 23 Mar 2016 17:51:11 +0100 Michael Niedermayer wrote: > On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > > It's not practical to keep this with the new decode API. > > --- > > ffmpeg.c | 7 --- > > ffmpeg.h | 1 - > > 2 files changed, 8 deletions(-) > > its not practical in ffmpeg.c but libavcodec should be able to easily > check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES > doesnt decode "subframes" > Can you move this check into libavcodec ? > i think otherwise nothing would be checking for missing > AV_CODEC_CAP_SUBFRAMES anymore > What's the point of this check? ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] lavc/mediacodec: add hwaccel support
On Tue, Mar 22, 2016 at 10:04 AM, Matthieu Bouron wrote: > > > On Fri, Mar 18, 2016 at 5:50 PM, Matthieu Bouron < > matthieu.bou...@gmail.com> wrote: > >> From: Matthieu Bouron >> >> --- >> >> Hello, >> >> The following patch add hwaccel support to the mediacodec (h264) decoder >> by allowing >> the user to render the output frames directly on a surface. >> >> In order to do so the user needs to initialize the hwaccel through the >> use of >> av_mediacodec_alloc_context and av_mediacodec_default_init functions. The >> later >> takes a reference to an android/view/Surface as parameter. >> >> If the hwaccel successfully initialize, the decoder output frames pix fmt >> will be >> AV_PIX_FMT_MEDIACODEC. The following snippet of code demonstrate how to >> render >> the frames on the surface: >> >> AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3]; >> av_mediacodec_release_buffer(buffer, 1); >> >> The last argument of av_mediacodec_release_buffer enable rendering of the >> buffer on the surface (or not if set to 0). >> >> Regarding the internal changes in the mediacodec decoder: >> >> MediaCodec.flush() discards both input and output buffers meaning that if >> MediaCodec.flush() is called all output buffers the user has a reference >> on are >> now invalid (and cannot be used). >> This behaviour does not fit well in the avcodec API. >> >> When the decoder is configured to output software buffers, there is no >> issue as >> the buffers are copied. >> >> Now when the decoder is configured to output to a surface, the user might >> not >> want to render all the frames as fast as the decoder can go and might >> want to >> control *when* the frame are rendered, so we need to make sure that the >> MediaCodec.flush() call is delayed until all the frames the user retains >> has >> been released or rendered. >> >> Delaying the call to MediaCodec.flush() means buffering any inputs that >> come >> the decoder until the user has released/renderer the frame he retains. >> >> This is a limitation of this hwaccel implementation, if the user retains a >> frame (a), then issue a flush command to the decoder, the packets he >> feeds to >> the decoder at that point will be queued in the internal decoder packet >> queue >> (until he releases the frame (a)). This scenario leads to a memory usage >> increase to say the least. >> >> Currently there is no limitation on the size of the internal decoder >> packet >> queue but this is something that can be added easily. Then, if the queue >> is >> full, what would be the behaviour of the decoder ? Can it block ? Or >> should it >> returns something like AVERROR(EAGAIN) ? >> >> About the other internal decoder changes I introduced: >> >> The MediaCodecDecContext is now refcounted (using the lavu/atomic api) >> since >> the (hwaccel) frames can be retained by the user, we need to delay the >> destruction of the codec until the user has released all the frames he >> has a >> reference on. >> The reference counter of the MediaCodecDecContext is incremented each >> time an >> (hwaccel) frame is outputted by the decoder and decremented each time a >> (hwaccel) frame is released. >> >> Also, when the decoder is configured to output to a surface the pts that >> are >> given to the MediaCodec API are now rescaled based on the codec_timebase >> as >> those timestamps values are propagated to the frames rendered on the >> surface >> since Android M. Not sure if it's really useful though. 
>> >> On the performance side: >> >> On a nexus 5, decoding an h264 stream (main profile) 1080p@60fps: >> - software output + rgba conversion goes at 59~60fps >> - surface output + render on a surface goes at 100~110fps >> >> > [...] > > Patch updated with the following differences: > * the public mediacodec api is now always built (not only when > mediacodec is available) (and the build when mediacodec is not available > has been fixed) > * the documentation of av_mediacodec_release_buffer has been improved a > bit > Patch updated with the following differences: MediaCodecBuffer->released type is now a volatile int (instead of a int*) MediaCodecContext->refcount type is now a volatile int (instead of a int*) Matthieu [...] From fdbc9e38816be8ce3af2d4a85383203588f1dd7a Mon Sep 17 00:00:00 2001 From: Matthieu Bouron Date: Fri, 11 Mar 2016 17:21:04 +0100 Subject: [PATCH] lavc: add mediacodec hwaccel support --- configure | 1 + libavcodec/Makefile | 6 +- libavcodec/allcodecs.c | 1 + libavcodec/mediacodec.c | 133 + libavcodec/mediacodec.h | 88 ++ libavcodec/mediacodec_surface.c | 66 +++ libavcodec/mediacodec_surface.h | 31 + libavcodec/mediacodec_wrapper.c | 5 +- libavcodec/mediacodecdec.c | 255 libavcodec/mediacodecdec.h | 17 +++ libavcodec/mediacodecdec_h264.c | 23 libavutil/pixdesc.c
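From the caller's side, the API described above would be used roughly like this (a sketch under the assumption that the patch is applied: "surface" is a JNI reference to an android/view/Surface obtained by the application, teardown and most error handling are omitted):

#include "libavcodec/avcodec.h"
#include "libavcodec/mediacodec.h"
#include "libavformat/avformat.h"

static int decode_to_surface(AVFormatContext *fmt, AVCodecContext *avctx,
                             void *surface)
{
    AVMediaCodecContext *mctx = av_mediacodec_alloc_context();
    AVFrame *frame = av_frame_alloc();
    AVPacket pkt;
    int got_frame;

    if (!mctx || !frame || av_mediacodec_default_init(avctx, mctx, surface) < 0)
        return AVERROR(EINVAL);

    while (av_read_frame(fmt, &pkt) >= 0) {
        got_frame = 0;
        avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
        if (got_frame && frame->format == AV_PIX_FMT_MEDIACODEC) {
            AVMediaCodecBuffer *buffer = (AVMediaCodecBuffer *)frame->data[3];
            /* second argument: 1 = render this buffer onto the surface,
             *                  0 = just release it without rendering    */
            av_mediacodec_release_buffer(buffer, 1);
        }
        av_packet_unref(&pkt);
    }

    av_frame_free(&frame);
    return 0;
}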
Re: [FFmpeg-devel] [PATCH v2 5/8] ffmpeg: use new decode API
On Wed, Mar 23, 2016 at 02:02:12PM +0100, wm4 wrote: > This is a bit messy, mainly due to timestamp handling. > > decode_video() relied on the fact that it could set dts on a flush/drain > packet. This is not possible with the old API, and won't be. (I think > doing this was very questionable with the old API. Flush packets should > not contain any information; they just cause a FIFO to be emptied.) This > is replaced with checking the best_effort_timestamp for AV_NOPTS_VALUE, > and using the suggested DTS in the drain case. > > The fate-cavs test still fails due to dropping the last frame. This > happens because the timestamp of the last frame goes backwards > (ffprobe -show_frames shows the same thing). I suspect that this > "worked" due to the best effort timestamp logic picking the DTS > over the decreasing PTS. Since this logic is in libavcodec (where > it probably shouldn't be), this can't be easily fixed. The timestamps > of the cavs samples are weird anyway, so I chose not to fix it. > > Another strange thing is the timestamp handling in the video path of > process_input_packet (after the decode_video() call). It looks like > the code to increase next_dts and next_pts should be run every time > a frame is decoded - but it's needed even if output is skipped. > --- > ffmpeg.c| 126 > +++- > tests/ref/fate/cavs | 1 - > 2 files changed, 75 insertions(+), 52 deletions(-) this does not pass fate make -j12 fate -k THREAD_TYPE=frame THREADS=5 passes before this commit but fails afterwards make: *** [fate-filter-hq2x] Error 1 make: *** [fate-filter-3xbr] Error 1 make: *** [fate-filter-hq4x] Error 1 make: *** [fate-filter-4xbr] Error 1 make: *** [fate-filter-2xbr] Error 1 make: *** [fate-filter-hq3x] Error 1 make: *** [fate-h264-reinit-large_420_8-to-small_420_8] Error 1 make: *** [fate-h264-reinit-small_420_9-to-small_420_8] Error 1 make: *** [fate-h264-reinit-small_420_8-to-large_444_10] Error 1 make: *** [fate-h264-reinit-small_422_9-to-small_420_9] Error 1 make: *** [fate-hevc-paramchange-yuv420p-yuv420p10] Error 1 [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The real ebay dictionary, page 2 "100% positive feedback" - "All either got their money back or didnt complain" "Best seller ever, very honest" - "Seller refunded buyer after failed scam" signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, Mar 23, 2016 at 06:06:37PM +0100, wm4 wrote: > On Wed, 23 Mar 2016 17:51:11 +0100 > Michael Niedermayer wrote: > > > On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > > > It's not practical to keep this with the new decode API. > > > --- > > > ffmpeg.c | 7 --- > > > ffmpeg.h | 1 - > > > 2 files changed, 8 deletions(-) > > > > its not practical in ffmpeg.c but libavcodec should be able to easily > > check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES > > doesnt decode "subframes" > > Can you move this check into libavcodec ? > > i think otherwise nothing would be checking for missing > > AV_CODEC_CAP_SUBFRAMES anymore > > > > What's the point of this check? to keep track of / detect the cases that put multiple decodable frames in a packet. Whats the point of that? there where several IIRC one is that when too many frames are put in a packet latency increases, another is that seeking granularity is worse (if its not even one packet for the whole file ...) also when stream copying most muxers do expect 1 frame per packet so that would generate invalid files AV_CODEC_CAP_SUBFRAMES kind of says, "i know it has multiple frames per packet and thats ok or its just nt practical to do anything about" [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Rewriting code that is poorly written but fully understood is good. Rewriting code that one doesnt understand is a sign that one is less smart then the original author, trying to rewrite it will not make it better. signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH] added support for hardware assist H264 video encoding for the Raspberry Pi
> On Mar 23, 2016, at 8:17 AM, Michael Niedermayer > wrote: > > On Tue, Mar 22, 2016 at 06:40:27PM -0700, Amancio Hasty wrote: > [...] > >> +static int vc264_init(AVCodecContext *avctx) { >> + >> + >> + >> + OMX_ERRORTYPE r; >> + int error; >> + >> + >> + >> + VC264Context *vc = avctx->priv_data; >> + >> + vc->width = avctx->width; >> + vc->height = avctx->height; >> + vc->bit_rate = avctx->bit_rate; >> +#if FF_API_CODED_FRAME >> +FF_DISABLE_DEPRECATION_WARNINGS >> + >> + avctx->coded_frame = av_frame_alloc(); >> + avctx->coded_frame->pict_type = AV_PICTURE_TYPE_I; >> +FF_ENABLE_DEPRECATION_WARNINGS >> +#endif >> + >> + >> + memset(&vc->list, 0, sizeof(vc->list)); >> + bcm_host_init(); >> + if ((vc->client = ilclient_init()) == NULL) { >> + return -3; >> + } >> + error = OMX_Init(); >> + >> + if (error != OMX_ErrorNone) { >> + ilclient_destroy(vc->client); >> + av_log(avctx,AV_LOG_ERROR,"in vc264_init OMX_Init failed "); >> + return -4; >> +} >> + >> + // create video_encode >> + r = ilclient_create_component(vc->client, &vc->video_encode, (char *) >> "video_encode", >> + ILCLIENT_DISABLE_ALL_PORTS | >> + ILCLIENT_ENABLE_INPUT_BUFFERS | >> + ILCLIENT_ENABLE_OUTPUT_BUFFERS); >> + >> + if (r != 0) { >> +av_log(avctx,AV_LOG_ERROR,"ilclient_create_component() for >> video_encode failed with %x!", >> + r); > >> + exit(1); > > a library cannot call exit() > > also indention looks rather odd and random > > [...] > > -- > Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB > > The educated differ from the uneducated as much as the living from the > dead. -- Aristotle Fixed the indentation and style by running : indent -i4 -kr -nut vc264.c fixed the error return values and the exit calls. Amancio b-0001-added-support-for-hardware-assist-H264-video-encodin.patch Description: Binary data ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] FFmpeg code Attribution
Hello Again, I took a look at the FFmpeg j2k code. Now, I've worked with OpenJPEG for many years, and I would say at least 20% of the code in FFmpeg was either directly copied from OpenJPEG, or is very similar to OpenJPEG code. I think the people who did the work on the FFmpeg codec would readily admit that they copied a certain amount directly from the other project. So, I think that the OpenJPEG BSD license should appear on those files with copied code from OpenJPEG, to comply with the BSD license. I can list some of the files (there aren't many) if people are interested. Kind Regards, Aaron ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, 23 Mar 2016 18:37:25 +0100 Michael Niedermayer wrote: > On Wed, Mar 23, 2016 at 06:06:37PM +0100, wm4 wrote: > > On Wed, 23 Mar 2016 17:51:11 +0100 > > Michael Niedermayer wrote: > > > > > On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > > > > It's not practical to keep this with the new decode API. > > > > --- > > > > ffmpeg.c | 7 --- > > > > ffmpeg.h | 1 - > > > > 2 files changed, 8 deletions(-) > > > > > > its not practical in ffmpeg.c but libavcodec should be able to easily > > > check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES > > > doesnt decode "subframes" > > > Can you move this check into libavcodec ? > > > i think otherwise nothing would be checking for missing > > > AV_CODEC_CAP_SUBFRAMES anymore > > > > > > > What's the point of this check? > > to keep track of / detect the cases that put multiple decodable frames > in a packet. > > Whats the point of that? > there where several IIRC > one is that when too many frames are put in a packet > latency increases, another is that seeking granularity is worse > (if its not even one packet for the whole file ...) It's true that too many frames in a packet isn't ideal, but that's not what the code checks. It checks if an audio decoder not marked with AV_CODEC_CAP_SUBFRAMES consumes partial packets. That might be useful as debug check in libavcodec or so, or by properly reviewing patches for new decoders. > also when stream copying most muxers do expect 1 frame per packet > so that would generate invalid files > > AV_CODEC_CAP_SUBFRAMES kind of says, "i know it has multiple frames > per packet and thats ok or its just nt practical to do anything about" > > [...] ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] FFmpeg code Attribution
On Wed, Mar 23, 2016 at 12:24 PM, Aaron Boxer wrote: > Hello Again, > > I took a look at the FFmpeg j2k code. Now, I've worked with OpenJPEG for > many years, and I would say at least 20% of the code in FFmpeg was either > directly copied from OpenJPEG, or is very similar to OpenJPEG code. > > I think the people who did the work on the FFmpeg codec would readily admit > that they copied a certain amount directly from the other project. > > So, I think that the OpenJPEG BSD license should appear on those files with > copied code from OpenJPEG, to comply with the BSD license. I can list some > of the files (there aren't many) if people are interested. Go ahead and list the files and sections of code you're concerned about (or the commits that introduced the code). Chances are that's going to come up in the thread anyway, so might as well do it from the get-go. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, Mar 23, 2016 at 08:44:41PM +0100, wm4 wrote: > On Wed, 23 Mar 2016 18:37:25 +0100 > Michael Niedermayer wrote: > > > On Wed, Mar 23, 2016 at 06:06:37PM +0100, wm4 wrote: > > > On Wed, 23 Mar 2016 17:51:11 +0100 > > > Michael Niedermayer wrote: > > > > > > > On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > > > > > It's not practical to keep this with the new decode API. > > > > > --- > > > > > ffmpeg.c | 7 --- > > > > > ffmpeg.h | 1 - > > > > > 2 files changed, 8 deletions(-) > > > > > > > > its not practical in ffmpeg.c but libavcodec should be able to easily > > > > check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES > > > > doesnt decode "subframes" > > > > Can you move this check into libavcodec ? > > > > i think otherwise nothing would be checking for missing > > > > AV_CODEC_CAP_SUBFRAMES anymore > > > > > > > > > > What's the point of this check? > > > > to keep track of / detect the cases that put multiple decodable frames > > in a packet. > > > > Whats the point of that? > > there where several IIRC > > one is that when too many frames are put in a packet > > latency increases, another is that seeking granularity is worse > > (if its not even one packet for the whole file ...) > > It's true that too many frames in a packet isn't ideal, but that's not > what the code checks. > > It checks if an audio decoder not marked with AV_CODEC_CAP_SUBFRAMES > consumes partial packets. yes, but a check that checks "if a decoder not marked with AV_CODEC_CAP_SUBFRAMES consumes partial packets". Is a simple and zero overhead way of detecting some (not all) cases where there are multiple frames in a packet. One cant look at a sequence of bytes that could be any arbitrary format/codec and say "thats more than 1 frame" it requires codec specific code, the decoders already do what is needed for some cases, for the others there is (please correct me if iam wrong which might be) no easy way except maybe running the parser if one exists over it but that would not be zero overhead > That might be useful as debug check in > libavcodec or so, or by properly reviewing patches for new decoders. Iam not sure if i understand what you mean exactly but this somehow sounds like an implication that people would not review patches properly. Thats a serious accusation if thats what was meant. Either there is a problem then it should be pointed to very specifically so it can be solved or such implications shouldnt be made at all. also replacing automated tests by manual tests is not a good idea cpu time is a lot more available than man hours, and even where a human checks things, having the computer double check it even if only partial is a overal win, humans make mistakes and can miss/forget things even if they try their best with the resources available to them. [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB I have often repented speaking, but never of holding my tongue. -- Xenocrates signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, 23 Mar 2016 21:36:35 +0100 Michael Niedermayer wrote: > On Wed, Mar 23, 2016 at 08:44:41PM +0100, wm4 wrote: > > On Wed, 23 Mar 2016 18:37:25 +0100 > > Michael Niedermayer wrote: > > > > > On Wed, Mar 23, 2016 at 06:06:37PM +0100, wm4 wrote: > > > > On Wed, 23 Mar 2016 17:51:11 +0100 > > > > Michael Niedermayer wrote: > > > > > > > > > On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > > > > > > It's not practical to keep this with the new decode API. > > > > > > --- > > > > > > ffmpeg.c | 7 --- > > > > > > ffmpeg.h | 1 - > > > > > > 2 files changed, 8 deletions(-) > > > > > > > > > > its not practical in ffmpeg.c but libavcodec should be able to easily > > > > > check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES > > > > > doesnt decode "subframes" > > > > > Can you move this check into libavcodec ? > > > > > i think otherwise nothing would be checking for missing > > > > > AV_CODEC_CAP_SUBFRAMES anymore > > > > > > > > > > > > > What's the point of this check? > > > > > > to keep track of / detect the cases that put multiple decodable frames > > > in a packet. > > > > > > Whats the point of that? > > > there where several IIRC > > > one is that when too many frames are put in a packet > > > latency increases, another is that seeking granularity is worse > > > (if its not even one packet for the whole file ...) > > > > It's true that too many frames in a packet isn't ideal, but that's not > > what the code checks. > > > > It checks if an audio decoder not marked with AV_CODEC_CAP_SUBFRAMES > > consumes partial packets. > > yes, but a check that checks "if a decoder not marked with > AV_CODEC_CAP_SUBFRAMES consumes partial packets". Is a simple and > zero overhead way of detecting some (not all) cases where there are > multiple frames in a packet. One cant look at a sequence of bytes > that could be any arbitrary format/codec and say > "thats more than 1 frame" it requires codec specific code, > the decoders already do what is needed for some cases, for the others > there is (please correct me if iam wrong which might be) no easy > way except maybe running the parser if one exists over it but that > would not be zero overhead > > > > That might be useful as debug check in > > libavcodec or so, or by properly reviewing patches for new decoders. > > Iam not sure if i understand what you mean exactly but this somehow > sounds like an implication that people would not review patches > properly. > Thats a serious accusation if thats what was meant. Either there is > a problem then it should be pointed to very specifically so it can be > solved or such implications shouldnt be made at all. Well, I'm not sure what else this check is useful for. A new audio decoder will need explicit code to handle multiframe audio by returning the exact number of bytes parsed, instead of e.g. "return avpkt->size;". So it should be pretty obvious whether a decoder does this? Anyway, I could move this check to avcodec_decode_audio4(), would that be ok? > > also replacing automated tests by manual tests is not a good idea It's not really a fully automated test, as FATE doesn't catch these cases at all. It just assumes that (1) the developer is using ffmpeg.c, and (2) actually sees the message. > cpu time is a lot more available than man hours, and even where > a human checks things, having the computer double check it even if > only partial is a overal win, humans make mistakes and can miss/forget > things even if they try their best with the resources available to > them. > > [...] 
___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] size=0, but av_malloc(1)
On Wed, Mar 23, 2016 at 03:31:38PM +0100, Michael Niedermayer wrote: > On Tue, Mar 22, 2016 at 11:43:50PM -0700, Chris Cunningham wrote: > > Hey Group, > > > > I'm seeing an interesting pattern [0][1] where we allocate 1 byte in places > > where I would expect no allocation to be necessary. Why is this being done? > > > > [0] https://github.com/FFmpeg/FFmpeg/blob/master/libavutil/mem.c#L136 > > [1] > > https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/oggparsevorbis.c#L286 > > to add another reason why malloc(0) _can_ be a problem > malloc(0) can return NULL or non NULL whchever way libc prefers > this makes reproducing bugreports harder if the developer and user > have differening libcs > also error checks become more complex if NULL can be a non error > return value Since you already said that: if that code used malloc(0) - note I don't know about av_malloc - and it returned 0, the behaviour would be incorrect. In this code, a NULL pointer means "no metadata update" which is very different from "metadata update to empty metadata". Also even if it does not return NULL it could always return the same pointer, which could trigger yet another class of bugs (probably not in this case though). ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
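To illustrate that last point (sketch only, not FFmpeg code; pack_metadata is a made-up helper): with a minimum allocation of one byte, NULL stays reserved for "error"/"no metadata update", while a valid pointer to an empty buffer can still mean "update to empty metadata":

#include <stdint.h>
#include <string.h>
#include "libavutil/mem.h"

/* Returns NULL only on allocation failure; a size == 0 payload still
 * yields a valid, unique pointer. */
static uint8_t *pack_metadata(const uint8_t *src, size_t size)
{
    uint8_t *buf = av_malloc(size ? size : 1); /* av_malloc(0) might return NULL */
    if (!buf)
        return NULL;
    if (size)
        memcpy(buf, src, size);
    return buf;
}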
Re: [FFmpeg-devel] FFmpeg code Attribution
On Wed, Mar 23, 2016 at 03:24:47PM -0400, Aaron Boxer wrote: > Hello Again, > > I took a look at the FFmpeg j2k code. Now, I've worked with OpenJPEG for > many years, and I would say at least 20% of the code in FFmpeg was either > directly copied from OpenJPEG, or is very similar to OpenJPEG code. Similarity is truly and utterly irrelevant, where it came from matters. > I think the people who did the work on the FFmpeg codec would readily admit > that they copied a certain amount directly from the other project. > > So, I think that the OpenJPEG BSD license should appear on those files with > copied code from OpenJPEG, to comply with the BSD license. I can list some > of the files (there aren't many) if people are interested. You should very much ask the authors. Adding copyright/license from someone who is not the author is wrong and highly objectionable (and at best marginally better than not attributing code copied), and similarity is not enough to conclusively show anything in most cases. It is not that rare that there are only a few ways to do something and it will of course look the same. If it is copied the authors will hopefully admit to it (and hopefully be more careful about attribution in the future). ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH v2 4/8] ffmpeg: remove sub-frame warning
On Wed, Mar 23, 2016 at 09:51:46PM +0100, wm4 wrote: > On Wed, 23 Mar 2016 21:36:35 +0100 > Michael Niedermayer wrote: > > > On Wed, Mar 23, 2016 at 08:44:41PM +0100, wm4 wrote: > > > On Wed, 23 Mar 2016 18:37:25 +0100 > > > Michael Niedermayer wrote: > > > > > > > On Wed, Mar 23, 2016 at 06:06:37PM +0100, wm4 wrote: > > > > > On Wed, 23 Mar 2016 17:51:11 +0100 > > > > > Michael Niedermayer wrote: > > > > > > > > > > > On Wed, Mar 23, 2016 at 02:02:11PM +0100, wm4 wrote: > > > > > > > It's not practical to keep this with the new decode API. > > > > > > > --- > > > > > > > ffmpeg.c | 7 --- > > > > > > > ffmpeg.h | 1 - > > > > > > > 2 files changed, 8 deletions(-) > > > > > > > > > > > > its not practical in ffmpeg.c but libavcodec should be able to > > > > > > easily > > > > > > check that a decoder which doesnt declare AV_CODEC_CAP_SUBFRAMES > > > > > > doesnt decode "subframes" > > > > > > Can you move this check into libavcodec ? > > > > > > i think otherwise nothing would be checking for missing > > > > > > AV_CODEC_CAP_SUBFRAMES anymore > > > > > > > > > > > > > > > > What's the point of this check? > > > > > > > > to keep track of / detect the cases that put multiple decodable frames > > > > in a packet. > > > > > > > > Whats the point of that? > > > > there where several IIRC > > > > one is that when too many frames are put in a packet > > > > latency increases, another is that seeking granularity is worse > > > > (if its not even one packet for the whole file ...) > > > > > > It's true that too many frames in a packet isn't ideal, but that's not > > > what the code checks. > > > > > > It checks if an audio decoder not marked with AV_CODEC_CAP_SUBFRAMES > > > consumes partial packets. > > > > yes, but a check that checks "if a decoder not marked with > > AV_CODEC_CAP_SUBFRAMES consumes partial packets". Is a simple and > > zero overhead way of detecting some (not all) cases where there are > > multiple frames in a packet. One cant look at a sequence of bytes > > that could be any arbitrary format/codec and say > > "thats more than 1 frame" it requires codec specific code, > > the decoders already do what is needed for some cases, for the others > > there is (please correct me if iam wrong which might be) no easy > > way except maybe running the parser if one exists over it but that > > would not be zero overhead > > > > > > > That might be useful as debug check in > > > libavcodec or so, or by properly reviewing patches for new decoders. > > > > Iam not sure if i understand what you mean exactly but this somehow > > sounds like an implication that people would not review patches > > properly. > > Thats a serious accusation if thats what was meant. Either there is > > a problem then it should be pointed to very specifically so it can be > > solved or such implications shouldnt be made at all. > > Well, I'm not sure what else this check is useful for. A new audio > decoder will need explicit code to handle multiframe audio by returning > the exact number of bytes parsed, instead of e.g. > "return avpkt->size;". So it should be pretty obvious whether a decoder > does this? yes, though there may be corner cases where it has to do that and it might be more robust. > > Anyway, I could move this check to avcodec_decode_audio4(), would that > be ok? yes and thanks! [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB I do not agree with what you have to say, but I'll defend to the death your right to say it. 
-- Voltaire signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
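A rough sketch of the kind of check being discussed (this is not the patch that was agreed on above, just an illustration of where it could hook into the audio decode path; "consumed" stands for the byte count the decoder reported):

#include "libavcodec/avcodec.h"
#include "libavutil/log.h"

static void warn_partial_packet(AVCodecContext *avctx, const AVPacket *avpkt,
                                int consumed)
{
    if (consumed >= 0 && consumed < avpkt->size &&
        !(avctx->codec->capabilities & AV_CODEC_CAP_SUBFRAMES))
        av_log(avctx, AV_LOG_WARNING,
               "Multiple frames in a packet from a decoder that does not "
               "declare AV_CODEC_CAP_SUBFRAMES\n");
}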
Re: [FFmpeg-devel] FFmpeg code Attribution
Reimar Döffinger gmx.de> writes: > If it is copied the authors will hopefully admit to it > (and hopefully be more careful about attribution in the > future). I of course agree with what you wrote but the original author of the jpeg 2000 codec was unreachable a few weeks after he had submitted the code and that was nearly ten years ago... Do we have the jpeg 2000 specification for comparison? Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] GSoC again
On 23.03.2016 17:09, Gerion Entrup wrote: > 3. Implement adaptive switching. > > A note on the 3rd point: I'm absolutely not sure what amount of work this is > (maybe you could comment on it) and what the best place would be to implement > it (including whether this whole feature is meaningful). I would say this could > be either in the demuxer itself or in ffplay (can a muxer decide such a thing, > like bandwidth?). I think it is not possible to do this inside the demuxer. Different qualities may have different metadata such as width, height, and maybe a different codec. HLS exposes all qualities as separate streams and the player may choose the one it needs. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] FFmpeg code Attribution
On 23 March 2016 at 21:22, Carl Eugen Hoyos wrote: > Reimar Döffinger gmx.de> writes: > > > If it is copied the authors will hopefully admit to it > > (and hopefully be more careful about attribution in the > > future). > > I of course agree with what you wrote but the original > author of the jpeg 2000 codec was unreachable a few > weeks after he had submitted the code and that was > nearly ten years ago... > > Do we have the jpeg 2000 specification for comparison? > I've uploaded all jpeg2000 specs I have as a .tar.gz archive here: https://0x0.st/KGO.gz 8 megabytes is a bit too big for the mailing list Link will probably be around for around a year IIRC most of the jpeg2000 code in FFmpeg was written during a summer of code in 2007 or so. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] FFmpeg code Attribution
On Wed, Mar 23, 2016 at 3:59 PM, Michael Bradshaw wrote: > On Wed, Mar 23, 2016 at 12:24 PM, Aaron Boxer wrote: > > Hello Again, > > > > I took a look at the FFmpeg j2k code. Now, I've worked with OpenJPEG for > > many years, and I would say at least 20% of the code in FFmpeg was either > > directly copied from OpenJPEG, or is very similar to OpenJPEG code. > > > > I think the people who did the work on the FFmpeg codec would readily > admit > > that they copied a certain amount directly from the other project. > > > > So, I think that the OpenJPEG BSD license should appear on those files > with > > copied code from OpenJPEG, to comply with the BSD license. I can list > some > > of the files (there aren't many) if people are interested. > > Go ahead and list the files and sections of code you're concerned > about (or the commits that introduced the code). Chances are that's > going to come up in the thread anyway, so might as well do it from the > get-go. > Here is the most obvious example: j2kenc.c (from FFmpeg) static int getnmsedec_sig(int x, int bpno) { if (bpno > NMSEDEC_FRACBITS) return lut_nmsedec_sig[(x >> (bpno - NMSEDEC_FRACBITS)) & ((1 << NMSEDEC_BITS) - 1)]; return lut_nmsedec_sig0[x & ((1 << NMSEDEC_BITS) - 1)]; } static int getnmsedec_ref(int x, int bpno) { if (bpno > NMSEDEC_FRACBITS) return lut_nmsedec_ref[(x >> (bpno - NMSEDEC_FRACBITS)) & ((1 << NMSEDEC_BITS) - 1)]; return lut_nmsedec_ref0[x & ((1 << NMSEDEC_BITS) - 1)]; } // t1.c (from OpenJPEG) int16_t opj_t1_getnmsedec_sig(uint32_t x, uint32_t bitpos) { if (bitpos > 0) { return lut_nmsedec_sig[(x >> (bitpos)) & ((1 << T1_NMSEDEC_BITS) - 1)]; } return lut_nmsedec_sig0[x & ((1 << T1_NMSEDEC_BITS) - 1)]; } int16_t opj_t1_getnmsedec_ref(uint32_t x, uint32_t bitpos) { if (bitpos > 0) { return lut_nmsedec_ref[(x >> (bitpos)) & ((1 << T1_NMSEDEC_BITS) - 1)]; } return lut_nmsedec_ref0[x & ((1 << T1_NMSEDEC_BITS) - 1)]; } I will post more later. But, this example alone requires that the attribution to OpenJPEG be mentioned somewhere, as per the BSD license: * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright *notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright *notice, this list of conditions and the following disclaimer in the *documentation and/or other materials provided with the distribution. * What we seem to have is: an FFmpeg codec that was copied partially from OpenJPEG, but has fewer features than the original: encoding is poor, as a few have mentioned. Back to my original point, what is the reasoning not to just switch to OpenJPEG? Of course it is your absolute right to create your own, but it seems like a waste of precious open source resources. Yes, the same logic could be applied to my own project, but it is a little more complicated in my case. HTH, Aaron > ___ > ffmpeg-devel mailing list > ffmpeg-devel@ffmpeg.org > http://ffmpeg.org/mailman/listinfo/ffmpeg-devel > ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH]lavc/flicvideo: Implement padding in COPY chunks.
Hi! Attached patch fixes ticket #5364 for me. Please review, Carl Eugen diff --git a/libavcodec/flicvideo.c b/libavcodec/flicvideo.c index 3e0573a..7535a40 100644 --- a/libavcodec/flicvideo.c +++ b/libavcodec/flicvideo.c @@ -423,7 +423,7 @@ static int flic_decode_frame_8BPP(AVCodecContext *avctx, case FLI_COPY: /* copy the chunk (uncompressed frame) */ -if (chunk_size - 6 != s->avctx->width * s->avctx->height) { +if (chunk_size - 6 != FFALIGN(s->avctx->width, 4) * s->avctx->height) { av_log(avctx, AV_LOG_ERROR, "In chunk FLI_COPY : source data (%d bytes) " \ "has incorrect size, skipping chunk\n", chunk_size - 6); bytestream2_skip(&g2, chunk_size - 6); @@ -432,6 +432,8 @@ static int flic_decode_frame_8BPP(AVCodecContext *avctx, y_ptr += s->frame->linesize[0]) { bytestream2_get_buffer(&g2, &pixels[y_ptr], s->avctx->width); +if (s->avctx->width & 3) +bytestream2_skip(&g2, 4 - (s->avctx->width & 3)); } } break; @@ -711,7 +713,7 @@ static int flic_decode_frame_15_16BPP(AVCodecContext *avctx, case FLI_COPY: case FLI_DTA_COPY: /* copy the chunk (uncompressed frame) */ -if (chunk_size - 6 > (unsigned int)(s->avctx->width * s->avctx->height)*2) { +if (chunk_size - 6 > (unsigned int)(FFALIGN(s->avctx->width, 2) * s->avctx->height)*2) { av_log(avctx, AV_LOG_ERROR, "In chunk FLI_COPY : source data (%d bytes) " \ "bigger than image, skipping chunk\n", chunk_size - 6); bytestream2_skip(&g2, chunk_size - 6); @@ -727,6 +729,8 @@ static int flic_decode_frame_15_16BPP(AVCodecContext *avctx, pixel_ptr += 2; pixel_countdown--; } +if (s->avctx->width & 1) +bytestream2_skip(&g2, 2); } } break; ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
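For reference, FFmpeg's FFALIGN() macro rounds up to the next multiple of the alignment, which is why the patch compares the chunk size against FFALIGN(width, 4) * height: each 8bpp FLI_COPY row is stored padded to a multiple of 4 bytes. A tiny standalone illustration (the width is arbitrary, not taken from the ticket's sample):

#include <stdio.h>
#include "libavutil/common.h"

int main(void)
{
    int width  = 70;                /* arbitrary example width   */
    int padded = FFALIGN(width, 4); /* ((70 + 4 - 1) & ~3) == 72 */

    printf("%d pixels per row, %d bytes stored, %d padding byte(s)\n",
           width, padded, padded - width);
    return 0;
}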
[FFmpeg-devel] [PATCH v2 3/3] Tee muxer improvement (handling slave failure)
Adds per slave option 'onfail' to the tee muxer allowing an output to fail,so other slave outputs can continue. Signed-off-by: Jan Sebechlebsky --- doc/muxers.texi | 14 + libavformat/tee.c | 91 +-- 2 files changed, 96 insertions(+), 9 deletions(-) diff --git a/doc/muxers.texi b/doc/muxers.texi index c36c72c..6fa9054 100644 --- a/doc/muxers.texi +++ b/doc/muxers.texi @@ -1367,6 +1367,12 @@ Select the streams that should be mapped to the slave output, specified by a stream specifier. If not specified, this defaults to all the input streams. You may use multiple stream specifiers separated by commas (@code{,}) e.g.: @code{a:0,v} + +@item onfail +Specify behaviour on output failure. This can be set to either 'abort' (which is +default) or 'ignore'. 'abort' will cause whole process to fail in case of failure +on this slave output. 'ignore' will ignore failure on this output, so other outputs +will continue without being affected. @end table @subsection Examples @@ -1381,6 +1387,14 @@ ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a @end example @item +As above, but continue streaming even if output to local file fails +(for example local drive fills up): +@example +ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a + "[onfail=ignore]archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/" +@end example + +@item Use @command{ffmpeg} to encode the input, and send the output to three different destinations. The @code{dump_extra} bitstream filter is used to add extradata information to all the output video diff --git a/libavformat/tee.c b/libavformat/tee.c index e43ef08..a937efc 100644 --- a/libavformat/tee.c +++ b/libavformat/tee.c @@ -29,10 +29,20 @@ #define MAX_SLAVES 16 +typedef enum { +ON_SLAVE_FAILURE_ABORT = 1, +ON_SLAVE_FAILURE_IGNORE = 2 +} SlaveFailurePolicy; + +#define DEFAULT_SLAVE_FAILURE_POLICY ON_SLAVE_FAILURE_ABORT + typedef struct { AVFormatContext *avf; AVBitStreamFilterContext **bsfs; ///< bitstream filters per stream +SlaveFailurePolicy on_fail; +unsigned char is_alive; + /** map from input to output streams indexes, * disabled output streams are set to -1 */ int *stream_map; @@ -41,6 +51,7 @@ typedef struct { typedef struct TeeContext { const AVClass *class; unsigned nb_slaves; +unsigned nb_alive; TeeSlave slaves[MAX_SLAVES]; } TeeContext; @@ -135,6 +146,18 @@ end: return ret; } +static inline int parse_slave_failure_policy_option(const char * opt) +{ +if (!opt) { +return DEFAULT_SLAVE_FAILURE_POLICY; +} else if (!av_strcasecmp("abort",opt)) { +return ON_SLAVE_FAILURE_ABORT; +} else if (!av_strcasecmp("ignore",opt)) { +return ON_SLAVE_FAILURE_IGNORE; +} +return 0; +} + static void close_slave(TeeSlave* tee_slave) { AVFormatContext * avf; @@ -176,7 +199,7 @@ static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) AVDictionary *options = NULL; AVDictionaryEntry *entry; char *filename; -char *format = NULL, *select = NULL; +char *format = NULL, *select = NULL, *on_fail = NULL; AVFormatContext *avf2 = NULL; AVStream *st, *st2; int stream_count; @@ -196,6 +219,17 @@ static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) STEAL_OPTION("f", format); STEAL_OPTION("select", select); +STEAL_OPTION("onfail", on_fail); + +tee_slave->on_fail = (SlaveFailurePolicy) parse_slave_failure_policy_option(on_fail); +if (!tee_slave->on_fail) { +av_log(avf, AV_LOG_ERROR, +"Invalid onfail option value, valid options are 'abort' and 'ignore'\n"); +ret = AVERROR(EINVAL); +/// Set failure behaviour to abort, so invalid option error will not 
be ignored +tee_slave->on_fail = ON_SLAVE_FAILURE_ABORT; +goto end; +} ret = avformat_alloc_output_context2(&avf2, NULL, format, filename); if (ret < 0) @@ -345,6 +379,7 @@ end: } av_free(format); av_free(select); +av_free(on_fail); av_dict_free(&options); av_freep(&tmp_select); return ret; @@ -374,6 +409,31 @@ static void log_slave(TeeSlave *slave, void *log_ctx, int log_level) } } +static int tee_process_slave_failure(AVFormatContext * avf,unsigned slave_idx, +int err_n,unsigned char needs_closing) +{ +TeeContext *tee = avf->priv_data; +TeeSlave *tee_slave = &tee->slaves[slave_idx]; + +tee_slave->is_alive = 0; +tee->nb_alive--; + +if (needs_closing) +close_slave(tee_slave); + +if ( !tee->nb_alive ) { +av_log(avf, AV_LOG_ERROR, "All tee outputs failed.\n"); +return err_n; +} else if (tee_slave->on_fail == ON_SLAVE_FAILURE_ABORT ) { +av_log(avf, AV_LOG_ERROR, "Slave muxer #%u failed,aborting.\n", slave_idx + 1); +return err_n; +} else { +a
[FFmpeg-devel] [PATCH v2 2/3] Fix leaks in tee muxer when open_slave fails
Calling close_slave in case error is to be returned from open_slave will free allocated resources. Since failure can happen before bsfs array is initialized, close_slave must check that bsfs is not NULL before accessing tee_slave->bsfs[i] element. Signed-off-by: Jan Sebechlebsky --- libavformat/tee.c | 22 ++ 1 file changed, 14 insertions(+), 8 deletions(-) diff --git a/libavformat/tee.c b/libavformat/tee.c index 09551b3..e43ef08 100644 --- a/libavformat/tee.c +++ b/libavformat/tee.c @@ -141,12 +141,14 @@ static void close_slave(TeeSlave* tee_slave) unsigned i; avf = tee_slave->avf; -for (i=0; i < avf->nb_streams; ++i) { -AVBitStreamFilterContext *bsf_next, *bsf = tee_slave->bsfs[i]; -while (bsf) { -bsf_next = bsf->next; -av_bitstream_filter_close(bsf); -bsf = bsf_next; +if (tee_slave->bsfs) { +for (i=0; i < avf->nb_streams; ++i) { +AVBitStreamFilterContext *bsf_next, *bsf = tee_slave->bsfs[i]; +while (bsf) { +bsf_next = bsf->next; +av_bitstream_filter_close(bsf); +bsf = bsf_next; +} } } av_freep(&tee_slave->stream_map); @@ -198,6 +200,7 @@ static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) ret = avformat_alloc_output_context2(&avf2, NULL, format, filename); if (ret < 0) goto end; +tee_slave->avf = avf2; av_dict_copy(&avf2->metadata, avf->metadata, 0); avf2->opaque = avf->opaque; avf2->io_open = avf->io_open; @@ -277,7 +280,6 @@ static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) goto end; } -tee_slave->avf = avf2; tee_slave->bsfs = av_calloc(avf2->nb_streams, sizeof(TeeSlave)); if (!tee_slave->bsfs) { ret = AVERROR(ENOMEM); @@ -292,7 +294,8 @@ static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) av_log(avf, AV_LOG_ERROR, "Specifier separator in '%s' is '%c', but only characters '%s' " "are allowed\n", entry->key, *spec, slave_bsfs_spec_sep); -return AVERROR(EINVAL); +ret = AVERROR(EINVAL); +goto end; } spec++; /* consume separator */ } @@ -337,6 +340,9 @@ static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) } end: +if ( ret < 0 ){ +close_slave(tee_slave); +} av_free(format); av_free(select); av_dict_free(&options); -- 1.9.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH v2 1/3] Refactor close_slaves function in tee muxer
Closing single slave operation is pulled out into separate function close_slave(TeeSlave*). Both close_slave and close_slaves function are moved before open_slave function. Signed-off-by: Jan Sebechlebsky --- libavformat/tee.c | 59 +++ 1 file changed, 33 insertions(+), 26 deletions(-) diff --git a/libavformat/tee.c b/libavformat/tee.c index 1390705..09551b3 100644 --- a/libavformat/tee.c +++ b/libavformat/tee.c @@ -135,6 +135,39 @@ end: return ret; } +static void close_slave(TeeSlave* tee_slave) +{ +AVFormatContext * avf; +unsigned i; + +avf = tee_slave->avf; +for (i=0; i < avf->nb_streams; ++i) { +AVBitStreamFilterContext *bsf_next, *bsf = tee_slave->bsfs[i]; +while (bsf) { +bsf_next = bsf->next; +av_bitstream_filter_close(bsf); +bsf = bsf_next; +} +} +av_freep(&tee_slave->stream_map); +av_freep(&tee_slave->bsfs); + +ff_format_io_close(avf,&avf->pb); +avformat_free_context(avf); +tee_slave->avf = NULL; +} + +static void close_slaves(AVFormatContext *avf) +{ +TeeContext *tee = avf->priv_data; +unsigned i; + +for (i = 0; i < tee->nb_slaves; i++) { +if (tee->slaves[i].is_alive) +close_slave(&tee->slaves[i]); +} +} + static int open_slave(AVFormatContext *avf, char *slave, TeeSlave *tee_slave) { int i, ret; @@ -311,32 +344,6 @@ end: return ret; } -static void close_slaves(AVFormatContext *avf) -{ -TeeContext *tee = avf->priv_data; -AVFormatContext *avf2; -unsigned i, j; - -for (i = 0; i < tee->nb_slaves; i++) { -avf2 = tee->slaves[i].avf; - -for (j = 0; j < avf2->nb_streams; j++) { -AVBitStreamFilterContext *bsf_next, *bsf = tee->slaves[i].bsfs[j]; -while (bsf) { -bsf_next = bsf->next; -av_bitstream_filter_close(bsf); -bsf = bsf_next; -} -} -av_freep(&tee->slaves[i].stream_map); -av_freep(&tee->slaves[i].bsfs); - -ff_format_io_close(avf2, &avf2->pb); -avformat_free_context(avf2); -tee->slaves[i].avf = NULL; -} -} - static void log_slave(TeeSlave *slave, void *log_ctx, int log_level) { int i; -- 1.9.1 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] FFmpeg code Attribution
On 23 March 2016 at 22:35, Aaron Boxer wrote: > Back to my original point, what is the reasoning not to just switch to > OpenJPEG? Both OpenJPEG 1 and 2 are already supported as external libraries in FFmpeg. What do you mean by switching to OpenJPEG? ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
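For completeness, trying them only needs the external library enabled at configure time and the codec selected explicitly; something along these lines (file names are placeholders):

./configure --enable-libopenjpeg
ffmpeg -c:v libopenjpeg -i input.jp2 output.png      # force the libopenjpeg decoder
ffmpeg -i input.mov -c:v libopenjpeg output.mov      # encode JPEG 2000 via libopenjpeg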
Re: [FFmpeg-devel] [PATCH] Implement hdcd filtering
Benjamin St gmail.com> writes: > This patch applies filtering/decoding for HDCDs (see > ticket #4441) I tried to test with the files sample.cdda.flac and sample.hdcd.flac attached in the ticket. The output is different, and one possible reason is the use of floating point arithmetic in the filter. Since the input and output formats are integer formats, can't the use of floats be avoided? Carl Eugen ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
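A minimal sketch of the kind of fixed-point arithmetic the review is asking about, assuming the gain is expressed in Q15; this only illustrates the suggestion and is not code from the patch:

#include <stdint.h>

/* Apply a Q15 gain (32768 == unity) to an integer sample with rounding;
 * the 64-bit intermediate avoids overflow for 16- or 20-bit samples. */
static int32_t apply_gain_q15(int32_t sample, int32_t gain_q15)
{
    return (int32_t)(((int64_t)sample * gain_q15 + (1 << 14)) >> 15);
}

With the gain tables stored as pre-scaled integers, the filter could stay entirely in integer arithmetic between its integer input and output formats.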
Re: [FFmpeg-devel] [PATCH] lavf/segment: support automatic bitstream filtering
On Wed, Mar 23, 2016 at 09:29:38AM -0500, Rodger Combs wrote: > Most useful for MPEG-TS. Works by having the underlying muxer configure the > bitstream filters, then moving them to our own AVStreams. > --- > libavformat/segment.c | 43 ++- > 1 file changed, 38 insertions(+), 5 deletions(-) this seems to break fate-filter-hls [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB The greatest way to live with honor in this world is to be what we pretend to be. -- Socrates signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] FFmpeg code Attribution
On Wed, Mar 23, 2016 at 9:48 PM, Ricardo Constantino wrote: > On 23 March 2016 at 22:35, Aaron Boxer wrote: > > Back to my original point, what is the reasoning not to just switch to > > OpenJPEG? > Both OpenJPEG 1 and 2 are already supported as external libraries in > FFmpeg. What do you mean by switching to OpenJPEG? > I mean abandoning the FFmpeg j2k codec, which seems to be a less-featureful copy of OpenJPEG, and putting resources into fixing OpenJPEG issues and making it better. Since OpenJPEG has a much broader user community, this would help both FFmpeg users and many others. I'm not going to post any more on this topic - it is getting a bit boring and seems to be raising some people's blood pressure. But I think the code attribution issue needs to be dealt with by someone. Bye, Aaron ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH 1/2] Refactor libavutil/parseutils.c
On Wed, Mar 23, 2016 at 03:32:56PM +, Petru Rares Sincraian wrote: > All tests were in the main method which produces a long main. Now, each test > is in his own method. > > I think this produces a more clear code and follows more with the main > priority of FFmpeg "simplicity and small code size" > --- > libavutil/parseutils.c | 338 > + > 1 file changed, 175 insertions(+), 163 deletions(-) applied for reference the change without whitespace differences: diff --git a/libavutil/parseutils.c b/libavutil/parseutils.c index 0097bec..43bd4eb 100644 --- a/libavutil/parseutils.c +++ b/libavutil/parseutils.c @@ -749,9 +749,7 @@ static uint32_t av_get_random_seed_deterministic(void) return randomv = randomv * 1664525 + 1013904223; } -int main(void) -{ -printf("Testing av_parse_video_rate()\n"); +static void test_av_parse_video_rate(void) { int i; static const char *const rates[] = { @@ -791,7 +789,7 @@ int main(void) } } -printf("\nTesting av_parse_color()\n"); +static void test_av_parse_color(void) { int i; uint8_t rgba[4]; @@ -845,7 +843,7 @@ int main(void) } } -printf("\nTesting av_small_strptime()\n"); +static void test_av_small_strptime(void) { int i; struct tm tm = { 0 }; @@ -874,7 +872,7 @@ int main(void) } } -printf("\nTesting av_parse_time()\n"); +static void test_av_parse_time(void) { int i; int64_t tv; @@ -924,6 +922,20 @@ int main(void) } } +int main(void) +{ +printf("Testing av_parse_video_rate()\n"); +test_av_parse_video_rate(); + +printf("\nTesting av_parse_color()\n"); +test_av_parse_color(); + +printf("\nTesting av_small_strptime()\n"); +test_av_small_strptime(); + +printf("\nTesting av_parse_time()\n"); +test_av_parse_time(); + return 0; } [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB Democracy is the form of government in which you can choose your dictator signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] [PATCH 2/2] Added more tests to libavutil/parseutils.c
On Wed, Mar 23, 2016 at 03:33:02PM +, Petru Rares Sincraian wrote: > > - Added tests for av_find_info_tag(). > - Added test for av_get_known_color_name() > --- > libavutil/parseutils.c| 37 > tests/ref/fate/parseutils | 151 > ++ > 2 files changed, 188 insertions(+) > > diff --git a/libavutil/parseutils.c b/libavutil/parseutils.c > index 43bd4eb..a782ef6 100644 > --- a/libavutil/parseutils.c > +++ b/libavutil/parseutils.c > @@ -922,6 +922,38 @@ static void test_av_parse_time(void) > } > } > > +static void test_av_get_known_color_name(void) > +{ > +int i; > +const uint8_t *rgba; > +const char *color; > + > +for (i = 0; i < FF_ARRAY_ELEMS(color_table); ++i) { > +color = av_get_known_color_name(i, &rgba); > +if (color) { > +printf("%s -> R(%d) G(%d) B(%d) A(%d)\n", > +color, rgba[0], rgba[1], rgba[2], rgba[3]); > +} > +else that code looks oddly formated > +printf("Color ID: %d not found\n", i); > +} > +} > + > +static void test_av_find_info_tag(void) > +{ > +char args[] = "?tag1=val1&tag2=val2&tag3=val3&tag41=value > 41&tag42=random1"; > +const char *tags[] = {"tag1", "tag2", "tag3", "tag4", "tag41", "41", > "random1"}; static const [...] -- Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB No human being will ever know the Truth, for even if they happen to say it by chance, they would not even known they had done so. -- Xenophanes signature.asc Description: Digital signature ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
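For clarity, the two review comments above amount to something like the following corrected fragments (a sketch, not the actual v2 of the patch):

/* cuddled else, matching the surrounding style */
if (color) {
    printf("%s -> R(%d) G(%d) B(%d) A(%d)\n",
           color, rgba[0], rgba[1], rgba[2], rgba[3]);
} else {
    printf("Color ID: %d not found\n", i);
}

/* a lookup table that never changes should be static const */
static const char *const tags[] = {
    "tag1", "tag2", "tag3", "tag4", "tag41", "41", "random1"
};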
Re: [FFmpeg-devel] [PATCH] Add new test for libavutil/mastering_display_metadata
On Wed, Mar 23, 2016 at 02:37:31PM +, Petru Rares Sincraian wrote: > If I understand well you want me to do a test that given a multimedia file > test if it loads works ok. If so how can I get a simple multimedia file? > i dont know, but there are probably 3 ways or so 1. search for such a file 2. create such a file 3. ask others where to find one for all these the first step is probably to identify what exactly it is that the file needs to have to trigger the code > I see this tests more like a unitary tests. In my opinion, test if > loading/storing to AVFrame need to be done in AVFrame. i dont mind, it just seems to me the amount of code under test is disproportionally small compared to the code using to test it > > Thanks :) > > From: ffmpeg-devel on behalf of Michael > Niedermayer > Sent: Tuesday, March 22, 2016 4:17 PM > To: FFmpeg development discussions and patches > Subject: Re: [FFmpeg-devel] [PATCH] Add new test for > libavutil/mastering_display_metadata > > On Tue, Mar 22, 2016 at 01:30:11PM +, Petru Rares Sincraian wrote: > > > > Hi there, > > > > I added a set of tests for libavutil/mastering_display_metadata module. I > > attached the patch in this message. > > > > > > Thanks, > > Petru Rares. > > > libavutil/Makefile|1 > > libavutil/mastering_display_metadata.c| 68 > > ++ > > libavutil/mastering_display_metadata.h|1 > > tests/fate/libavutil.mak |4 + > > tests/ref/fate/mastering_display_metadata |1 > > 5 files changed, 74 insertions(+), 1 deletion(-) > > 3125db1eb98ac3ad3393e88613a90af79ae812b1 > > 0001-Add-selftest-to-libavutil-mastering_display_metadata.patch > > From 1e502305f098c9aef852e19e91ddee831cc5ebaf Mon Sep 17 00:00:00 2001 > > From: Petru Rares Sincraian > > Date: Tue, 22 Mar 2016 11:39:08 +0100 > > Subject: [PATCH] Add selftest to libavutil/mastering_display_metadata > > > > This commit adds tests for functions of > > libavutil/mastering_display_metadata.c > > --- > > libavutil/Makefile| 1 + > > libavutil/mastering_display_metadata.c| 68 > > +++ > > libavutil/mastering_display_metadata.h| 1 + > > tests/fate/libavutil.mak | 4 ++ > > tests/ref/fate/mastering_display_metadata | 0 > > 5 files changed, 74 insertions(+) > > create mode 100644 tests/ref/fate/mastering_display_metadata > > > > diff --git a/libavutil/Makefile b/libavutil/Makefile > > index 58df75a..3d89335 100644 > > --- a/libavutil/Makefile > > +++ b/libavutil/Makefile > > @@ -198,6 +198,7 @@ TESTPROGS = adler32 > > \ > > parseutils \ > > pixdesc \ > > pixelutils \ > > +mastering_display_metadata \ > > random_seed \ > > rational\ > > ripemd \ > > diff --git a/libavutil/mastering_display_metadata.c > > b/libavutil/mastering_display_metadata.c > > index e1683e5..8c264a2 100644 > > --- a/libavutil/mastering_display_metadata.c > > +++ b/libavutil/mastering_display_metadata.c > > @@ -41,3 +41,71 @@ AVMasteringDisplayMetadata > > *av_mastering_display_metadata_create_side_data(AVFra > > > > return (AVMasteringDisplayMetadata *)side_data->data; > > } > > + > > +#ifdef TEST > > + > > +static int check_alloc(void) > > +{ > > +int result = 0; > > +AVMasteringDisplayMetadata *original = > > av_mastering_display_metadata_alloc(); > > + > > +if (original == NULL) { > > +printf("Failed to allocate display metadata\n"); > > +result = 1; > > +} > > + > > +if (original) > > +av_freep(original); > > + > > +return result; > > +} > > + > > +static int check_create_side_data(void) > > +{ > > +int result = 0; > > +AVFrame *frame = av_frame_alloc(); > > +AVMasteringDisplayMetadata *metadata; > > +AVFrameSideData *side_data; > 
> + > > +if (frame == NULL) { > > +printf("Failed to allocate frame"); > > +result = 1; > > +goto end; > > +} > > + > > +metadata = av_mastering_display_metadata_create_side_data(frame); > > +if (metadata == NULL) { > > +printf("Failed to create display metadata frame side data"); > > +result = 1; > > +goto end; > > +} > > + > > +side_data = av_frame_get_side_data(frame, > > AV_FRAME_DATA_MASTERING_DISPLAY_METADATA); > > +if (side_data == NULL) { > > +printf("Failed to get frame side data"); > > +result = 1; > > +goto en
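In the spirit of the review, a hedged sketch of a check that exercises slightly more than allocation by round-tripping a value through the side data; the helper name is made up and this is not part of the submitted patch:

#include <stdio.h>
#include "libavutil/frame.h"
#include "libavutil/mastering_display_metadata.h"

static int check_roundtrip(void)
{
    AVFrame *frame = av_frame_alloc();
    AVMasteringDisplayMetadata *md;
    AVFrameSideData *sd;
    int ret = 0;

    if (!frame)
        return 1;

    md = av_mastering_display_metadata_create_side_data(frame);
    if (!md) {
        ret = 1;
        goto end;
    }
    md->has_luminance = 1;
    md->max_luminance = av_make_q(1000, 1);

    /* Read the value back through the generic side-data API. */
    sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
    md = sd ? (AVMasteringDisplayMetadata *)sd->data : NULL;
    if (!md || !md->has_luminance || av_cmp_q(md->max_luminance, av_make_q(1000, 1))) {
        printf("max_luminance did not round-trip\n");
        ret = 1;
    }

end:
    av_frame_free(&frame);
    return ret;
}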
Re: [FFmpeg-devel] FFmpeg code Attribution
On Wed, Mar 23, 2016 at 06:35:20PM -0400, Aaron Boxer wrote: > On Wed, Mar 23, 2016 at 3:59 PM, Michael Bradshaw wrote: > > > On Wed, Mar 23, 2016 at 12:24 PM, Aaron Boxer wrote: > > > Hello Again, > > > > > > I took a look at the FFmpeg j2k code. Now, I've worked with OpenJPEG for > > > many years, and I would say at least 20% of the code in FFmpeg was either > > > directly copied from OpenJPEG, or is very similar to OpenJPEG code. > > > > > > I think the people who did the work on the FFmpeg codec would readily > > admit > > > that they copied a certain amount directly from the other project. > > > > > > So, I think that the OpenJPEG BSD license should appear on those files > > with > > > copied code from OpenJPEG, to comply with the BSD license. I can list > > some > > > of the files (there aren't many) if people are interested. > > > > Go ahead and list the files and sections of code you're concerned > > about (or the commits that introduced the code). Chances are that's > > going to come up in the thread anyway, so might as well do it from the > > get-go. > > > > Here is the most obvious example: > > j2kenc.c (from FFmpeg) > > static int getnmsedec_sig(int x, int bpno) > { > if (bpno > NMSEDEC_FRACBITS) > return lut_nmsedec_sig[(x >> (bpno - NMSEDEC_FRACBITS)) & ((1 << > NMSEDEC_BITS) - 1)]; > return lut_nmsedec_sig0[x & ((1 << NMSEDEC_BITS) - 1)]; > } > > static int getnmsedec_ref(int x, int bpno) > { > if (bpno > NMSEDEC_FRACBITS) > return lut_nmsedec_ref[(x >> (bpno - NMSEDEC_FRACBITS)) & ((1 << > NMSEDEC_BITS) - 1)]; > return lut_nmsedec_ref0[x & ((1 << NMSEDEC_BITS) - 1)]; > } > > // > > t1.c (from OpenJPEG) > > int16_t opj_t1_getnmsedec_sig(uint32_t x, uint32_t bitpos) > { > if (bitpos > 0) { > return lut_nmsedec_sig[(x >> (bitpos)) & ((1 << T1_NMSEDEC_BITS) - > 1)]; > } > > return lut_nmsedec_sig0[x & ((1 << T1_NMSEDEC_BITS) - 1)]; > } > > int16_t opj_t1_getnmsedec_ref(uint32_t x, uint32_t bitpos) > { > if (bitpos > 0) { > return lut_nmsedec_ref[(x >> (bitpos)) & ((1 << T1_NMSEDEC_BITS) - > 1)]; > } > > return lut_nmsedec_ref0[x & ((1 << T1_NMSEDEC_BITS) - 1)]; > } > > Hmm.. Looks a lot more convincing if comparing against the 2007 version of OpenJPEG, which also then gives a more precise attribution list to use: http://ghostscript.com/~tor/gs-browse/gs/openjpeg/libopenjpeg/t1.c ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
Re: [FFmpeg-devel] FFmpeg code Attribution
On Wed, Mar 23, 2016 at 10:50:06PM -0400, Aaron Boxer wrote: > On Wed, Mar 23, 2016 at 9:48 PM, Ricardo Constantino > wrote: > > > On 23 March 2016 at 22:35, Aaron Boxer wrote: > > > Back to my original point, what is the reasoning not to just switch to > > > OpenJPEG? > > Both OpenJPEG 1 and 2 are already supported as external libraries in > > FFmpeg. What do you mean by switching to OpenJPEG? > > > > > I mean abandoning the FFmpeg j2k codec, which seems to be a less-featureful > copy of OpenJPEG, > and putting resources into fixing OpenJPEG issues and making it better. > Since OpenJPEG > has a much broader user community, this would help both FFmpeg users and > many others. I wonder if you mean only the encoder or also the decoder... In general: competition and alternatives are good. Every standard should have multiple viable implementations. Of course, if much code is shared/copied, that weakens the argument a lot. However, when it comes to decoders I do consider it important for FFmpeg to have its own implementation even if it has such shortcomings. If for no other reason than that having all implementations in a shared code base, with shared concepts, makes it much easier to compare them and find common approaches, which seems very important to me and which nobody else provides. Every external codec re-invents its own way of writing bitstreams, VLC codes, ... making it hard or impossible to share code or even concepts. Plus there is a good chance that FFmpeg will still be maintained by the time quite a few of those external libraries have become unmaintained and suffered from bitrot. In some ways I think sharing test vectors may be a more important way of cooperating with other projects than sharing code. ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
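To make the shared-infrastructure point concrete, a small illustrative sketch (not tied to the j2k code) of the common bitstream reader any native decoder can reuse instead of rolling its own:

#include "libavcodec/get_bits.h"

static int read_marker(const uint8_t *buf, int size)
{
    GetBitContext gb;
    int ret;

    if (size < 2)
        return AVERROR_INVALIDDATA;
    ret = init_get_bits8(&gb, buf, size);
    if (ret < 0)
        return ret;
    return get_bits(&gb, 16);   /* e.g. a 16-bit marker or sync word */
}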
[FFmpeg-devel] [PATCH 1/4] lavc/audiotoolboxenc: remove unneeded packet metadata
This isn't necessary here, and for some reason broke only multichannel AAC encoding when a channel layout tag was set. --- libavcodec/audiotoolboxenc.c | 16 +++- 1 file changed, 3 insertions(+), 13 deletions(-) diff --git a/libavcodec/audiotoolboxenc.c b/libavcodec/audiotoolboxenc.c index cb53f2a..fde7512 100644 --- a/libavcodec/audiotoolboxenc.c +++ b/libavcodec/audiotoolboxenc.c @@ -38,7 +38,6 @@ typedef struct ATDecodeContext { int quality; AudioConverterRef converter; -AudioStreamPacketDescription pkt_desc; AVFrame in_frame; AVFrame new_in_frame; @@ -310,10 +309,6 @@ static OSStatus ffat_encode_callback(AudioConverterRef converter, UInt32 *nb_pac if (at->eof) { *nb_packets = 0; -if (packets) { -*packets = &at->pkt_desc; -at->pkt_desc.mDataByteSize = 0; -} return 0; } @@ -326,18 +321,13 @@ static OSStatus ffat_encode_callback(AudioConverterRef converter, UInt32 *nb_pac } data->mNumberBuffers = 1; -data->mBuffers[0].mNumberChannels = 0; +data->mBuffers[0].mNumberChannels = avctx->channels; data->mBuffers[0].mDataByteSize = at->in_frame.nb_samples * av_get_bytes_per_sample(avctx->sample_fmt) * avctx->channels; data->mBuffers[0].mData = at->in_frame.data[0]; -*nb_packets = (at->in_frame.nb_samples + (at->frame_size - 1)) / at->frame_size; - -if (packets) { -*packets = &at->pkt_desc; -at->pkt_desc.mDataByteSize = data->mBuffers[0].mDataByteSize; -at->pkt_desc.mVariableFramesInPacket = at->in_frame.nb_samples; -} +if (*nb_packets > at->in_frame.nb_samples) +*nb_packets = at->in_frame.nb_samples; return 0; } -- 2.7.3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
[FFmpeg-devel] [PATCH 3/4] lavc/audiotoolboxdec: support ADTS AAC input
--- libavcodec/audiotoolboxdec.c | 35 ++- 1 file changed, 34 insertions(+), 1 deletion(-) diff --git a/libavcodec/audiotoolboxdec.c b/libavcodec/audiotoolboxdec.c index 270e07f..1fa6f16 100644 --- a/libavcodec/audiotoolboxdec.c +++ b/libavcodec/audiotoolboxdec.c @@ -37,6 +37,7 @@ typedef struct ATDecodeContext { AudioStreamPacketDescription pkt_desc; AVPacket in_pkt; AVPacket new_in_pkt; +AVBitStreamFilterContext *bsf; unsigned pkt_size; int64_t last_pts; @@ -233,6 +234,8 @@ static int ffat_decode(AVCodecContext *avctx, void *data, { ATDecodeContext *at = avctx->priv_data; AVFrame *frame = data; +int pkt_size = avpkt->size; +AVPacket filtered_packet; OSStatus ret; AudioBufferList out_buffers = { @@ -245,11 +248,41 @@ static int ffat_decode(AVCodecContext *avctx, void *data, } }; +if (avctx->codec_id == AV_CODEC_ID_AAC && avpkt->size > 2 && +(AV_RB16(avpkt->data) & 0xfff0) == 0xfff0) { +int first = 0; +uint8_t *p_filtered = NULL; +int n_filtered = 0; +if (!at->bsf) { +first = 1; +if(!(at->bsf = av_bitstream_filter_init("aac_adtstoasc"))) +return AVERROR(ENOMEM); +} + +ret = av_bitstream_filter_filter(at->bsf, avctx, NULL, &p_filtered, &n_filtered, + avpkt->data, avpkt->size, 0); +if (ret >= 0 && p_filtered != avpkt->data) { +filtered_packet = *avpkt; +avpkt = &filtered_packet; +avpkt->data = p_filtered; +avpkt->size = n_filtered; +} + +if (first) { +if ((ret = ffat_set_extradata(avctx)) < 0) +return ret; +ffat_update_ctx(avctx); +out_buffers.mBuffers[0].mNumberChannels = avctx->channels; +out_buffers.mBuffers[0].mDataByteSize = av_get_bytes_per_sample(avctx->sample_fmt) * at->pkt_size * avctx->channels; +} +} + av_packet_unref(&at->new_in_pkt); if (avpkt->size) { if ((ret = av_packet_ref(&at->new_in_pkt, avpkt)) < 0) return ret; +at->new_in_pkt.data = avpkt->data; } else { at->eof = 1; } @@ -275,7 +308,7 @@ static int ffat_decode(AVCodecContext *avctx, void *data, at->last_pts = avpkt->pts; } -return avpkt->size; +return pkt_size; } static av_cold void ffat_decode_flush(AVCodecContext *avctx) -- 2.7.3 ___ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
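For readers unfamiliar with the magic constant in the check above: ADTS frames begin with a 12-bit 0xFFF syncword, so reading the first 16 bits big-endian and masking off the low 4 bits identifies them. A self-contained sketch of the same test (illustrative, not taken from the patch):

#include "libavutil/intreadwrite.h"

static int looks_like_adts(const uint8_t *buf, int size)
{
    /* 12-bit syncword 0xFFF at the very start of the packet */
    return size > 2 && (AV_RB16(buf) & 0xfff0) == 0xfff0;
}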
[FFmpeg-devel] [PATCH 4/4] lavc/audiotoolboxdec: fix a number of config and timestamp issues
- ADTS-formatted AAC didn't work - Channel layouts were never exported - Channel mappings were incorrect beyond stereo - Channel counts weren't updated after packets were decoded - Timestamps were exported incorrectly --- libavcodec/audiotoolboxdec.c | 248 --- 1 file changed, 185 insertions(+), 63 deletions(-) diff --git a/libavcodec/audiotoolboxdec.c b/libavcodec/audiotoolboxdec.c index 1fa6f16..d95fc0f 100644 --- a/libavcodec/audiotoolboxdec.c +++ b/libavcodec/audiotoolboxdec.c @@ -39,7 +39,6 @@ typedef struct ATDecodeContext { AVPacket new_in_pkt; AVBitStreamFilterContext *bsf; -unsigned pkt_size; int64_t last_pts; int eof; } ATDecodeContext; @@ -81,20 +80,126 @@ static UInt32 ffat_get_format_id(enum AVCodecID codec, int profile) } } -static void ffat_update_ctx(AVCodecContext *avctx) +static int ffat_get_channel_id(AudioChannelLabel label) +{ +if (label == 0) +return -1; +else if (label <= kAudioChannelLabel_LFEScreen) +return label - 1; +else if (label <= kAudioChannelLabel_RightSurround) +return label + 4; +else if (label <= kAudioChannelLabel_CenterSurround) +return label + 1; +else if (label <= kAudioChannelLabel_RightSurroundDirect) +return label + 23; +else if (label <= kAudioChannelLabel_TopBackRight) +return label - 1; +else if (label < kAudioChannelLabel_RearSurroundLeft) +return -1; +else if (label <= kAudioChannelLabel_RearSurroundRight) +return label - 29; +else if (label <= kAudioChannelLabel_RightWide) +return label - 4; +else if (label == kAudioChannelLabel_LFE2) +return ff_ctzll(AV_CH_LOW_FREQUENCY_2); +else if (label == kAudioChannelLabel_Mono) +return ff_ctzll(AV_CH_FRONT_CENTER); +else +return -1; +} + +static int ffat_compare_channel_descriptions(const void* a, const void* b) +{ +const AudioChannelDescription* da = a; +const AudioChannelDescription* db = b; +return ffat_get_channel_id(da->mChannelLabel) - ffat_get_channel_id(db->mChannelLabel); +} + +static AudioChannelLayout *ffat_convert_layout(AudioChannelLayout *layout, UInt32* size) +{ +AudioChannelLayoutTag tag = layout->mChannelLayoutTag; +AudioChannelLayout *new_layout; +if (tag == kAudioChannelLayoutTag_UseChannelDescriptions) +return layout; +else if (tag == kAudioChannelLayoutTag_UseChannelBitmap) +AudioFormatGetPropertyInfo(kAudioFormatProperty_ChannelLayoutForBitmap, + sizeof(UInt32), &layout->mChannelBitmap, size); +else +AudioFormatGetPropertyInfo(kAudioFormatProperty_ChannelLayoutForTag, + sizeof(AudioChannelLayoutTag), &tag, size); +new_layout = av_malloc(*size); +if (!new_layout) { +av_free(layout); +return NULL; +} +if (tag == kAudioChannelLayoutTag_UseChannelBitmap) +AudioFormatGetProperty(kAudioFormatProperty_ChannelLayoutForBitmap, + sizeof(UInt32), &layout->mChannelBitmap, size, new_layout); +else +AudioFormatGetProperty(kAudioFormatProperty_ChannelLayoutForTag, + sizeof(AudioChannelLayoutTag), &tag, size, new_layout); +new_layout->mChannelLayoutTag = kAudioChannelLayoutTag_UseChannelDescriptions; +av_free(layout); +return new_layout; +} + +static int ffat_update_ctx(AVCodecContext *avctx) { ATDecodeContext *at = avctx->priv_data; -AudioStreamBasicDescription in_format; -UInt32 size = sizeof(in_format); +AudioStreamBasicDescription format; +UInt32 size = sizeof(format); if (!AudioConverterGetProperty(at->converter, kAudioConverterCurrentInputStreamDescription, - &size, &in_format)) { -avctx->channels = in_format.mChannelsPerFrame; -at->pkt_size = in_format.mFramesPerPacket; + &size, &format)) { +if (format.mSampleRate) +avctx->sample_rate = format.mSampleRate; +avctx->channels = 
format.mChannelsPerFrame; +avctx->channel_layout = av_get_default_channel_layout(avctx->channels); +avctx->frame_size = format.mFramesPerPacket; } -if (!at->pkt_size) -at->pkt_size = 2048; +if (!AudioConverterGetProperty(at->converter, + kAudioConverterCurrentOutputStreamDescription, + &size, &format)) { +format.mSampleRate = avctx->sample_rate; +format.mChannelsPerFrame = avctx->channels; +AudioConverterSetProperty(at->converter, + kAudioConverterCurrentOutputStreamDescription, + size, &format); +} + +if (!AudioConverterGetPropertyInfo(at->converter, kAudioConverterInputChannelLayout, +
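As a hedged usage illustration (not taken from the patch), the per-channel ids returned by ffat_get_channel_id() could be folded into an FFmpeg-style channel mask inside the same file, with 0 signalling an unmappable label so the caller can fall back to a default layout:

#include <stdint.h>
#include <AudioToolbox/AudioToolbox.h>

static uint64_t layout_mask_from_descs(const AudioChannelDescription *descs,
                                       unsigned nb)
{
    uint64_t mask = 0;
    for (unsigned i = 0; i < nb; i++) {
        int id = ffat_get_channel_id(descs[i].mChannelLabel);
        if (id < 0)
            return 0;   /* unknown label */
        mask |= 1ULL << id;
    }
    return mask;
}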
[FFmpeg-devel] [PATCH 2/4] lavc/audiotoolboxenc: fix a number of config issues
- size variables were used in a confusing way - incorrect size var use led to channel layouts not being set properly - channel layouts were incorrectly mapped for >2-channel AAC - bitrates not accepted by the encoder were discarded instead of being clamped - some minor style/indentation fixes --- libavcodec/audiotoolboxenc.c | 194 ++- 1 file changed, 172 insertions(+), 22 deletions(-) diff --git a/libavcodec/audiotoolboxenc.c b/libavcodec/audiotoolboxenc.c index fde7512..f2e3628 100644 --- a/libavcodec/audiotoolboxenc.c +++ b/libavcodec/audiotoolboxenc.c @@ -146,6 +146,86 @@ static int get_ilbc_mode(AVCodecContext *avctx) return 30; } +static av_cold int get_channel_label(int channel) +{ +uint64_t map = 1 << channel; +if (map <= AV_CH_LOW_FREQUENCY) +return channel + 1; +else if (map <= AV_CH_BACK_RIGHT) +return channel + 29; +else if (map <= AV_CH_BACK_CENTER) +return channel - 1; +else if (map <= AV_CH_SIDE_RIGHT) +return channel - 4; +else if (map <= AV_CH_TOP_BACK_RIGHT) +return channel + 1; +else if (map <= AV_CH_STEREO_RIGHT) +return -1; +else if (map <= AV_CH_WIDE_RIGHT) +return channel + 4; +else if (map <= AV_CH_SURROUND_DIRECT_RIGHT) +return channel - 23; +else if (map == AV_CH_LOW_FREQUENCY_2) +return kAudioChannelLabel_LFE2; +else +return -1; +} + +static int remap_layout(AudioChannelLayout *layout, uint64_t in_layout, int count) +{ +int i; +int c = 0; +layout->mChannelLayoutTag = kAudioChannelLayoutTag_UseChannelDescriptions; +layout->mNumberChannelDescriptions = count; +for (i = 0; i < count; i++) { +int label; +while (!(in_layout & (1 << c)) && c < 64) +c++; +if (c == 64) +return AVERROR(EINVAL); // This should never happen +label = get_channel_label(c); +layout->mChannelDescriptions[i].mChannelLabel = label; +if (label < 0) +return AVERROR(EINVAL); +c++; +} +return 0; +} + +static int get_aac_tag(uint64_t in_layout) +{ +switch (in_layout) { +case AV_CH_LAYOUT_MONO: +return kAudioChannelLayoutTag_Mono; +case AV_CH_LAYOUT_STEREO: +return kAudioChannelLayoutTag_Stereo; +case AV_CH_LAYOUT_QUAD: +return kAudioChannelLayoutTag_AAC_Quadraphonic; +case AV_CH_LAYOUT_OCTAGONAL: +return kAudioChannelLayoutTag_AAC_Octagonal; +case AV_CH_LAYOUT_SURROUND: +return kAudioChannelLayoutTag_AAC_3_0; +case AV_CH_LAYOUT_4POINT0: +return kAudioChannelLayoutTag_AAC_4_0; +case AV_CH_LAYOUT_5POINT0: +return kAudioChannelLayoutTag_AAC_5_0; +case AV_CH_LAYOUT_5POINT1: +return kAudioChannelLayoutTag_AAC_5_1; +case AV_CH_LAYOUT_6POINT0: +return kAudioChannelLayoutTag_AAC_6_0; +case AV_CH_LAYOUT_6POINT1: +return kAudioChannelLayoutTag_AAC_6_1; +case AV_CH_LAYOUT_7POINT0: +return kAudioChannelLayoutTag_AAC_7_0; +case AV_CH_LAYOUT_7POINT1_WIDE_BACK: +return kAudioChannelLayoutTag_AAC_7_1; +case AV_CH_LAYOUT_7POINT1: +return kAudioChannelLayoutTag_MPEG_7_1_C; +default: +return 0; +} +} + static av_cold int ffat_init_encoder(AVCodecContext *avctx) { ATDecodeContext *at = avctx->priv_data; @@ -170,11 +250,12 @@ static av_cold int ffat_init_encoder(AVCodecContext *avctx) .mFormatID = ffat_get_format_id(avctx->codec_id, avctx->profile), .mChannelsPerFrame = in_format.mChannelsPerFrame, }; -AudioChannelLayout channel_layout = { -.mChannelLayoutTag = kAudioChannelLayoutTag_UseChannelBitmap, -.mChannelBitmap = avctx->channel_layout, -}; -UInt32 size = sizeof(channel_layout); +UInt32 layout_size = sizeof(AudioChannelLayout) + + sizeof(AudioChannelDescription) * avctx->channels; +AudioChannelLayout *channel_layout = av_malloc(layout_size); + +if (!channel_layout) +return AVERROR(ENOMEM); if (avctx->codec_id == 
AV_CODEC_ID_ILBC) { int mode = get_ilbc_mode(avctx); @@ -186,22 +267,42 @@ static av_cold int ffat_init_encoder(AVCodecContext *avctx) if (status != 0) { av_log(avctx, AV_LOG_ERROR, "AudioToolbox init error: %i\n", (int)status); +av_free(channel_layout); return AVERROR_UNKNOWN; } -size = sizeof(UInt32); +if ((status = remap_layout(channel_layout, avctx->channel_layout, avctx->channels)) < 0) { +av_log(avctx, AV_LOG_ERROR, "Invalid channel layout\n"); +av_free(channel_layout); +return status; +} -AudioConverterSetProperty(at->converter, kAudioConverterInputChannelLayout, - size, &channel_layout); -AudioConverterSetProperty(at->converter, kAudioConverterOutputChannelLayout, - size, &channel_layout);