[FFmpeg-devel] libzvbi automatically obtain teletext durations

2016-07-13 Thread Lukas
Hello,

I am converting DVB teletext to SRT. As far as I know, libzvbi can only set
txt_duration to a constant number of milliseconds. Then every time a subtitle
is shown, its duration is the same. Can we expect an update that changes the
default libzvbi behaviour to obtain the original durations from the teletext
stream?

Best regards,
Lukas
___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel


[FFmpeg-devel] [PATCH] libavcodec/libdav1d: add libdav1d_get_format method to call ff_get_format

2019-04-10 Thread Lukas Rusak
This will allow applications to properly init the decoder in
cases where a hardware decoder is tried first and a software
decoder is tried afterwards by calling the get_format callback.

Even though there are no hardware pixel formats available,
we still need to return the software pixel format.

Tested with Kodi by checking if multithreaded software
decoding is properly activated.
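On the application side, the installed get_format callback then simply scans the offered list. Below is a minimal sketch of that selection logic, using hypothetical stand-in constants rather than the real AVPixelFormat enums and callback signature:

```c
#include <stddef.h>

/* Hypothetical stand-ins for AVPixelFormat values; real code would use
 * the enums from libavutil/pixfmt.h and the AVCodecContext.get_format
 * callback signature. */
enum {
    PIX_FMT_NONE    = -1,
    PIX_FMT_YUV420P =  0,   /* software format */
    PIX_FMT_VAAPI   = 100   /* pretend hardware format */
};

/* Sketch of an application-side get_format callback: prefer a hardware
 * format if one is offered, otherwise accept the first (software) one. */
static int app_get_format(const int *fmts)
{
    for (const int *p = fmts; *p != PIX_FMT_NONE; p++)
        if (*p == PIX_FMT_VAAPI)   /* hardware format offered */
            return *p;
    return fmts[0];                /* fall back to the software format */
}
```

With the patch, a dav1d decoder always offers at least the software format, so a pure-software setup hits the fallback branch.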
---
 libavcodec/libdav1d.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/libavcodec/libdav1d.c b/libavcodec/libdav1d.c
index 30c6eccfef..fa71834543 100644
--- a/libavcodec/libdav1d.c
+++ b/libavcodec/libdav1d.c
@@ -48,6 +48,16 @@ static const enum AVPixelFormat pix_fmt[][3] = {
    [DAV1D_PIXEL_LAYOUT_I444] = { AV_PIX_FMT_YUV444P, AV_PIX_FMT_YUV444P10, AV_PIX_FMT_YUV444P12 },
 };
 
+static enum AVPixelFormat libdav1d_get_format(AVCodecContext *avctx, const Dav1dPicture *p)
+{
+   enum AVPixelFormat pix_fmts[2], *fmt = pix_fmts;
+
+   *fmt++ = pix_fmt[p->p.layout][p->seq_hdr->hbd];
+   *fmt = AV_PIX_FMT_NONE;
+
+   return ff_get_format(avctx, pix_fmts);
+}
+
 static void libdav1d_log_callback(void *opaque, const char *fmt, va_list vl)
 {
 AVCodecContext *c = opaque;
@@ -214,7 +224,7 @@ static int libdav1d_receive_frame(AVCodecContext *c, AVFrame *frame)
 frame->linesize[2] = p->stride[1];
 
 c->profile = p->seq_hdr->profile;
-frame->format = c->pix_fmt = pix_fmt[p->p.layout][p->seq_hdr->hbd];
+frame->format = c->pix_fmt = libdav1d_get_format(c, p);
 frame->width = p->p.w;
 frame->height = p->p.h;
 if (c->width != p->p.w || c->height != p->p.h) {
-- 
2.20.1


Re: [FFmpeg-devel] [PATCH] libavcodec/libdav1d: add libdav1d_get_format method to call ff_get_format

2019-04-12 Thread Lukas Rusak
On Thu, 2019-04-11 at 21:03 +0200, Hendrik Leppkes wrote:
> On Thu, Apr 11, 2019 at 7:47 PM Peter F 
> wrote:
> > Hi,
> > 
> > Am Do., 11. Apr. 2019 um 00:50 Uhr schrieb Hendrik Leppkes
> > :
> > > On Thu, Apr 11, 2019 at 12:39 AM Lukas Rusak 
> > > wrote:
> > > > This will allow applications to properly init the decoder in
> > > > cases where a hardware decoder is tried first and and software
> > > > decoder is tried after by calling the get_format callback.
> > > > 
> > > > Even though there is no hardware pixel formats available
> > > > we still need to return the software pixel format.
> > > > 
> > > > Tested with Kodi by checking if multithreaded software
> > > > decoding is properly activated.
> > > > ---
> > > >  libavcodec/libdav1d.c | 12 +++-
> > > >  1 file changed, 11 insertions(+), 1 deletion(-)
> > > > 
> > > 
> > > This doesn't make sense to me. get_format isn't called on a wide
> > > variety of decoders, and is only useful when there is a hardware
> > > format of any kind.
> > > Please elaborate what exactly this is trying to achieve.
> > 
> > Can you elaborate on how to use ffmpeg's API properly as a client
> > to decide if a decoder is a SW decoder, and therefore how to
> > properly set up (multi-)threading, especially since it cannot be
> > changed once initialized?
> > 
> 
> I think you are approaching this from the wrong side. Even if the
> decoder would support hardware, generally hardware support is
> limited.
> So if someone plays a 10-bit H264 file, which no consumer hardware
> supports, or even worse, a very high resolution file which is beyond
> hardware limits, do you want to run single-threaded and slow? I hope
> not.
> 
> The way I solve that is to just close the decoder and re-open it if
> needed, so I can change such settings. You also fare much better if
> you accept that you might need to hard-code which codecs support
> hardware at all. Considering that's one new one every 4 years or so,
> it'll probably be fine.
> 
> - Hendrik

So why don't the software-only formats follow the API and also call
ff_get_format like is done here? That would stop applications from
having to hardcode the hardware decoding formats.

What we currently do is open the codec single-threaded to try hardware
formats. The get_format callback will return a hardware format for the
decoder if one is available; if not, we reopen the codec with
multithreading enabled. Is there a better way to do this initialization?
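The decision this flow boils down to can be sketched as a tiny helper (hypothetical, not FFmpeg API code):

```c
/* Hypothetical sketch of the reopen decision described above: after the
 * first single-threaded open, get_format tells us whether a hardware
 * format was selected; if not, the codec is reopened with threading. */
static int threads_for_reopen(int got_hw_format, int cpu_count)
{
    if (got_hw_format)
        return 1;          /* hardware path: the driver does the pipelining */
    return cpu_count;      /* software path: reopen with multithreading */
}
```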

Regards,
Lukas Rusak



[FFmpeg-devel] [PATCH] libavcodec: v4l2m2m: allow lower minimum buffer values

2020-05-07 Thread Lukas Rusak
There is no reason to enforce a high minimum. In the context
of streaming, only a few output and capture buffers are needed
for continuous playback. This also helps alleviate memory
pressure when decoding 4K media.
---
 libavcodec/v4l2_m2m.h | 2 +-
 libavcodec/v4l2_m2m_dec.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/libavcodec/v4l2_m2m.h b/libavcodec/v4l2_m2m.h
index 61cb919771..feeb162812 100644
--- a/libavcodec/v4l2_m2m.h
+++ b/libavcodec/v4l2_m2m.h
@@ -38,7 +38,7 @@
 
 #define V4L_M2M_DEFAULT_OPTS \
 { "num_output_buffers", "Number of buffers in the output context",\
-OFFSET(num_output_buffers), AV_OPT_TYPE_INT, { .i64 = 16 }, 6, INT_MAX, FLAGS }
+OFFSET(num_output_buffers), AV_OPT_TYPE_INT, { .i64 = 16 }, 2, INT_MAX, FLAGS }
 
 typedef struct V4L2m2mContext {
 char devname[PATH_MAX];
diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index c6b865fde8..b9725be377 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -262,7 +262,7 @@ static av_cold int v4l2_decode_close(AVCodecContext *avctx)
 static const AVOption options[] = {
 V4L_M2M_DEFAULT_OPTS,
 { "num_capture_buffers", "Number of buffers in the capture context",
-OFFSET(num_capture_buffers), AV_OPT_TYPE_INT, {.i64 = 20}, 20, INT_MAX, FLAGS },
+OFFSET(num_capture_buffers), AV_OPT_TYPE_INT, {.i64 = 20}, 2, INT_MAX, FLAGS },
 { NULL},
 };
 
-- 
2.24.1


[FFmpeg-devel] ARM/NEON build error on MSVC after moving to ffmpeg 4.2

2019-09-29 Thread Lukas Fellechner
Hi,

after updating ffmpeg from 4.1.x to n4.2 or n4.2.1, I can no longer build ARM
or ARM64 versions of ffmpeg with Visual Studio. I always get build errors on
NEON assembly code, like this:

C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\fft_neon.o.asm(811) : error A2034: unknown opcode: .
.section .rdata

The problem seems to be with a change in libavutil/aarch64/asm.S and
libavutil/arm/asm.S.

Changeset 41cf3e3 ("arm: Create proper .rdata sections for COFF") added the
following line, which the MSVC toolchain does not seem to understand:

#elif defined(_WIN32)

.section .rdata


Although it seems that this was added based on feedback from an MS developer, it
does not seem to work with the MSVC compiler toolchain.


Can we perhaps change it to something like this:

#elif defined(_WIN32) && !defined(_MSC_VER)

.section .rdata


Or is there some other way to get this working, without completely disabling 
NEON optimizations?

Thank you, all the best

Lukas

[FFmpeg-devel] [PATCH] Fix gas-preprocessor to translate .rdata sections for armasm and armasm64

2019-10-01 Thread Lukas Fellechner
Compiling FFmpeg with gas-preprocessor.pl and armasm or armasm64 fails since
FFmpeg 4.2.

New .rdata sections have been added in ARM NEON assembly code (e.g.
libavutil/aarch64/asm.S).
This fix allows gas-preprocessor to translate those sections to
armasm-compatible code.

Gas-preprocessor is maintained in https://github.com/FFmpeg/gas-preprocessor
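For illustration, the effect of the new substitution (mirroring the existing .rodata rule) can be modeled as a line-by-line translation; this is a hypothetical C helper, not the actual gas-preprocessor code:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical model of the translation the patch adds: a gas ".rdata"
 * section directive becomes an armasm AREA directive, just as ".rodata"
 * already does in gas-preprocessor.pl. */
static void translate_section(const char *line, char *out, size_t n)
{
    if (strstr(line, ".rodata"))
        snprintf(out, n, "AREA |.rodata|, DATA, READONLY, ALIGN=5");
    else if (strstr(line, ".rdata"))
        snprintf(out, n, "AREA |.rdata|, DATA, READONLY, ALIGN=5");
    else
        snprintf(out, n, "%s", line);   /* pass other lines through */
}
```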

---
 gas-preprocessor.pl | 1 +
 1 file changed, 1 insertion(+)

diff --git a/gas-preprocessor.pl b/gas-preprocessor.pl
index 743ce45..5d1d50d 100755
--- a/gas-preprocessor.pl
+++ b/gas-preprocessor.pl
@@ -1132,6 +1132,7 @@ sub handle_serialized_line {
 # The alignment in AREA is the power of two, just as .align in gas
 $line =~ s/\.text/AREA |.text|, CODE, READONLY, ALIGN=4, CODEALIGN/;
     $line =~ s/(\s*)(.*)\.rodata/$1AREA |.rodata|, DATA, READONLY, ALIGN=5/;
+    $line =~ s/(\s*)(.*)\.rdata/$1AREA |.rdata|, DATA, READONLY, ALIGN=5/;
 $line =~ s/\.data/AREA |.data|, DATA, ALIGN=5/;
 }
 if ($as_type eq "armasm" and $arch eq "arm") {
-- 



[FFmpeg-devel] Various build errors with armasm64 and armasm after update to FFmpeg 4.2

2019-10-01 Thread Lukas Fellechner
Hi there,

TL;DR: 

Lots of compile errors when building FFmpeg 4.2 ARM/NEON code through
gas-preprocessor and armasm(64). Build targets are Windows (UWP) ARM and ARM64.
X86 and X64 targets build fine.

Long Version:

I am building FFmpeg with an MSYS2 environment where ARM/NEON assembly code is 
compiled through gas-preprocessor and armasm/armasm64, similar to how it is 
described in this compilation guide: 
https://trac.ffmpeg.org/wiki/CompilationGuide/WinRT.

This has worked very well for quite a long time. But after upgrading to FFmpeg
4.2, the build fails. A lot of changes and additions have been made for
ARM/NEON 64-bit, and it looks like many of them are not compatible with
armasm64. First I had to fix gas-preprocessor; a patch was submitted
today. But now many other errors occur, which have to do with the ARM64
assembly code itself. I don't have any knowledge of ARM/NEON assembly code, so
I cannot help much with the investigation or fixes.

It would be great if ARM/NEON experts here (possibly those who submitted the
changes) could check the new assembly code and fix it, so that armasm64
will also accept it.

On ARM platform, I also see build errors. Interestingly, those files have not 
even changed. Only the referenced file libavutil/arm/asm.S has changed. Even 
when I undo the last changes there, the build still fails, so I am out of ideas 
here right now. When I switch back to FFmpeg 4.1.4, everything builds fine. 
Maybe there is a config change which causes those errors?

Below I will post the build errors that I see on ARM and ARM64 builds. Other
files could be affected as well (the build bails out after a few errors).

Thank you for your help.

Best Regards
Lukas


Errors with ARM64 target:

C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(29) : error A2523: operand 5: Wrong size operand
ld1 {v0.4h - v3.4h}, [x1]
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(97) : error A2523: operand 5: Wrong size operand
ld1 {v0.8b - v3.8b}, [x1]
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1972) : error A2509: operand 3: Illegal reg field
ld1 {v1.1d - v2.1d}, [x2], x3
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1973) : error A2509: operand 3: Illegal reg field
ld1 {v3.1d - v4.1d}, [x2], x3
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1974) : error A2509: operand 3: Illegal reg field
ld1 {v16.1d - v17.1d}, [x2], x3
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1975) : error A2509: operand 3: Illegal reg field
ld1 {v18.1d - v19.1d}, [x2], x3
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1976) : error A2509: operand 3: Illegal reg field
ld1 {v20.1d - v21.1d}, [x2], x3
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1977) : error A2509: operand 3: Illegal reg field
ld1 {v22.1d - v23.1d}, [x2], x3
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(1978) : error A2509: operand 3: Illegal reg field
ld1 {v24.1d - v25.1d}, [x2]
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2032) : error A2509: operand 3: Illegal reg field
st1 {v1.1d - v2.1d}, [x0], x1
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2033) : error A2509: operand 3: Illegal reg field
st1 {v3.1d - v4.1d}, [x0], x1
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2183) : error A2506: operand 6: Not enough operands
ld1 {v1.8b - v4.8b}, [x7], #32
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2184) : error A2506: operand 6: Not enough operands
ld1 {v16.8b - v19.8b}, [x7], #32
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2185) : error A2506: operand 6: Not enough operands
ld1 {v20.8b - v23.8b}, [x7], #32
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2186) : error A2509: operand 3: Illegal reg field
ld1 {v24.8b - v25.8b}, [x7]
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2432) : error A2506: operand 6: Not enough operands
ld1 {v1.8b - v4.8b}, [x7], #32
C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM64\libavcodec\aarch64\vp8dsp_neon.o.asm(2433) : error A2509: operand 4: Illegal reg field
ld1 {v5.8b - v7.8b}, [x7]
C:\Source\FFmpegInterop-lukasf

Re: [FFmpeg-devel] Various build errors with armasm64 and armasm after update to FFmpeg 4.2

2019-10-03 Thread Lukas Fellechner

> > On Oct 1, 2019, at 23:07, Lukas Fellechner  wrote:
> > 
> > This has worked very well for quite a long time. But after upgrading to 
> > FFmpeg 4.2, the build fails. A lot of changes and additions have been done 
> > for ARM/NEON 64-bit, and it looks like many of them are not compatible with 
> > armasm64. First I had to fix gas-preprocessor, a patch has been submitted 
> > today. But now, many other errors occur, which have to do with the ARM64 
> > assembly code itself. I don’t have any knowledge of ARM/NEON assembly code, 
> > so I cannot help much with the investigation or fixes.
> 
> The issue you posted about, and the other arm64 assembler issue you've
> linked, have long been fixed in libav's gas-preprocessor,
> https://git.libav.org/?p=gas-preprocessor.git;a=summary.

Thank you, I tried with that updated gas-preprocessor and indeed all compile 
errors on ARM64 went away!

> > On ARM platform, I also see build errors. Interestingly, those files have 
> > not even changed. Only the referenced file libavutil/arm/asm.S has changed. 
> > Even when I undo the last changes there, the build still fails, so I am out 
> > of ideas here right now. When I switch back to FFmpeg 4.1.4, everything 
> > builds fine. Maybe there is a config change which causes those errors?

> > Errors with ARM target (32 bit):
> > 
> > C:\Source\FFmpegInterop-lukasf\ffmpeg\Output\Windows10\ARM\libavcodec\arm\ac3dsp_arm.o.asm(72) : error A2173: syntax error in expression
> > it gt
> 
> This seems to be a new regression in armasm in MSVC 2019 16.3 (released a 
> couple weeks ago), see 
> https://developercommunity.visualstudio.com/content/problem/757709/armasm-fails-to-handle-it-instructions.html.
>  I don’t see how it would work for you with an earlier version of FFmpeg 
> though, maybe those files are around from an earlier build and you didn’t try 
> doing a full rebuild?

You are right, I was mainly testing with ARM64, so I did not notice that even
FFmpeg 4.1.4 fails to build on ARM (32 bit). I need to roll back to the older
build tools to build the ARM platform successfully for now.

> You can apply 
> https://lists.libav.org/pipermail/libav-devel/2019-October/086581.html on 
> your copy of gas-preprocessor to work around this issue, but I’m not sure if 
> it’s worth keeping the fix permanently (if the bug gets fixed in 16.4).

Thank you!

Best Regards
Lukas

Re: [FFmpeg-devel] [PATCH] Fix gas-preprocessor to translate .rdata sections for armasm and armasm64

2019-10-03 Thread Lukas Fellechner
> > Compiling FFmpeg with gas-preprocessor.pl and armasm or armasm64 fails 
> > since FFmpeg 4.2.
> > 
> > New .rdata sections have been added in ARM NEON assembly code (e.g. 
> > libavutil/aarch64/asm.S).
> > This fix allows gas-preprocessor to translate those sections to armasm 
> > compatible code.
> > 
> > Gas-preprocessor is maintained in https://github.com/FFmpeg/gas-preprocessor
> > 
> > ---
> > gas-preprocessor.pl | 1 +
> > 1 file changed, 1 insertion(+)
> 
> A fix for this issue, and a lot of other fixes as well not present in the 
> repo referenced above, exist at 
> https://git.libav.org/?p=gas-preprocessor.git;a=summary.
> 
> // Martin

Thank you, indeed the updated preprocessor fixes the build for me.
Maybe the changes from libav should be merged into the FFmpeg repository then?

Lukas

Re: [FFmpeg-devel] [PATCH] Fix gas-preprocessor to translate .rdata sections for armasm and armasm64

2019-10-04 Thread Lukas Fellechner
> > > > Compiling FFmpeg with gas-preprocessor.pl and armasm or armasm64 fails 
> > > > since FFmpeg 4.2.
> > > > 
> > > > New .rdata sections have been added in ARM NEON assembly code (e.g. 
> > > > libavutil/aarch64/asm.S).
> > > > This fix allows gas-preprocessor to translate those sections to armasm 
> > > > compatible code.
> > > > 
> > > > Gas-preprocessor is maintained in 
> > > > https://github.com/FFmpeg/gas-preprocessor
> > > > 
> > > > ---
> > > > gas-preprocessor.pl | 1 +
> > > > 1 file changed, 1 insertion(+)
> > > 
> > > A fix for this issue, and a lot of other fixes as well not present in the 
> > > repo referenced above, exist at 
> > > https://git.libav.org/?p=gas-preprocessor.git;a=summary.
> > > 
> > > // Martin
> >
> > Thank you, indeed the updated preprocessor fixes the build for me.
> > Maybe the changes from libav should be merged into the FFmpeg repository
> > then?
>
> I already merged them before Martin replied, when I saw your patch.
> I just didn't reply because of a migraine ...
> 
> Thx
> 
> Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Thank you Michael, I just compiled successfully with gas-preprocessor from the 
FFmpeg git repo.

All the best!

Lukas 


[FFmpeg-devel] [PATCH] libavformat: Add format context parameter to ff_id3v2_read_dict

2017-09-26 Thread Lukas Stabe
The format context (when not NULL) is used to store chapter information,
which was not previously supported by ff_id3v2_read_dict.

This fixes https://trac.ffmpeg.org/ticket/6558
---
 libavformat/hls.c   | 2 +-
 libavformat/id3v2.c | 4 ++--
 libavformat/id3v2.h | 6 --
 libavformat/utils.c | 2 +-
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/libavformat/hls.c b/libavformat/hls.c
index 0995345bbf..f37bfa4e4f 100644
--- a/libavformat/hls.c
+++ b/libavformat/hls.c
@@ -909,7 +909,7 @@ static void parse_id3(AVFormatContext *s, AVIOContext *pb,
    static const char id3_priv_owner_ts[] = "com.apple.streaming.transportStreamTimestamp";
 ID3v2ExtraMeta *meta;
 
-ff_id3v2_read_dict(pb, metadata, ID3v2_DEFAULT_MAGIC, extra_meta);
+ff_id3v2_read_dict(NULL, pb, metadata, ID3v2_DEFAULT_MAGIC, extra_meta);
 for (meta = *extra_meta; meta; meta = meta->next) {
 if (!strcmp(meta->tag, "PRIV")) {
 ID3v2ExtraMetaPRIV *priv = meta->data;
diff --git a/libavformat/id3v2.c b/libavformat/id3v2.c
index 05346350ad..2327d93379 100644
--- a/libavformat/id3v2.c
+++ b/libavformat/id3v2.c
@@ -1097,10 +1097,10 @@ static void id3v2_read_internal(AVIOContext *pb, AVDictionary **metadata,
 merge_date(metadata);
 }
 
-void ff_id3v2_read_dict(AVIOContext *pb, AVDictionary **metadata,
+void ff_id3v2_read_dict(AVFormatContext *s, AVIOContext *pb, AVDictionary **metadata,
 const char *magic, ID3v2ExtraMeta **extra_meta)
 {
-id3v2_read_internal(pb, metadata, NULL, magic, extra_meta, 0);
+id3v2_read_internal(pb, metadata, s, magic, extra_meta, 0);
 }
 
 void ff_id3v2_read(AVFormatContext *s, const char *magic,
diff --git a/libavformat/id3v2.h b/libavformat/id3v2.h
index 9d7bf1c03c..d8768e955a 100644
--- a/libavformat/id3v2.h
+++ b/libavformat/id3v2.h
@@ -97,13 +97,15 @@ int ff_id3v2_tag_len(const uint8_t *buf);
 /**
  * Read an ID3v2 tag into specified dictionary and retrieve supported extra metadata.
  *
- * Chapters are not currently read by this variant.
+ * Chapters are currently only read by this variant when s is not NULL.
  *
  * @param metadata Parsed metadata is stored here
  * @param extra_meta If not NULL, extra metadata is parsed into a list of
  * ID3v2ExtraMeta structs and *extra_meta points to the head of the list
+ * @param s If not NULL, chapter information is stored in the provided context
  */
-void ff_id3v2_read_dict(AVIOContext *pb, AVDictionary **metadata, const char *magic, ID3v2ExtraMeta **extra_meta);
+void ff_id3v2_read_dict(AVFormatContext *s, AVIOContext *pb, AVDictionary **metadata,
+const char *magic, ID3v2ExtraMeta **extra_meta);
 
 /**
  * Read an ID3v2 tag, including supported extra metadata and chapters.
diff --git a/libavformat/utils.c b/libavformat/utils.c
index 7abca632b5..079a8211d2 100644
--- a/libavformat/utils.c
+++ b/libavformat/utils.c
@@ -588,7 +588,7 @@ int avformat_open_input(AVFormatContext **ps, const char *filename,
 
 /* e.g. AVFMT_NOFILE formats will not have a AVIOContext */
 if (s->pb)
-ff_id3v2_read_dict(s->pb, &s->internal->id3v2_meta, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta);
+ff_id3v2_read_dict(s, s->pb, &s->internal->id3v2_meta, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta);
 
 
 if (!(s->flags&AVFMT_FLAG_PRIV_OPT) && s->iformat->read_header)
-- 
2.14.1



Re: [FFmpeg-devel] [PATCH] libavformat: Add format context parameter to ff_id3v2_read_dict

2017-09-26 Thread Lukas Stabe
> Kind of worried what happens if the ID3 information conflicts with the
> normal container information. As you know, libavformat accepts even mp4
> or mkv files with ID3v2 header.
> 
> Do you think this is a potential issue?

I'm quite new to the ffmpeg source, but if I'm reading things correctly, with
this patch the format-specific chapters would, depending on how the format sets
the chapter ids, override, be appended to, or mix with the id3 chapters.

I would not expect to find non-mp3 files with id3 chapter tags in the wild, but
this patch would probably misbehave with those.

The proper solution here would probably be to parse the chapters into
extra_meta in read_chapter and create another function that parses them into
the format context, similar to how ff_id3v2_parse_apic works. That's a much
bigger change than this one, however, and requires touching a bunch of places
(everywhere ff_id3v2_read_dict and ff_id3v2_read are used), so I don't feel
like I'm familiar enough with ffmpeg to do that.


[FFmpeg-devel] [PATCH] avformat: fix id3 chapters

2017-10-04 Thread Lukas Stabe
These changes store id3 chapter data in ID3v2ExtraMeta and introduce
ff_id3v2_parse_chapters to parse it into the format context if needed.

Demuxers using ff_id3v2_read, which previously parsed chapters into the format
context automatically, were adjusted to call ff_id3v2_parse_chapters.
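The two-phase shape can be sketched with simplified stand-in structures (hypothetical, not the real ID3v2ExtraMeta/AVChapter types): the read step only stashes raw CHAP data, and a separate parse step later turns it into chapters, analogous to ff_id3v2_parse_apic:

```c
#include <stdlib.h>

/* Hypothetical, simplified model of the deferred chapter handling:
 * read_chapter() prepends raw CHAP data (start/end in ms) to a list,
 * and parse_chapters() later walks it; the real code would create
 * chapters via avpriv_new_chapter() at that point. */
struct extra_chap {
    unsigned start_ms, end_ms;
    struct extra_chap *next;
};

static struct extra_chap *read_chapter(struct extra_chap *head,
                                       unsigned start_ms, unsigned end_ms)
{
    struct extra_chap *e = malloc(sizeof(*e));
    if (!e)
        return head;
    e->start_ms = start_ms;
    e->end_ms   = end_ms;
    e->next     = head;   /* prepend, like the id3v2 extra_meta list */
    return e;
}

static int parse_chapters(const struct extra_chap *head)
{
    int n = 0;
    for (; head; head = head->next)
        n++;              /* stand-in for creating an AVChapter per entry */
    return n;
}
```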
---
 libavformat/aiffdec.c  |   3 +-
 libavformat/asfdec_f.c |   4 +-
 libavformat/asfdec_o.c |   4 +-
 libavformat/dsfdec.c   |   4 +-
 libavformat/id3v2.c| 120 -
 libavformat/id3v2.h|  15 +--
 libavformat/iff.c  |   3 +-
 libavformat/omadec.c   |   5 +++
 libavformat/utils.c|   2 +
 9 files changed, 122 insertions(+), 38 deletions(-)

diff --git a/libavformat/aiffdec.c b/libavformat/aiffdec.c
index 2ef9386edb..99e05c78ec 100644
--- a/libavformat/aiffdec.c
+++ b/libavformat/aiffdec.c
@@ -258,7 +258,8 @@ static int aiff_read_header(AVFormatContext *s)
 position = avio_tell(pb);
 ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, size);
 if (id3v2_extra_meta)
-if ((ret = ff_id3v2_parse_apic(s, &id3v2_extra_meta)) < 0) {
+if ((ret = ff_id3v2_parse_apic(s, &id3v2_extra_meta)) < 0 ||
+(ret = ff_id3v2_parse_chapters(s, &id3v2_extra_meta)) < 0) {
 ff_id3v2_free_extra_meta(&id3v2_extra_meta);
 return ret;
 }
diff --git a/libavformat/asfdec_f.c b/libavformat/asfdec_f.c
index cc648b9a2f..64a0b9d7f2 100644
--- a/libavformat/asfdec_f.c
+++ b/libavformat/asfdec_f.c
@@ -307,8 +307,10 @@ static void get_id3_tag(AVFormatContext *s, int len)
 ID3v2ExtraMeta *id3v2_extra_meta = NULL;
 
 ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, len);
-if (id3v2_extra_meta)
+if (id3v2_extra_meta) {
 ff_id3v2_parse_apic(s, &id3v2_extra_meta);
+ff_id3v2_parse_chapters(s, &id3v2_extra_meta);
+}
 ff_id3v2_free_extra_meta(&id3v2_extra_meta);
 }
 
diff --git a/libavformat/asfdec_o.c b/libavformat/asfdec_o.c
index 818d6f3573..5122e33c78 100644
--- a/libavformat/asfdec_o.c
+++ b/libavformat/asfdec_o.c
@@ -460,8 +460,10 @@ static void get_id3_tag(AVFormatContext *s, int len)
 ID3v2ExtraMeta *id3v2_extra_meta = NULL;
 
 ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, len);
-if (id3v2_extra_meta)
+if (id3v2_extra_meta) {
 ff_id3v2_parse_apic(s, &id3v2_extra_meta);
+ff_id3v2_parse_chapters(s, &id3v2_extra_meta);
+}
 ff_id3v2_free_extra_meta(&id3v2_extra_meta);
 }
 
diff --git a/libavformat/dsfdec.c b/libavformat/dsfdec.c
index 49ca336427..41538fd83c 100644
--- a/libavformat/dsfdec.c
+++ b/libavformat/dsfdec.c
@@ -53,8 +53,10 @@ static void read_id3(AVFormatContext *s, uint64_t id3pos)
 return;
 
 ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, 0);
-if (id3v2_extra_meta)
+if (id3v2_extra_meta) {
 ff_id3v2_parse_apic(s, &id3v2_extra_meta);
+ff_id3v2_parse_chapters(s, &id3v2_extra_meta);
+}
 ff_id3v2_free_extra_meta(&id3v2_extra_meta);
 }
 
diff --git a/libavformat/id3v2.c b/libavformat/id3v2.c
index f15cefee47..6c216ba7a2 100644
--- a/libavformat/id3v2.c
+++ b/libavformat/id3v2.c
@@ -670,59 +670,68 @@ fail:
 avio_seek(pb, end, SEEK_SET);
 }
 
+static void free_chapter(void *obj)
+{
+ID3v2ExtraMetaCHAP *chap = obj;
+av_freep(&chap->element_id);
+av_dict_free(&chap->meta);
+av_freep(&chap);
+}
+
 static void read_chapter(AVFormatContext *s, AVIOContext *pb, int len, const char *ttag, ID3v2ExtraMeta **extra_meta, int isv34)
 {
-AVRational time_base = {1, 1000};
-uint32_t start, end;
-AVChapter *chapter;
-uint8_t *dst = NULL;
 int taglen;
 char tag[5];
+ID3v2ExtraMeta *new_extra = NULL;
+ID3v2ExtraMetaCHAP *chap  = NULL;
 
-if (!s) {
-/* We should probably just put the chapter data to extra_meta here
- * and do the AVFormatContext-needing part in a separate
- * ff_id3v2_parse_apic()-like function. */
-av_log(NULL, AV_LOG_DEBUG, "No AVFormatContext, skipped ID3 chapter data\n");
-return;
-}
+new_extra = av_mallocz(sizeof(*new_extra));
+chap  = av_mallocz(sizeof(*chap));
+
+if (!new_extra || !chap)
+goto fail;
+
+if (decode_str(s, pb, 0, &chap->element_id, &len) < 0)
+goto fail;
 
-if (decode_str(s, pb, 0, &dst, &len) < 0)
-return;
 if (len < 16)
-return;
+goto fail;
 
-start = avio_rb32(pb);
-end   = avio_rb32(pb);
+chap->start = avio_rb32(pb);
+chap->end   = avio_rb32(pb);
 avio_skip(pb, 8);
 
-chapter = avpriv_new_chapter(s, s->nb_chapters + 1, time_base, start, end, dst);
-if (!chapter) {
-av_free(dst);
-return;
-}
-
 len -= 16;
 while (len > 10) {
 if (avio_read(pb, tag, 4) < 4)
-goto end;
+goto fail;
 tag[4] = 0;
 taglen = avio

Re: [FFmpeg-devel] [PATCH] avformat: fix id3 chapters

2017-10-05 Thread Lukas Stabe

> On 5. Oct 2017, at 09:08, Paul B Mahol  wrote:
> 
> On 10/5/17, Lukas Stabe  wrote:
>> These changes store id3 chapter data in ID3v2ExtraMeta and introduce
>> ff_id3v2_parse_chapters to parse them into the format context if needed.
>> 
>> Encoders using ff_id3v2_read, which previously parsed chapters into the
>> format context automatically, were adjusted to call ff_id3v2_parse_chapters.
>> ---
>> libavformat/aiffdec.c  |   3 +-
>> libavformat/asfdec_f.c |   4 +-
>> libavformat/asfdec_o.c |   4 +-
>> libavformat/dsfdec.c   |   4 +-
>> libavformat/id3v2.c| 120
>> -
>> libavformat/id3v2.h|  15 +--
>> libavformat/iff.c  |   3 +-
>> libavformat/omadec.c   |   5 +++
>> libavformat/utils.c|   2 +
>> 9 files changed, 122 insertions(+), 38 deletions(-)
>> 
> 
> 
> Fix? It wasn't broken at all.

It was. I forgot to link the issue in the commit message, but see here 
https://trac.ffmpeg.org/ticket/6558 for details on how it was broken and when.




Re: [FFmpeg-devel] [PATCH] avformat: fix id3 chapters

2017-10-05 Thread Lukas Stabe
> On 5. Oct 2017, at 10:51, wm4  wrote:
> 
> On Thu,  5 Oct 2017 03:34:19 +0200
> Lukas Stabe  wrote:
> 
>> These changes store id3 chapter data in ID3v2ExtraMeta and introduce 
>> ff_id3v2_parse_chapters to parse them into the format context if needed.
>> 
>> Encoders using ff_id3v2_read, which previously parsed chapters into the 
>> format context automatically, were adjusted to call ff_id3v2_parse_chapters.
>> ---
>> libavformat/aiffdec.c  |   3 +-
>> libavformat/asfdec_f.c |   4 +-
>> libavformat/asfdec_o.c |   4 +-
>> libavformat/dsfdec.c   |   4 +-
>> libavformat/id3v2.c| 120 
>> -
>> libavformat/id3v2.h|  15 +--
>> libavformat/iff.c  |   3 +-
>> libavformat/omadec.c   |   5 +++
>> libavformat/utils.c|   2 +
>> 9 files changed, 122 insertions(+), 38 deletions(-)
>> 
>> diff --git a/libavformat/aiffdec.c b/libavformat/aiffdec.c
>> index 2ef9386edb..99e05c78ec 100644
>> --- a/libavformat/aiffdec.c
>> +++ b/libavformat/aiffdec.c
>> @@ -258,7 +258,8 @@ static int aiff_read_header(AVFormatContext *s)
>> position = avio_tell(pb);
>> ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, size);
>> if (id3v2_extra_meta)
>> -if ((ret = ff_id3v2_parse_apic(s, &id3v2_extra_meta)) < 0) {
>> +if ((ret = ff_id3v2_parse_apic(s, &id3v2_extra_meta)) < 0 ||
>> +(ret = ff_id3v2_parse_chapters(s, &id3v2_extra_meta)) < 
>> 0) {
>> ff_id3v2_free_extra_meta(&id3v2_extra_meta);
>> return ret;
>> }
>> diff --git a/libavformat/asfdec_f.c b/libavformat/asfdec_f.c
>> index cc648b9a2f..64a0b9d7f2 100644
>> --- a/libavformat/asfdec_f.c
>> +++ b/libavformat/asfdec_f.c
>> @@ -307,8 +307,10 @@ static void get_id3_tag(AVFormatContext *s, int len)
>> ID3v2ExtraMeta *id3v2_extra_meta = NULL;
>> 
>> ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, len);
>> -if (id3v2_extra_meta)
>> +if (id3v2_extra_meta) {
>> ff_id3v2_parse_apic(s, &id3v2_extra_meta);
>> +ff_id3v2_parse_chapters(s, &id3v2_extra_meta);
>> +}
>> ff_id3v2_free_extra_meta(&id3v2_extra_meta);
>> }
>> 
>> diff --git a/libavformat/asfdec_o.c b/libavformat/asfdec_o.c
>> index 818d6f3573..5122e33c78 100644
>> --- a/libavformat/asfdec_o.c
>> +++ b/libavformat/asfdec_o.c
>> @@ -460,8 +460,10 @@ static void get_id3_tag(AVFormatContext *s, int len)
>> ID3v2ExtraMeta *id3v2_extra_meta = NULL;
>> 
>> ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, len);
>> -if (id3v2_extra_meta)
>> +if (id3v2_extra_meta) {
>> ff_id3v2_parse_apic(s, &id3v2_extra_meta);
>> +ff_id3v2_parse_chapters(s, &id3v2_extra_meta);
>> +}
>> ff_id3v2_free_extra_meta(&id3v2_extra_meta);
>> }
>> 
>> diff --git a/libavformat/dsfdec.c b/libavformat/dsfdec.c
>> index 49ca336427..41538fd83c 100644
>> --- a/libavformat/dsfdec.c
>> +++ b/libavformat/dsfdec.c
>> @@ -53,8 +53,10 @@ static void read_id3(AVFormatContext *s, uint64_t id3pos)
>> return;
>> 
>> ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, 0);
>> -if (id3v2_extra_meta)
>> +if (id3v2_extra_meta) {
>> ff_id3v2_parse_apic(s, &id3v2_extra_meta);
>> +ff_id3v2_parse_chapters(s, &id3v2_extra_meta);
>> +}
>> ff_id3v2_free_extra_meta(&id3v2_extra_meta);
>> }
>> 
>> diff --git a/libavformat/id3v2.c b/libavformat/id3v2.c
>> index f15cefee47..6c216ba7a2 100644
>> --- a/libavformat/id3v2.c
>> +++ b/libavformat/id3v2.c
>> @@ -670,59 +670,68 @@ fail:
>> avio_seek(pb, end, SEEK_SET);
>> }
>> 
>> +static void free_chapter(void *obj)
>> +{
>> +ID3v2ExtraMetaCHAP *chap = obj;
>> +av_freep(&chap->element_id);
>> +av_dict_free(&chap->meta);
>> +av_freep(&chap);
>> +}
>> +
>> static void read_chapter(AVFormatContext *s, AVIOContext *pb, int len, const char *ttag, ID3v2ExtraMeta **extra_meta, int isv34)
>> {
>> -AVRational time_base = {1, 1000};
>> -uint32_t start, end;
>> -AVChapter *chapter;
>> -uint8_t *dst = NULL;
>> int taglen;
>> char tag[5];
>> +ID3v2ExtraMeta *new_extra = NULL;
>> +ID3v2ExtraMetaCHAP *chap  = NUL

[FFmpeg-devel] [PATCH] movenc: add m4b to ipod format extensions

2017-10-29 Thread Lukas Stabe
m4b is commonly used as the extension for m4a audiobook files.
The format is exactly the same. The only thing that differs
is the extension.
---
 libavformat/movenc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavformat/movenc.c b/libavformat/movenc.c
index a34987a7dc..a920eb7c8f 100644
--- a/libavformat/movenc.c
+++ b/libavformat/movenc.c
@@ -6664,7 +6664,7 @@ AVOutputFormat ff_ipod_muxer = {
 .name  = "ipod",
 .long_name = NULL_IF_CONFIG_SMALL("iPod H.264 MP4 (MPEG-4 Part 14)"),
 .mime_type = "video/mp4",
-.extensions= "m4v,m4a",
+.extensions= "m4v,m4a,m4b",
 .priv_data_size= sizeof(MOVMuxContext),
 .audio_codec   = AV_CODEC_ID_AAC,
 .video_codec   = AV_CODEC_ID_H264,
-- 
2.14.3



[FFmpeg-devel] [PATCH 2/3] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-05-08 Thread Lukas Rusak
This is V2 of my previous patch. I have worked together with Jorge to get this 
working properly.
We have made it so that AV_PIX_FMT_DRM_PRIME output can be selected by setting 
avctx->pix_fmt.
This allows v4l2 to export the buffer so we can use it for zero-copy. If 
AV_PIX_FMT_DRM_PRIME is
not selected then the standard pixel formats will be used and the buffers will 
not be exported.

---
 libavcodec/v4l2_buffers.c | 228 +++---
 libavcodec/v4l2_buffers.h |   5 +-
 libavcodec/v4l2_context.c |  40 ++-
 libavcodec/v4l2_m2m.c |   4 +-
 libavcodec/v4l2_m2m.h |   3 +
 libavcodec/v4l2_m2m_dec.c |  23 
 6 files changed, 257 insertions(+), 46 deletions(-)

diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
index aef911f3bb..d715bc6a7c 100644
--- a/libavcodec/v4l2_buffers.c
+++ b/libavcodec/v4l2_buffers.c
@@ -21,6 +21,7 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -29,6 +30,7 @@
 #include 
 #include "libavcodec/avcodec.h"
 #include "libavcodec/internal.h"
+#include "libavutil/hwcontext.h"
 #include "v4l2_context.h"
 #include "v4l2_buffers.h"
 #include "v4l2_m2m.h"
@@ -203,7 +205,65 @@ static enum AVColorTransferCharacteristic v4l2_get_color_trc(V4L2Buffer *buf)
 return AVCOL_TRC_UNSPECIFIED;
 }
 
-static void v4l2_free_buffer(void *opaque, uint8_t *unused)
+static uint8_t * v4l2_get_drm_frame(V4L2Buffer *avbuf)
+{
+AVDRMFrameDescriptor *drm_desc = &avbuf->drm_frame;
+AVDRMLayerDescriptor *layer;
+
+/* fill the DRM frame descriptor */
+drm_desc->nb_objects = 1;
+drm_desc->nb_layers = 1;
+
+layer = &drm_desc->layers[0];
+layer->planes[0].object_index = 0;
+layer->planes[0].offset = 0;
+layer->planes[0].pitch = avbuf->plane_info[0].bytesperline;
+
+switch (avbuf->context->av_pix_fmt) {
+case AV_PIX_FMT_NV12:
+case AV_PIX_FMT_NV21:
+
+if (avbuf->num_planes > 1)
+break;
+
+layer->format = avbuf->context->av_pix_fmt == AV_PIX_FMT_NV12 ?
+DRM_FORMAT_NV12 : DRM_FORMAT_NV21;
+layer->nb_planes = 2;
+
+layer->planes[1].object_index = 0;
+layer->planes[1].offset = avbuf->plane_info[0].bytesperline *
+avbuf->context->format.fmt.pix_mp.height;
+layer->planes[1].pitch = avbuf->plane_info[0].bytesperline;
+break;
+
+case AV_PIX_FMT_YUV420P:
+
+if (avbuf->num_planes > 1)
+break;
+
+layer->format = DRM_FORMAT_YUV420;
+layer->nb_planes = 3;
+
+layer->planes[1].object_index = 0;
+layer->planes[1].offset = avbuf->plane_info[0].bytesperline *
+avbuf->context->format.fmt.pix_mp.height;
+layer->planes[1].pitch = avbuf->plane_info[0].bytesperline >> 1;
+
+layer->planes[2].object_index = 0;
+layer->planes[2].offset = layer->planes[1].offset +
+((avbuf->plane_info[0].bytesperline *
+  avbuf->context->format.fmt.pix_mp.height) >> 2);
+layer->planes[2].pitch = avbuf->plane_info[0].bytesperline >> 1;
+break;
+
+default:
+break;
+}
+
+return (uint8_t *) drm_desc;
+}
+
+static void v4l2_free_buffer(void *opaque, uint8_t *data)
 {
 V4L2Buffer* avbuf = opaque;
 V4L2m2mContext *s = buf_to_m2mctx(avbuf);
@@ -227,27 +287,49 @@ static void v4l2_free_buffer(void *opaque, uint8_t *unused)
 }
 }
 
-static int v4l2_buf_to_bufref(V4L2Buffer *in, int plane, AVBufferRef **buf)
+static int v4l2_buffer_export_drm(V4L2Buffer* avbuf)
 {
-V4L2m2mContext *s = buf_to_m2mctx(in);
+struct v4l2_exportbuffer expbuf;
+int i, ret;
 
-if (plane >= in->num_planes)
-return AVERROR(EINVAL);
+for (i = 0; i < avbuf->num_planes; i++) {
+memset(&expbuf, 0, sizeof(expbuf));
 
-/* even though most encoders return 0 in data_offset encoding vp8 does require this value */
-*buf = av_buffer_create((char *)in->plane_info[plane].mm_addr + in->planes[plane].data_offset,
-in->plane_info[plane].length, v4l2_free_buffer, in, 0);
-if (!*buf)
-return AVERROR(ENOMEM);
+expbuf.index = avbuf->buf.index;
+expbuf.type = avbuf->buf.type;
+expbuf.plane = i;
+
+ret = ioctl(buf_to_m2mctx(avbuf)->fd, VIDIOC_EXPBUF, &expbuf);
+if (ret < 0)
+return AVERROR(errno);
+
+if (V4L2_TYPE_IS_MULTIPLANAR(avbuf->buf.type)) {
+avbuf->buf.m.planes[i].m.fd = expbuf.fd;
+/* drm frame */
+avbuf->drm_frame.objects[i].size = avbuf->buf.m.planes[i].length;
+avbuf->drm_frame.objects[i].fd = expbuf.fd;
+} else {
+avbuf->buf.m.fd = expbuf.fd;
+/* drm frame */
+avbuf->drm_frame.objects[0].size = avbuf->buf.length;
+avbuf->drm_frame.objects[0].fd = expbuf.fd;
+}
+}
+
+return 0;
+}
+
+static int v4l2_buf_inc

[FFmpeg-devel] [PATCH 3/3] libavcodec: v4l2m2m: fix error handling during buffer init

2018-05-08 Thread Lukas Rusak
From: Jorge Ramirez-Ortiz 

Signed-off-by: Jorge Ramirez-Ortiz 
---
 libavcodec/v4l2_context.c | 19 ---
 libavcodec/v4l2_m2m_dec.c | 11 ---
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/libavcodec/v4l2_context.c b/libavcodec/v4l2_context.c
index 9457fadb1e..fd3161ce2f 100644
--- a/libavcodec/v4l2_context.c
+++ b/libavcodec/v4l2_context.c
@@ -263,6 +263,12 @@ static V4L2Buffer* v4l2_dequeue_v4l2buf(V4L2Context *ctx, int timeout)
 /* if we are draining and there are no more capture buffers queued in the driver we are done */
 if (!V4L2_TYPE_IS_OUTPUT(ctx->type) && ctx_to_m2mctx(ctx)->draining) {
 for (i = 0; i < ctx->num_buffers; i++) {
+/* capture buffer initialization happens during decode hence
+ * detection happens at runtime
+ */
+if (!ctx->buffers)
+break;
+
 if (ctx->buffers[i].status == V4L2BUF_IN_DRIVER)
 goto start;
 }
@@ -724,9 +730,8 @@ int ff_v4l2_context_init(V4L2Context* ctx)
 ctx->buffers[i].context = ctx;
 ret = ff_v4l2_buffer_initialize(&ctx->buffers[i], i);
 if (ret < 0) {
-av_log(logger(ctx), AV_LOG_ERROR, "%s buffer initialization (%s)\n", ctx->name, av_err2str(ret));
-av_free(ctx->buffers);
-return ret;
+av_log(logger(ctx), AV_LOG_ERROR, "%s buffer[%d] initialization (%s)\n", ctx->name, i, av_err2str(ret));
+goto error;
 }
 }
 
@@ -739,4 +744,12 @@ int ff_v4l2_context_init(V4L2Context* ctx)
 V4L2_TYPE_IS_MULTIPLANAR(ctx->type) ? 
ctx->format.fmt.pix_mp.plane_fmt[0].bytesperline : 
ctx->format.fmt.pix.bytesperline);
 
 return 0;
+
+error:
+v4l2_release_buffers(ctx);
+
+av_free(ctx->buffers);
+ctx->buffers = NULL;
+
+return ret;
 }
diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index 2b33badb08..1bfd11e216 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -92,8 +92,8 @@ static int v4l2_try_start(AVCodecContext *avctx)
 if (!capture->buffers) {
 ret = ff_v4l2_context_init(capture);
 if (ret) {
-av_log(avctx, AV_LOG_DEBUG, "can't request output buffers\n");
-return ret;
+av_log(avctx, AV_LOG_ERROR, "can't request capture buffers\n");
+return AVERROR(ENOMEM);
 }
 }
 
@@ -155,8 +155,13 @@ static int v4l2_receive_frame(AVCodecContext *avctx, AVFrame *frame)
 
 if (avpkt.size) {
 ret = v4l2_try_start(avctx);
-if (ret)
+if (ret) {
+/* can't recover */
+if (ret == AVERROR(ENOMEM))
+return ret;
+
 return 0;
+}
 }
 
 dequeue:
-- 
2.17.0



[FFmpeg-devel] [PATCH 1/3] libavcodec: v4l2m2m: fix indentation and add M2MDEC_CLASS

2018-05-08 Thread Lukas Rusak
This is just some formatting that is taken from the rkmpp decoder. I find that
this makes it more readable.

---
 libavcodec/v4l2_m2m_dec.c | 44 ---
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index bca45be148..ed5193ecc1 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -202,28 +202,30 @@ static const AVOption options[] = {
 { NULL},
 };
 
+#define M2MDEC_CLASS(NAME) \
+static const AVClass v4l2_m2m_ ## NAME ## _dec_class = { \
+.class_name = #NAME "_v4l2_m2m_decoder", \
+.item_name  = av_default_item_name, \
+.option = options, \
+.version= LIBAVUTIL_VERSION_INT, \
+};
+
 #define M2MDEC(NAME, LONGNAME, CODEC, bsf_name) \
-static const AVClass v4l2_m2m_ ## NAME ## _dec_class = {\
-.class_name = #NAME "_v4l2_m2m_decoder",\
-.item_name  = av_default_item_name,\
-.option = options,\
-.version= LIBAVUTIL_VERSION_INT,\
-};\
-\
-AVCodec ff_ ## NAME ## _v4l2m2m_decoder = { \
-.name   = #NAME "_v4l2m2m" ,\
-.long_name  = NULL_IF_CONFIG_SMALL("V4L2 mem2mem " LONGNAME " decoder wrapper"),\
-.type   = AVMEDIA_TYPE_VIDEO,\
-.id = CODEC ,\
-.priv_data_size = sizeof(V4L2m2mPriv),\
-.priv_class = &v4l2_m2m_ ## NAME ## _dec_class,\
-.init   = v4l2_decode_init,\
-.receive_frame  = v4l2_receive_frame,\
-.close  = ff_v4l2_m2m_codec_end,\
-.bsfs   = bsf_name, \
-.capabilities   = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DELAY, \
-.wrapper_name   = "v4l2m2m", \
-};
+M2MDEC_CLASS(NAME) \
+AVCodec ff_ ## NAME ## _v4l2m2m_decoder = { \
+.name   = #NAME "_v4l2m2m" , \
+.long_name  = NULL_IF_CONFIG_SMALL("V4L2 mem2mem " LONGNAME " decoder wrapper"), \
+.type   = AVMEDIA_TYPE_VIDEO, \
+.id = CODEC , \
+.priv_data_size = sizeof(V4L2m2mPriv), \
+.priv_class = &v4l2_m2m_ ## NAME ## _dec_class, \
+.init   = v4l2_decode_init, \
+.receive_frame  = v4l2_receive_frame, \
+.close  = ff_v4l2_m2m_codec_end, \
+.bsfs   = bsf_name, \
+.capabilities   = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DELAY, \
+.wrapper_name   = "v4l2m2m", \
+};
 
 M2MDEC(h264,  "H.264", AV_CODEC_ID_H264,   "h264_mp4toannexb");
 M2MDEC(hevc,  "HEVC",  AV_CODEC_ID_HEVC,   "hevc_mp4toannexb");
-- 
2.17.0



[FFmpeg-devel] [PATCH 1/3] libavcodec/v4l2_buffers: return int64_t in v4l2_get_pts

2018-01-08 Thread Lukas Rusak
v4l2_pts is of type int64_t, so we should return that instead of uint64_t.

---
 libavcodec/v4l2_buffers.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
index ba70c5d14b..fdafe7edca 100644
--- a/libavcodec/v4l2_buffers.c
+++ b/libavcodec/v4l2_buffers.c
@@ -62,7 +62,7 @@ static inline void v4l2_set_pts(V4L2Buffer *out, int64_t pts)
 out->buf.timestamp.tv_sec = v4l2_pts / USEC_PER_SEC;
 }
 
-static inline uint64_t v4l2_get_pts(V4L2Buffer *avbuf)
+static inline int64_t v4l2_get_pts(V4L2Buffer *avbuf)
 {
 V4L2m2mContext *s = buf_to_m2mctx(avbuf);
 AVRational v4l2_timebase = { 1, USEC_PER_SEC };
-- 
2.14.3



[FFmpeg-devel] [PATCH 2/3] libavcodec/v4l2_buffers: check for valid pts value

2018-01-08 Thread Lukas Rusak
We check for a valid pts in v4l2_set_pts, so we should do the same here.

---
 libavcodec/v4l2_buffers.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
index fdafe7edca..5337f6f287 100644
--- a/libavcodec/v4l2_buffers.c
+++ b/libavcodec/v4l2_buffers.c
@@ -71,7 +71,10 @@ static inline int64_t v4l2_get_pts(V4L2Buffer *avbuf)
 /* convert pts back to encoder timebase */
 v4l2_pts = avbuf->buf.timestamp.tv_sec * USEC_PER_SEC + avbuf->buf.timestamp.tv_usec;
 
-return av_rescale_q(v4l2_pts, v4l2_timebase, s->avctx->time_base);
+if (v4l2_pts == 0)
+return AV_NOPTS_VALUE;
+else
+return av_rescale_q(v4l2_pts, v4l2_timebase, s->avctx->time_base);
 }
 
 static enum AVColorPrimaries v4l2_get_color_primaries(V4L2Buffer *buf)
-- 
2.14.3



[FFmpeg-devel] [PATCH 3/3] libavcodec/v4l2_m2m_dec: set default time base

2018-01-08 Thread Lukas Rusak
This default time base should be set in order for FFmpeg to rescale the
timestamps in v4l2_get_pts and v4l2_set_pts.

---
 libavcodec/v4l2_m2m_dec.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index 8308613978..4de091a011 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -177,6 +177,8 @@ static av_cold int v4l2_decode_init(AVCodecContext *avctx)
 capture->av_codec_id = AV_CODEC_ID_RAWVIDEO;
 capture->av_pix_fmt = avctx->pix_fmt;
 
+avctx->time_base = AV_TIME_BASE_Q;
+
 ret = ff_v4l2_m2m_codec_init(avctx);
 if (ret) {
 av_log(avctx, AV_LOG_ERROR, "can't configure decoder\n");
-- 
2.14.3



[FFmpeg-devel] [PATCH 0/3] v4l2: some small fixes

2018-01-08 Thread Lukas Rusak
Some small fixes that I have done while working with v4l2

Lukas Rusak (3):
  libavcodec/v4l2_buffers: return int64_t in v4l2_get_pts
  libavcodec/v4l2_buffers: check for valid pts value
  libavcodec/v4l2_m2m_dec: set default time base

 libavcodec/v4l2_buffers.c | 7 +--
 libavcodec/v4l2_m2m_dec.c | 2 ++
 2 files changed, 7 insertions(+), 2 deletions(-)

-- 
2.14.3



[FFmpeg-devel] [PATCH][RFC] libavcodec/v4l2_m2m_dec: output AVDRMFrameDescriptor

2018-01-08 Thread Lukas Rusak
This is a patch I have been testing for a while in combination with kodi's 
drmprime renderer

I have been testing this with i.MX6 and Dragonboard 410c. So it covers single 
and multiplanar formats.

I'm looking to get some comments on how to integrate this properly so that if 
we request
AV_PIX_FMT_DRM_PRIME we can use that otherwise just use the default buffers.

I basically followed how it was done in the rockchip decoder.

---
 libavcodec/v4l2_buffers.c | 97 +++
 libavcodec/v4l2_buffers.h |  1 +
 libavcodec/v4l2_m2m_dec.c | 45 --
 3 files changed, 97 insertions(+), 46 deletions(-)

diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
index ba70c5d14b..065074c944 100644
--- a/libavcodec/v4l2_buffers.c
+++ b/libavcodec/v4l2_buffers.c
@@ -21,6 +21,7 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -29,6 +30,8 @@
 #include 
 #include "libavcodec/avcodec.h"
 #include "libavcodec/internal.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_drm.h"
 #include "v4l2_context.h"
 #include "v4l2_buffers.h"
 #include "v4l2_m2m.h"
@@ -202,11 +205,14 @@ static enum AVColorTransferCharacteristic v4l2_get_color_trc(V4L2Buffer *buf)
 return AVCOL_TRC_UNSPECIFIED;
 }
 
-static void v4l2_free_buffer(void *opaque, uint8_t *unused)
+static void v4l2_free_buffer(void *opaque, uint8_t *data)
 {
+AVDRMFrameDescriptor *desc = (AVDRMFrameDescriptor *)data;
 V4L2Buffer* avbuf = opaque;
 V4L2m2mContext *s = buf_to_m2mctx(avbuf);
 
+av_free(desc);
+
 atomic_fetch_sub_explicit(&s->refcount, 1, memory_order_acq_rel);
 if (s->reinit) {
 if (!atomic_load(&s->refcount))
@@ -289,35 +295,15 @@ int ff_v4l2_buffer_avframe_to_buf(const AVFrame *frame, V4L2Buffer* out)
 int ff_v4l2_buffer_buf_to_avframe(AVFrame *frame, V4L2Buffer *avbuf)
 {
 V4L2m2mContext *s = buf_to_m2mctx(avbuf);
-int i, ret;
+AVDRMFrameDescriptor *desc = NULL;
+AVDRMLayerDescriptor *layer = NULL;
+int ret;
 
 av_frame_unref(frame);
 
-/* 1. get references to the actual data */
-for (i = 0; i < avbuf->num_planes; i++) {
-ret = v4l2_buf_to_bufref(avbuf, i, &frame->buf[i]);
-if (ret)
-return ret;
-
-frame->linesize[i] = avbuf->plane_info[i].bytesperline;
-frame->data[i] = frame->buf[i]->data;
-}
-
-/* 1.1 fixup special cases */
-switch (avbuf->context->av_pix_fmt) {
-case AV_PIX_FMT_NV12:
-if (avbuf->num_planes > 1)
-break;
-frame->linesize[1] = avbuf->plane_info[0].bytesperline;
-frame->data[1] = frame->buf[0]->data + avbuf->plane_info[0].bytesperline * avbuf->context->format.fmt.pix_mp.height;
-break;
-default:
-break;
-}
-
 /* 2. get frame information */
 frame->key_frame = !!(avbuf->buf.flags & V4L2_BUF_FLAG_KEYFRAME);
-frame->format = avbuf->context->av_pix_fmt;
+frame->format = AV_PIX_FMT_DRM_PRIME;
 frame->color_primaries = v4l2_get_color_primaries(avbuf);
 frame->colorspace = v4l2_get_color_space(avbuf);
 frame->color_range = v4l2_get_color_range(avbuf);
@@ -328,6 +314,42 @@ int ff_v4l2_buffer_buf_to_avframe(AVFrame *frame, V4L2Buffer *avbuf)
 frame->height = s->output.height;
 frame->width = s->output.width;
 
+ret = ff_v4l2_buffer_export(avbuf);
+if (ret < 0) {
+return AVERROR(errno);
+}
+
+desc = av_mallocz(sizeof(AVDRMFrameDescriptor));
+if (!desc) {
+return AVERROR(ENOMEM);
+}
+
+desc->nb_objects = 1;
+
+if (V4L2_TYPE_IS_MULTIPLANAR(avbuf->buf.type)) {
+desc->objects[0].fd = avbuf->buf.m.planes[0].m.fd;
+desc->objects[0].size = avbuf->buf.m.planes[0].length;
+} else {
+desc->objects[0].fd = avbuf->buf.m.fd;
+desc->objects[0].size = avbuf->buf.length;
+}
+
+desc->nb_layers = 1;
+layer = &desc->layers[0];
+layer->format = DRM_FORMAT_NV12;
+layer->nb_planes = 2;
+
+layer->planes[0].object_index = 0;
+layer->planes[0].offset = 0;
+layer->planes[0].pitch = avbuf->plane_info[0].bytesperline;
+
+layer->planes[1].object_index = 0;
+layer->planes[1].offset = layer->planes[0].pitch * avbuf->context->format.fmt.pix_mp.height;
+layer->planes[1].pitch = avbuf->plane_info[0].bytesperline;
+
+frame->data[0] = (uint8_t *)desc;
+frame->buf[0] = av_buffer_create((uint8_t *)desc, sizeof(*desc), v4l2_free_buffer, avbuf, AV_BUFFER_FLAG_READONLY);
+
 /* 3. report errors upstream */
 if (avbuf->buf.flags & V4L2_BUF_FLAG_ERROR) {
 av_log(logger(avbuf), AV_LOG_ERROR, "%s: driver decode error\n", avbuf->context->name);
@@ -461,3 +483,28 @@ int ff_v4l2_buffer_enqueue(V4L2Buffer* avbuf)
 
 return 0;
 }
+
+int ff_v4l2_buffer_export(V4L2Buffer* avbuf)
+{
+int i, ret;
+
+for (i = 0; i < avbuf->num_planes; i++) {
+
+str

Re: [FFmpeg-devel] [PATCH 2/3] libavcodec/v4l2_buffers: check for valid pts value

2018-01-08 Thread Lukas Rusak
Hmm ok, disregard then.
On Mon, Jan 8, 2018 at 3:53 PM wm4  wrote:

> On Mon,  8 Jan 2018 15:27:38 -0800
> Lukas Rusak  wrote:
>
> > we check for a valid pts in v4l2_set_pts so we should do the same here
> >
> > ---
> >  libavcodec/v4l2_buffers.c | 5 -
> >  1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
> > index fdafe7edca..5337f6f287 100644
> > --- a/libavcodec/v4l2_buffers.c
> > +++ b/libavcodec/v4l2_buffers.c
> > @@ -71,7 +71,10 @@ static inline int64_t v4l2_get_pts(V4L2Buffer *avbuf)
> >  /* convert pts back to encoder timebase */
> >  v4l2_pts = avbuf->buf.timestamp.tv_sec * USEC_PER_SEC +
> avbuf->buf.timestamp.tv_usec;
> >
> > -return av_rescale_q(v4l2_pts, v4l2_timebase, s->avctx->time_base);
> > +if (v4l2_pts == 0)
> > +return AV_NOPTS_VALUE;
> > +else
> > +return av_rescale_q(v4l2_pts, v4l2_timebase,
> s->avctx->time_base);
> >  }
> >
> >  static enum AVColorPrimaries v4l2_get_color_primaries(V4L2Buffer *buf)
>
> So, what about pts=0, which is valid? You shouldn't just turn 0 into
> AV_NOPTS_VALUE.
>


Re: [FFmpeg-devel] [PATCH 3/3] libavcodec/v4l2_m2m_dec: set default time base

2018-01-08 Thread Lukas Rusak
I'm not really sure what to do then.

Should I just replace time_base with pkt_timebase instead?

Or

Should I just remove the time base rescaling completely in v4l2_set_pts and
v4l2_get_pts as it seems to be 1:1 anyways.
On Mon, Jan 8, 2018 at 3:45 PM wm4  wrote:

> On Mon,  8 Jan 2018 15:27:39 -0800
> Lukas Rusak  wrote:
>
> > This default time base should be set in order for ffmpeg to rescale the
> timebase in v4l2_get_pts and v4l2_set_pts
> >
> > ---
> >  libavcodec/v4l2_m2m_dec.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
> > index 8308613978..4de091a011 100644
> > --- a/libavcodec/v4l2_m2m_dec.c
> > +++ b/libavcodec/v4l2_m2m_dec.c
> > @@ -177,6 +177,8 @@ static av_cold int v4l2_decode_init(AVCodecContext
> *avctx)
> >  capture->av_codec_id = AV_CODEC_ID_RAWVIDEO;
> >  capture->av_pix_fmt = avctx->pix_fmt;
> >
> > +avctx->time_base = AV_TIME_BASE_Q;
> > +
> >  ret = ff_v4l2_m2m_codec_init(avctx);
> >  if (ret) {
> >  av_log(avctx, AV_LOG_ERROR, "can't configure decoder\n");
>
> Decoders in FFmpeg don't really have a concept of a timebase. If they
> do, they should not use avctx->time_base, but avctx->pkt_timebase.
>
> (I don't think avctx->pkt_timebase even needs to be set - API users
> normally expect that the decoder simply passes through timestamps,
> regardless of timebase. But if pkt_timebase is set, it should be the
> same timebase that AVPacket uses, and the output AVFrames must use the
> same timebase.
>
> avctx->time_base doesn't really mean anything for decoding. There is an
> obscure "other" use of it that has been deprecated, and the replacement
> has the same obscure use (see doxygen).
>


Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH initialization

2022-09-04 Thread Lukas Fellechner
Andreas Rheinhardt andreas.rheinhardt at outlook.com
Wed Aug 31 05:54:12 EEST 2022
>
> > +#if HAVE_THREADS
> > +
> > +struct work_pool_data
> > +{
> > +AVFormatContext *ctx;
> > +struct representation *pls;
> > +struct representation *common_pls;
> > +pthread_mutex_t *common_mutex;
> > +pthread_cond_t *common_condition;
> > +int is_common;
> > +int is_started;
> > +int result;
> > +};
> > +
> > +struct thread_data
>
> This is against our naming conventions: CamelCase for struct tags and
> typedefs, lowercase names with underscore for variable names.

In the code files I looked at, CamelCase is only used for typedef structs.
All structs without typedef are lower case with underscores, so I aligned
with that, originally.

I will make this a typedef struct and use CamelCase for next patch.

> > +static int init_streams_multithreaded(AVFormatContext *s, int nstreams, int threads)
> > +{
> > +DASHContext *c = s->priv_data;
> > +int ret = 0;
> > +int stream_index = 0;
> > +int i;
>
> We allow "for (int i = 0;"

Oh, I did not know that, and I did not see it being used here anywhere.
Will use in next patch, it's my preferred style, actually.

> > +
> > +// alloc data
> > +struct work_pool_data *init_data = (struct work_pool_data*)av_mallocz(sizeof(struct work_pool_data) * nstreams);
> > +if (!init_data)
> > +return AVERROR(ENOMEM);
> > +
> > +struct thread_data *thread_data = (struct thread_data*)av_mallocz(sizeof(struct thread_data) * threads);
> > +if (!thread_data)
> > +return AVERROR(ENOMEM);
>
> 1. init_data leaks here on error.
> 2. In fact, it seems to me that both init_data and thread_data are
> nowhere freed.

True, I must have lost the av_free call at some point.

> > +// init work pool data
> > +struct work_pool_data* current_data = init_data;
> > +
> > +for (i = 0; i < c->n_videos; i++) {
> > +create_work_pool_data(s, stream_index, c->videos[i],
> > +c->is_init_section_common_video ? c->videos[0] : NULL,
> > +current_data, &common_video_mutex, &common_video_cond);
> > +
> > +stream_index++;
> > +current_data++;
> > +}
> > +
> > +for (i = 0; i < c->n_audios; i++) {
> > +create_work_pool_data(s, stream_index, c->audios[i],
> > +c->is_init_section_common_audio ? c->audios[0] : NULL,
> > +current_data, &common_audio_mutex, &common_audio_cond);
> > +
> > +stream_index++;
> > +current_data++;
> > +}
> > +
> > +for (i = 0; i < c->n_subtitles; i++) {
> > +create_work_pool_data(s, stream_index, c->subtitles[i],
> > +c->is_init_section_common_subtitle ? c->subtitles[0] : NULL,
> > +current_data, &common_subtitle_mutex, &common_subtitle_cond);
> > +
> > +stream_index++;
> > +current_data++;
> > +}
>
> This is very repetitive.

Will improve for next patch.

> 1. We actually have an API to process multiple tasks by different
> threads: Look at libavutil/slicethread.h. Why can't you reuse that?
> 2. In case initialization of one of the conditions/mutexes fails, you
> are nevertheless destroying them; you are even destroying completely
> uninitialized mutexes. This is undefined behaviour. Checking the result
> of it does not fix this.
>
> - Andreas

1. The slicethread implementation is pretty hard to understand.
I was not sure if it can be used for normal parallelization, because
it looked more like some kind of thread pool for continuous data
processing. But after taking a second look, I think I can use it here.
I will try and see if it works well.

2. I was not aware that this is undefined behavior. Will switch to
PTHREAD_MUTEX_INITIALIZER and PTHREAD_COND_INITIALIZER macros,
which return a safely initialized mutex/cond.

I also noticed one cross-thread issue that I will solve in the next patch.


Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH initialization

2022-09-05 Thread Lukas Fellechner
> Gesendet: Montag, 05. September 2022 um 00:50 Uhr
> Von: "Andreas Rheinhardt" 
> An: ffmpeg-devel@ffmpeg.org
> Betreff: Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH 
> initialization
> Lukas Fellechner:
> > Andreas Rheinhardt andreas.rheinhardt at outlook.com
> > Wed Aug 31 05:54:12 EEST 2022
> >>
> >
> >> 1. We actually have an API to process multiple tasks by different
> >> threads: Look at libavutil/slicethread.h. Why can't you reuse that?
> >> 2. In case initialization of one of the conditions/mutexes fails, you
> >> are nevertheless destroying them; you are even destroying completely
> >> uninitialized mutexes. This is undefined behaviour. Checking the result
> >> of it does not fix this.
> >>
> >> - Andreas
> >
> > 1. The slicethread implementation is pretty hard to understand.
> > I was not sure if it can be used for normal parallelization, because
> > it looked more like some kind of thread pool for continuous data
> > processing. But after taking a second look, I think I can use it here.
> > I will try and see if it works well.
> >
> > 2. I was not aware that this is undefined behavior. Will switch to
> > PTHREAD_MUTEX_INITIALIZER and PTHREAD_COND_INITIALIZER macros,
> > which return a safely initialized mutex/cond.
> >
> 
> "The behavior is undefined if the value specified by the mutex argument
> to pthread_mutex_destroy() does not refer to an initialized mutex."
> (From
> https://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_destroy.html)
> 
> Furthermore: "In cases where default mutex attributes are appropriate,
> the macro PTHREAD_MUTEX_INITIALIZER can be used to initialize mutexes.
> The effect shall be equivalent to dynamic initialization by a call to
> pthread_mutex_init() with parameter attr specified as NULL, except that
> no error checks are performed." The last sentence sounds as if one would
> then have to check mutex locking.
> 
> Moreover, older pthread standards did not allow to use
> PTHREAD_MUTEX_INITIALIZER with non-static mutexes, so I don't know
> whether we can use that. Also our pthreads-wrapper on top of
> OS/2-threads does not provide PTHREAD_COND_INITIALIZER (which is used
> nowhere in the codebase).

I missed that detail about the initializer macro. Thank you for clearing
that up.

After looking more into the threads implementation in FFmpeg, I wonder if I
really need to check any results of init/destroy or other functions.
In slicethreads.c, there is zero checking on any of the lock functions.
The pthreads-based implementation does internally check the results of all
function calls and calls abort() in case of errors ("strict_" wrappers).
The Win32 implementation uses SRW locks, which cannot even return errors.
And the OS2 implementation returns 0 on all calls as well.

So right now, I think that I should continue with normal _init() calls
(no macros) and drop all error checking, just like slicethread does.
Are you fine with that approach?


Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH initialization

2022-09-05 Thread Lukas Fellechner
>Gesendet: Montag, 05. September 2022 um 12:45 Uhr
>Von: "Andreas Rheinhardt" 
>An: ffmpeg-devel@ffmpeg.org
>Betreff: Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH 
>initialization
>Lukas Fellechner:
>>> Gesendet: Montag, 05. September 2022 um 00:50 Uhr
>>> Von: "Andreas Rheinhardt" 
>>> An: ffmpeg-devel@ffmpeg.org
>>> Betreff: Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH 
>>> initialization
>>> Lukas Fellechner:
>>>> Andreas Rheinhardt andreas.rheinhardt at outlook.com
>>>> Wed Aug 31 05:54:12 EEST 2022
>>>>>
>>>>
>>>>> 1. We actually have an API to process multiple tasks by different
>>>>> threads: Look at libavutil/slicethread.h. Why can't you reuse that?
>>>>> 2. In case initialization of one of the conditions/mutexes fails, you
>>>>> are nevertheless destroying them; you are even destroying completely
>>>>> uninitialized mutexes. This is undefined behaviour. Checking the result
>>>>> of it does not fix this.
>>>>>
>>>>> - Andreas
>>>>
>>>> 1. The slicethread implementation is pretty hard to understand.
>>>> I was not sure if it can be used for normal parallelization, because
>>>> it looked more like some kind of thread pool for continuous data
>>>> processing. But after taking a second look, I think I can use it here.
>>>> I will try and see if it works well.
>>>>
>>>> 2. I was not aware that this is undefined behavior. Will switch to
>>>> PTHREAD_MUTEX_INITIALIZER and PTHREAD_COND_INITIALIZER macros,
>>>> which return a safely initialized mutex/cond.
>>>>
>>>
>>> "The behavior is undefined if the value specified by the mutex argument
>>> to pthread_mutex_destroy() does not refer to an initialized mutex."
>>> (From
>>> https://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_destroy.html)
>>>
>>> Furthermore: "In cases where default mutex attributes are appropriate,
>>> the macro PTHREAD_MUTEX_INITIALIZER can be used to initialize mutexes.
>>> The effect shall be equivalent to dynamic initialization by a call to
>>> pthread_mutex_init() with parameter attr specified as NULL, except that
>>> no error checks are performed." The last sentence sounds as if one would
>>> then have to check mutex locking.
>>>
>>> Moreover, older pthread standards did not allow to use
>>> PTHREAD_MUTEX_INITIALIZER with non-static mutexes, so I don't know
>>> whether we can use that. Also our pthreads-wrapper on top of
>>> OS/2-threads does not provide PTHREAD_COND_INITIALIZER (which is used
>>> nowhere in the codebase).
>>
>> I missed that detail about the initializer macro. Thank you for clearing
>> that up.
>>
>> After looking more into threads implementation in ffmpeg, I wonder if I
>> really need to check any results of init/destroy or other functions.
>> In slicethread.c, there is zero checking on any of the lock functions.
>> The pthreads-based implementation does internally check the results of all
>> function calls and calls abort() in case of errors ("strict_" wrappers).
>> The Win32 implementation uses SRW locks which cannot even return errors.
>> And the OS2 implementation returns 0 on all calls as well.
>>
>> So right now, I think that I should continue with normal _init() calls
>> (no macros) and drop all error checking, just like slicethread does.
>> Are you fine with that approach?
>
> Zero checking is our old approach; the new approach checks for errors
> and ensures that only mutexes/condition variables that have been
> properly initialized are destroyed. See ff_pthread_init/free in
> libavcodec/pthread.c (you can't use this in libavformat, because these
> functions are local to libavcodec).
>
> - Andreas

I see. I will try to do clean error checking then.
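
The pattern Andreas points to can be sketched in plain C like this (a minimal illustration with invented names, loosely modelled on the ff_pthread_init/free idea of only ever destroying objects that were actually initialized):

```c
#include <assert.h>
#include <pthread.h>

/* Remember how far initialization got, and unwind only that far on
 * error or teardown. nb_inited: 0 = nothing, 1 = mutex, 2 = mutex + cond. */
typedef struct SyncState {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    int nb_inited;
} SyncState;

static void sync_state_free(SyncState *s)
{
    /* Destroy only what was initialized; destroying an uninitialized
     * mutex/cond is undefined behaviour per POSIX. */
    if (s->nb_inited >= 2)
        pthread_cond_destroy(&s->cond);
    if (s->nb_inited >= 1)
        pthread_mutex_destroy(&s->mutex);
    s->nb_inited = 0;
}

static int sync_state_init(SyncState *s)
{
    s->nb_inited = 0;
    if (pthread_mutex_init(&s->mutex, NULL))
        return -1;
    s->nb_inited = 1;
    if (pthread_cond_init(&s->cond, NULL)) {
        sync_state_free(s); /* destroys only the mutex */
        return -1;
    }
    s->nb_inited = 2;
    return 0;
}
```

If pthread_cond_init fails halfway through, only the already-initialized mutex is destroyed, which avoids the undefined behaviour quoted earlier in the thread.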


Re: [FFmpeg-devel] [RFC] d3dva security hw+threads

2022-09-05 Thread Lukas Fellechner
> Gesendet: Freitag, 02. September 2022 um 01:46 Uhr
> Von: "Timo Rothenpieler" 
> An: ffmpeg-devel@ffmpeg.org
> Betreff: Re: [FFmpeg-devel] [RFC] d3dva security hw+threads
> On 02.09.2022 01:32, Michael Niedermayer wrote:
>> Hi all
>>
>> There's a use-after-free issue in H.264 decoding on d3d11va with multiple
>> threads.
>> I don't have the hardware/platform, nor do I know the hw decoding code, so
>> I made no attempt to fix this beyond asking others to ...
>
> hwaccel with multiple threads being broken is not exactly a surprise.
> So we could just disable that, and always have it be one single thread?

I am using FFmpeg as a decoder library in a video player, either with sw decoding
or d3d11va. Originally, I had threads set to auto in all cases. While it worked
for some codecs such as H.264, it produced garbage output for other codecs.
I think VP8/VP9 are severely broken with threading+d3d11va. So I had to manually
set threads to 1 when using hwaccel. Only then did I get stable output for all
codecs.

Using multithreading together with hwaccel really does not make any sense.
All the work is done by the GPU, not by the CPU. And the GPU will parallelize
internally where possible. Adding CPU multithreading will not help at all.

IMHO the best would be to completely disable threading when hwaccel is in use.
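
The workaround described above boils down to something like this (a trivial sketch with invented names; the real decision would be made wherever the player configures its decoder context):

```c
#include <assert.h>

/* Hypothetical helper mirroring the workaround described above:
 * use a single decode thread whenever a hardware decoder is active,
 * otherwise fall back to the CPU-based auto thread count. */
static int pick_thread_count(int hwaccel_active, int auto_threads)
{
    return hwaccel_active ? 1 : auto_threads;
}
```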


[FFmpeg-devel] [PATCH v4 0/4] lavf/dashdec: Multithreaded DASH initialization

2022-09-05 Thread Lukas Fellechner
Initializing DASH streams is currently slow, because each individual
stream is opened and probed sequentially. With DASH presentations often
having 10-20 streams, this can easily take up to half a minute on slow
connections.

This patch adds an "init_threads" option, specifying the max number
of threads to use. Multiple worker threads are spun up to massively
bring down init times.




[FFmpeg-devel] [PATCH v4 3/4] lavf/dashdec: Prevent cross-thread avio_opts modification

2022-09-05 Thread Lukas Fellechner
open_url modifies the shared avio_opts dict (it updates cookies).
This can cause problems during multithreaded initialization.
To prevent this, take a copy of avio_opts, use that in open_url,
and copy the updated dict back afterwards.
---
 libavformat/dashdec.c | 34 --
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 0532e2c918..19e657d836 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -156,6 +156,11 @@ typedef struct DASHContext {

 int init_threads;

+#if HAVE_THREADS
+/* Set during parallel initialization, to allow locking of avio_opts */
+pthread_mutex_t *init_mutex;
+#endif
+
 /* Flags for init section*/
 int is_init_section_common_video;
 int is_init_section_common_audio;
@@ -1699,7 +1704,32 @@ static int open_input(DASHContext *c, struct representation *pls, struct fragment
 ff_make_absolute_url(url, c->max_url_size, c->base_url, seg->url);
 av_log(pls->parent, AV_LOG_VERBOSE, "DASH request for url '%s', offset %"PRId64"\n",
url, seg->url_offset);
-ret = open_url(pls->parent, &pls->input, url, &c->avio_opts, opts, NULL);
+
+AVDictionary *avio_opts = c->avio_opts;
+
+#if HAVE_THREADS
+// If we are doing parallel initialization, take a snapshot of the avio_opts,
+// and copy the modified dictionary ("cookies" updated) back, after the url is opened.
+if (c->init_mutex) {
+pthread_mutex_lock(c->init_mutex);
+avio_opts = NULL;
+ret = av_dict_copy(&avio_opts, c->avio_opts, 0);
+pthread_mutex_unlock(c->init_mutex);
+if (ret < 0)
+goto cleanup;
+}
+#endif
+
+ret = open_url(pls->parent, &pls->input, url, &avio_opts, opts, NULL);
+
+#if HAVE_THREADS
+if (c->init_mutex) {
+pthread_mutex_lock(c->init_mutex);
+av_dict_free(&c->avio_opts);
+c->avio_opts = avio_opts;
+pthread_mutex_unlock(c->init_mutex);
+}
+#endif

 cleanup:
 av_free(url);
@@ -2192,7 +,7 @@ static int init_streams_multithreaded(AVFormatContext *s, int nstreams, int threads)
 if (!avpriv_slicethread_create(&slice_thread, (void*)work_pool, &thread_worker, NULL, threads)) {
 av_free(work_pool);
 return AVERROR(ENOMEM);
-}
+}

 // alloc mutex and conditions
 c->init_mutex = create_mutex();
--
2.28.0.windows.1



[FFmpeg-devel] [PATCH v4 2/4] lavf/dashdec: Multithreaded DASH initialization

2022-09-05 Thread Lukas Fellechner
This patch adds an "init_threads" option, specifying the max
number of threads to use. Multiple worker threads are spun up
to massively bring down init times.
---
 libavformat/dashdec.c | 286 +-
 1 file changed, 285 insertions(+), 1 deletion(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index e82da45e43..0532e2c918 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -24,6 +24,8 @@
 #include "libavutil/opt.h"
 #include "libavutil/time.h"
 #include "libavutil/parseutils.h"
+#include "libavutil/thread.h"
+#include "libavutil/slicethread.h"
 #include "internal.h"
 #include "avio_internal.h"
 #include "dash.h"
@@ -152,6 +154,8 @@ typedef struct DASHContext {
 int max_url_size;
 char *cenc_decryption_key;

+int init_threads;
+
 /* Flags for init section*/
 int is_init_section_common_video;
 int is_init_section_common_audio;
@@ -2033,6 +2037,265 @@ static void move_metadata(AVStream *st, const char *key, char **value)
 }
 }

+#if HAVE_THREADS
+
+typedef struct WorkPoolData
+{
+AVFormatContext *ctx;
+struct representation *pls;
+struct representation *common_pls;
+pthread_mutex_t *common_mutex;
+pthread_cond_t *common_condition;
+int is_common;
+int is_started;
+int result;
+} WorkPoolData;
+
+static void thread_worker(void *priv, int jobnr, int threadnr, int nb_jobs, int nb_threads)
+{
+WorkPoolData *work_pool = (WorkPoolData*)priv;
+WorkPoolData *data = work_pool + jobnr;
+int ret;
+
+// if we are common section provider, init and signal
+if (data->is_common) {
+data->pls->parent = data->ctx;
+ret = update_init_section(data->pls);
+if (ret < 0) {
+pthread_cond_signal(data->common_condition);
+goto end;
+}
+else
+ret = AVERROR(pthread_cond_signal(data->common_condition));
+}
+
+// if we depend on common section provider, wait for signal and copy
+if (data->common_pls) {
+ret = AVERROR(pthread_cond_wait(data->common_condition, data->common_mutex));
+if (ret < 0)
+goto end;
+
+if (!data->common_pls->init_sec_buf) {
+ret = AVERROR(EFAULT);
+goto end;
+}
+
+ret = copy_init_section(data->pls, data->common_pls);
+if (ret < 0)
+goto end;
+}
+
+ret = begin_open_demux_for_component(data->ctx, data->pls);
+if (ret < 0)
+goto end;
+
+end:
+data->result = ret;
+}
+
+static void create_work_pool_data(AVFormatContext *ctx, int *stream_index,
+struct representation **streams, int num_streams, int is_init_section_common,
+WorkPoolData *work_pool, pthread_mutex_t* common_mutex,
+pthread_cond_t* common_condition)
+{
+work_pool += *stream_index;
+
+for (int i = 0; i < num_streams; i++) {
+work_pool->ctx = ctx;
+work_pool->pls = streams[i];
+work_pool->pls->stream_index = *stream_index;
+work_pool->common_condition = common_condition;
+work_pool->common_mutex = common_mutex;
+work_pool->result = -1;
+
+if (is_init_section_common) {
+if (i == 0)
+work_pool->is_common = 1;
+else
+work_pool->common_pls = streams[0];
+}
+
+work_pool++;
+*stream_index = *stream_index + 1;
+}
+}
+
+static pthread_mutex_t* create_mutex()
+{
+pthread_mutex_t* mutex = (pthread_mutex_t*)av_malloc(sizeof(pthread_mutex_t));
+if (!mutex)
+return NULL;
+
+if (pthread_mutex_init(mutex, NULL)) {
+av_free(mutex);
+return NULL;
+}
+
+return mutex;
+}
+
+static int free_mutex(pthread_mutex_t **mutex)
+{
+int ret = 0;
+if (*mutex) {
+ret = pthread_mutex_destroy(*mutex);
+av_free(*mutex);
+*mutex = NULL;
+}
+return ret;
+}
+
+static pthread_cond_t* create_cond()
+{
+pthread_cond_t* cond = (pthread_cond_t*)av_malloc(sizeof(pthread_cond_t));
+if (!cond)
+return NULL;
+
+if (pthread_cond_init(cond, NULL)) {
+av_free(cond);
+return NULL;
+}
+
+return cond;
+}
+
+static int free_cond(pthread_cond_t **cond)
+{
+int ret = 0;
+if (*cond) {
+ret = pthread_cond_destroy(*cond);
+av_free(*cond);
+*cond = NULL;
+}
+return ret;
+}
+
+static int init_streams_multithreaded(AVFormatContext *s, int nstreams, int threads)
+{
+DASHContext *c = s->priv_data;
+int ret = 0;
+int stream_index = 0;
+AVSliceThread *slice_thread;
+
+// we need to cleanup even in case of errors,
+// so we need to store results of run and cleanup phase
+int initResult = 0;
+int runResult = 0;
+int cleanupResult = 0;
+
+// alloc data
+WorkPoolData *work_pool = (WorkPoolData*)av_mallocz(
+sizeof(WorkPoolData) * nstreams);
+if (!work_pool)
+return AVERROR(ENOMEM);
+
+if (

[FFmpeg-devel] [PATCH v4 1/4] lavf/dashdec: Prepare DASH decoder for multithreading

2022-09-05 Thread Lukas Fellechner
For adding multithreading to the DASH decoder initialization,
the open_demux_for_component() method must be split up into two parts:

begin_open_demux_for_component(): Opens the stream and does probing
and format detection. This can be run in parallel.

end_open_demux_for_component(): Creates the AVStreams and adds
them to the common parent AVFormatContext. This method must always be
run synchronously, after all threads are finished.
---
 libavformat/dashdec.c | 42 ++
 1 file changed, 30 insertions(+), 12 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 63bf7e96a5..e82da45e43 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -1918,10 +1918,9 @@ fail:
 return ret;
 }

-static int open_demux_for_component(AVFormatContext *s, struct representation *pls)
+static int begin_open_demux_for_component(AVFormatContext *s, struct representation *pls)
 {
 int ret = 0;
-int i;

 pls->parent = s;
 pls->cur_seq_no  = calc_cur_seg_no(s, pls);
@@ -1931,9 +1930,15 @@ static int open_demux_for_component(AVFormatContext *s, struct representation *pls)
 }

 ret = reopen_demux_for_component(s, pls);
-if (ret < 0) {
-goto fail;
-}
+
+return ret;
+}
+
+static int end_open_demux_for_component(AVFormatContext *s, struct representation *pls)
+{
+int ret = 0;
+int i;
+
 for (i = 0; i < pls->ctx->nb_streams; i++) {
 AVStream *st = avformat_new_stream(s, NULL);
 AVStream *ist = pls->ctx->streams[i];
@@ -1965,6 +1970,19 @@ fail:
 return ret;
 }

+static int open_demux_for_component(AVFormatContext* s, struct representation* pls)
+{
+int ret = 0;
+
+ret = begin_open_demux_for_component(s, pls);
+if (ret < 0)
+return ret;
+
+ret = end_open_demux_for_component(s, pls);
+
+return ret;
+}
+
 static int is_common_init_section_exist(struct representation **pls, int n_pls)
 {
 struct fragment *first_init_section = pls[0]->init_section;
@@ -2040,9 +2058,15 @@ static int dash_read_header(AVFormatContext *s)
 av_dict_set(&c->avio_opts, "seekable", "0", 0);
 }

-if(c->n_videos)
+if (c->n_videos)
 c->is_init_section_common_video = is_common_init_section_exist(c->videos, c->n_videos);

+if (c->n_audios)
+c->is_init_section_common_audio = is_common_init_section_exist(c->audios, c->n_audios);
+
+if (c->n_subtitles)
+c->is_init_section_common_subtitle = is_common_init_section_exist(c->subtitles, c->n_subtitles);
+
 /* Open the demuxer for video and audio components if available */
 for (i = 0; i < c->n_videos; i++) {
 rep = c->videos[i];
@@ -2059,9 +2083,6 @@ static int dash_read_header(AVFormatContext *s)
 ++stream_index;
 }

-if(c->n_audios)
-c->is_init_section_common_audio = is_common_init_section_exist(c->audios, c->n_audios);
-
 for (i = 0; i < c->n_audios; i++) {
 rep = c->audios[i];
 if (i > 0 && c->is_init_section_common_audio) {
@@ -2077,9 +2098,6 @@ static int dash_read_header(AVFormatContext *s)
 ++stream_index;
 }

-if (c->n_subtitles)
-c->is_init_section_common_subtitle = is_common_init_section_exist(c->subtitles, c->n_subtitles);
-
 for (i = 0; i < c->n_subtitles; i++) {
 rep = c->subtitles[i];
 if (i > 0 && c->is_init_section_common_subtitle) {
--
2.28.0.windows.1



[FFmpeg-devel] [PATCH v4 4/4] lavf/dashdec: Fix indentation after adding multithreading

2022-09-05 Thread Lukas Fellechner
Whitespace change only. No functional changes.
---
 libavformat/dashdec.c | 74 +--
 1 file changed, 37 insertions(+), 37 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 19e657d836..22f727da3b 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -2377,54 +2377,54 @@ static int dash_read_header(AVFormatContext *s)
 }
 else
 {
-/* Open the demuxer for video and audio components if available */
-for (i = 0; i < c->n_videos; i++) {
-rep = c->videos[i];
-if (i > 0 && c->is_init_section_common_video) {
-ret = copy_init_section(rep, c->videos[0]);
-if (ret < 0)
+/* Open the demuxer for video and audio components if available */
+for (i = 0; i < c->n_videos; i++) {
+rep = c->videos[i];
+if (i > 0 && c->is_init_section_common_video) {
+ret = copy_init_section(rep, c->videos[0]);
+if (ret < 0)
+return ret;
+}
+ret = open_demux_for_component(s, rep);
+
+if (ret)
 return ret;
+rep->stream_index = stream_index;
+++stream_index;
 }
-ret = open_demux_for_component(s, rep);

-if (ret)
-return ret;
-rep->stream_index = stream_index;
-++stream_index;
-}
+for (i = 0; i < c->n_audios; i++) {
+rep = c->audios[i];
+if (i > 0 && c->is_init_section_common_audio) {
+ret = copy_init_section(rep, c->audios[0]);
+if (ret < 0)
+return ret;
+}
+ret = open_demux_for_component(s, rep);

-for (i = 0; i < c->n_audios; i++) {
-rep = c->audios[i];
-if (i > 0 && c->is_init_section_common_audio) {
-ret = copy_init_section(rep, c->audios[0]);
-if (ret < 0)
+if (ret)
 return ret;
+rep->stream_index = stream_index;
+++stream_index;
 }
-ret = open_demux_for_component(s, rep);

-if (ret)
-return ret;
-rep->stream_index = stream_index;
-++stream_index;
-}
+for (i = 0; i < c->n_subtitles; i++) {
+rep = c->subtitles[i];
+if (i > 0 && c->is_init_section_common_subtitle) {
+ret = copy_init_section(rep, c->subtitles[0]);
+if (ret < 0)
+return ret;
+}
+ret = open_demux_for_component(s, rep);

-for (i = 0; i < c->n_subtitles; i++) {
-rep = c->subtitles[i];
-if (i > 0 && c->is_init_section_common_subtitle) {
-ret = copy_init_section(rep, c->subtitles[0]);
-if (ret < 0)
+if (ret)
 return ret;
+rep->stream_index = stream_index;
+++stream_index;
 }
-ret = open_demux_for_component(s, rep);

-if (ret)
-return ret;
-rep->stream_index = stream_index;
-++stream_index;
-}
-
-if (!stream_index)
-return AVERROR_INVALIDDATA;
+if (!stream_index)
+return AVERROR_INVALIDDATA;
 }

 /* Create a program */
--
2.28.0.windows.1



Re: [FFmpeg-devel] [PATCH] slicethread: Limit the automatic number of threads to 16

2022-09-06 Thread Lukas Fellechner
>Gesendet: Montag, 05. September 2022 um 21:58 Uhr
>Von: "Martin Storsjö" 
>An: ffmpeg-devel@ffmpeg.org
>Betreff: Re: [FFmpeg-devel] [PATCH] slicethread: Limit the automatic number of 
>threads to 16
>On Mon, 5 Sep 2022, Martin Storsjö wrote:
>
>> This matches a similar cap on the number of automatic threads
>> in libavcodec/pthread_slice.c.
>>
>> On systems with lots of cores, this does speed things up in
>> general (measurable on the level of the runtime of running
>> "make fate"), and fixes a couple fate failures in 32 bit mode on
>> such machines (where spawning a huge number of threads runs
>> out of address space).
>> ---
>
> On second thought - this observation that it speeds up "make -j$(nproc)
> fate" isn't surprising at all; as long as there are jobs to saturate all
> cores with the make level parallelism anyway, any threading within each
> job just adds extra overhead, nothing more.
>
> // Martin

Agreed, this observation of massively parallel test runs does not tell
much about real world performance.
There are really two separate issues here:

1. Running out of address space in 32-bit processes

It probably makes sense to limit auto threads to 16, but it should only
be done in 32-bit processes. A 64-bit process should never run out of
address space. We should not cripple high end machines running
64-bit applications.


Sidenotes about "it does not make sense to have more than 16 slices":

On 8K video, when using 32 threads, each thread will process 256 lines
or about 1MP (> FullHD!). Sure makes sense to me. But even for sw decoding
4K video, having more than 16 threads on a powerful machine makes sense.

Intel's next desktop CPUs will have up to 24 physical cores. The
proposed change would limit them to use only 16 cores, even on 64-bit.


2. Spawning too many threads when "auto" is used in multiple places

This can indeed be an efficiency problem, although probably not major.
Since usually only one part of the pipeline is active at any time,
many of the threads will be sleeping, consuming very little resources.

The issue only affects certain scenarios. If someone has such
a scenario and wants to optimize, they could explicitly set threads to 
a lower value, and see if it helps.

Putting an arbitrary limit on threads would only "solve" this issue
for the biggest CPUs (which have more than enough power anyway),
at the cost of crippling their performance in other scenarios.

A "normal" <= 8 core CPU might still end up with 16 threads for
the decoder, 16 threads for effects and 16 threads for encoding,
with 2/3 of them sleeping at any time.

--> The issue affects only certain scenarios. The proposed fix only
fixes it for a minority of all PCs, while it cripples performance
of these PCs in other scenarios.

--> I do not think that this 16 threads limit is a good idea.
IMHO "auto" should always use the logical CPU count,
except for 32-bit applications.

The only true solution to this problem would be adding a shared
thread pool. The application would create the pool when it starts,
with the number of logical CPU cores as the maximum (maybe limited on 32-bit).
It would pass this pool to all created decoders/encoders/filters. But doing this
correctly is a major task, and it would require major rework in all areas
where multithreading is used now. I am not sure if the problem is really big
enough to justify this effort.
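
A minimal sketch of such a shared budget (all names invented for illustration; a real pool would also own the worker threads and the synchronization around this state):

```c
#include <assert.h>

/* The application creates one budget sized to the logical core count,
 * and every decoder/encoder/filter asks it for workers instead of
 * spawning "auto" threads independently. */
typedef struct ThreadBudget {
    int total;  /* logical cores (possibly capped on 32-bit) */
    int in_use; /* workers currently handed out */
} ThreadBudget;

static int budget_acquire(ThreadBudget *b, int wanted)
{
    int granted = b->total - b->in_use;
    if (granted > wanted)
        granted = wanted;
    if (granted < 1)
        granted = 1; /* every component can at least run single-threaded,
                      * so mild oversubscription is allowed in this sketch */
    b->in_use += granted;
    return granted;
}

static void budget_release(ThreadBudget *b, int n)
{
    b->in_use -= n;
}
```

With this, a decoder, a filter graph, and an encoder sharing a 16-core budget can no longer each spin up 16 threads at once.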


Re: [FFmpeg-devel] [PATCH] slicethread: Limit the automatic number of threads to 16

2022-09-11 Thread Lukas Fellechner
Gesendet: Dienstag, 06. September 2022 um 23:11 Uhr
Von: "Andreas Rheinhardt" 
An: ffmpeg-devel@ffmpeg.org
Betreff: Re: [FFmpeg-devel] [PATCH] slicethread: Limit the automatic number of 
threads to 16
Lukas Fellechner:
>> 1. Running out of address space in 32-bit processes
>>
>> It probably makes sense to limit auto threads to 16, but it should only
>> be done in 32-bit processes. A 64-bit process should never run out of
>> address space. We should not cripple high end machines running
>> 64-bit applications.
>>
>>
>> Sidenotes about "it does not make sense to have more than 16 slices":
>>
>> On 8K video, when using 32 threads, each thread will process 256 lines
>> or about 1MP (> FullHD!). Sure makes sense to me. But even for sw decoding
>> 4K video, having more than 16 threads on a powerful machine makes sense.
>>
>> Intel's next desktop CPUs will have up to 24 physical cores. The
>> proposed change would limit them to use only 16 cores, even on 64-bit.
>
> This part is completely wrong: You can always set the number of threads
> manually.
> (Btw: 1. 8K is the horizontal resolution; the vertical resolution
> is 4320 (when using 16:9), so every thread processes 135 lines which
> have as many pixels as 540 lines of FullHD. 2. FullHD has about 2MP.)
> 
> - Andreas

You are right. What I meant was: when someone does not explicitly set
threads, then only 16 of his 24 cores will be used. I know that it is
always possible to manually override the auto value without limits.

And indeed I somehow confused the resolutions. Still, each thread would
process 1MP of pixel data, which is a lot of data.

- Lukas



Re: [FFmpeg-devel] [PATCH] slicethread: Limit the automatic number of threads to 16

2022-09-11 Thread Lukas Fellechner
>> 2. Spawning too many threads when "auto" is used in multiple places
>>
>> This can indeed be an efficiency problem, although probably not major.
>> Since usually only one part of the pipeline is active at any time,
>> many of the threads will be sleeping, consuming very little resources.
>
> For 32 bit processes running out of address space, yes, the issue is with
> "auto" being used in many places at once.
>
> But in general, allowing arbitrarily high numbers of auto threads isn't
> beneficial - the optimal cap of threads depends a lot on the content at
> hand.
>
> The system I'm testing on has 160 cores - and it's quite certain that
> doing slice threading with 160 slices doesn't make sense. Maybe the cap of
> 16 is indeed too low - I don't mind raising it to 32 or something like
> that. Ideally, the auto mechanism would factor in the resolution of the
> content.
>
> Just for arguments sake - here's the output from 'time ffmpeg ...' for a
> fairly straightforward transcode (decode, transpose, scale, encode), 1080p
> input 10bit, 720p output 8bit, with explicitly setting the number of
> threads ("ffmpeg -threads N -i input -threads N -filter_threads N
> output").
>
> 12:
> real 0m25.079s
> user 5m22.318s
> sys 0m5.047s
>
> 16:
> real 0m19.967s
> user 6m3.607s
> sys 0m9.112s
>
> 20:
> real 0m20.853s
> user 6m21.841s
> sys 0m28.829s
>
> 24:
> real 0m20.642s
> user 6m28.022s
> sys 1m1.262s
>
> 32:
> real 0m29.785s
> user 6m8.442s
> sys 4m45.290s
>
> 64:
> real 1m0.808s
> user 6m31.065s
> sys 40m44.598s
>
> I'm not testing this with 160 threads for each stage, since 64 already was
> painfully slow - while you suggest that using threads==cores always should
> be preferred, regardless of the number of cores. The optimum here seems to
> be somewhere between 16 and 20.

These are interesting scores. I would not have expected such a dramatic
effect of having too many threads. You are probably right that always using
the core count as auto threads is not such a good idea.

But the encoding part works on 720p, so there each of the 64 threads only
has 11 lines and about 14,000 pixels to process, which is really not much.
I do not have a CPU with so many cores, but when doing a 4K -> 4K transcode,
I sure see a benefit of using 32 vs 16 cores.

Maybe the best approach would really be to decide the auto thread count
based on the amount of pixels to process (I would not use line count because
when the line count doubles, the pixel count usually goes up by a factor of 4).
This would probably need some more test data. I will also try to do some
testing on my side.
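
One possible shape of such a pixel-based heuristic (purely illustrative; the ~1 megapixel-per-thread threshold is a guess, not a measured value):

```c
#include <assert.h>
#include <stdint.h>

/* Pick the auto thread count from the pixel count of a frame
 * (~1 MP of work per thread), clamped to the logical core count. */
static int auto_threads_for_frame(int64_t width, int64_t height, int logical_cpus)
{
    int64_t t = (width * height) / 1000000; /* ~1 MP per thread */
    if (t < 1)
        t = 1;
    if (t > logical_cpus)
        t = logical_cpus;
    return (int)t;
}
```

On the 160-core machine from the benchmark, this would give 2 threads for 1080p and 33 for 8K, instead of 160 in both cases.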

- Lukas


Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH initialization

2022-09-11 Thread Lukas Fellechner
Gesendet: Montag, 05. September 2022 um 12:45 Uhr
Von: "Andreas Rheinhardt" 
An: ffmpeg-devel@ffmpeg.org
Betreff: Re: [FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH 
initialization
Lukas Fellechner:
>>> Moreover, older pthread standards did not allow to use
>>> PTHREAD_MUTEX_INITIALIZER with non-static mutexes, so I don't know
>>> whether we can use that. Also our pthreads-wrapper on top of
>>> OS/2-threads does not provide PTHREAD_COND_INITIALIZER (which is used
>>> nowhere in the codebase).
>>
>> I missed that detail about the initializer macro. Thank you for clearing
>> that up.
>>
>> After looking more into threads implementation in ffmpeg, I wonder if I
>> really need to check any results of init/destroy or other functions.
>> In slicethread.c, there is zero checking on any of the lock functions.
>> The pthreads-based implementation does internally check the results of all
>> function calls and calls abort() in case of errors ("strict_" wrappers).
>> The Win32 implementation uses SRW locks which cannot even return errors.
>> And the OS2 implementation returns 0 on all calls as well.
>>
>> So right now, I think that I should continue with normal _init() calls
>> (no macros) and drop all error checking, just like slicethread does.
>> Are you fine with that approach?
> 
> Zero checking is our old approach; the new approach checks for errors
> and ensures that only mutexes/condition variables that have been
> properly initialized are destroyed. See ff_pthread_init/free in
> libavcodec/pthread.c (you can't use this in libavformat, because these
> functions are local to libavcodec).
> 
> - Andreas

I was able to switch to using the slicethread implementation. It has a
very minor delay on init, because it waits for all threads to fully start
up before continuing. But it is only a few ms and not worth adding a new
implementation just for that.

I also changed the initialization and release of mutex and conds, with
full return code checking and safe release.

There was one cross-thread issue I needed to address.

A multi hour duration test (connecting in endless loop) did not show
any issues after fixing the avio_opts cross thread access.

Please see the v4 patch for all the changes.

- Lukas


Re: [FFmpeg-devel] [PATCH] avcodec: Assert on codec->encode2 in encode_audio2

2015-08-27 Thread Lukas Blubb
2015-08-27 12:45 GMT+02:00 wm4 :
> [..]
>
> When can this happen? Shouldn't it just return an error somewhere if
> this is not an encoder?

I'm working on a Rust wrapper library and apparently am doing something
wrong (it segfaulted). `avcodec_encode_video2()` has this assertion, so I
thought it might fit here as well.
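
For illustration, the assertion pattern under discussion looks roughly like this (simplified stand-in types, not the real libavcodec structures):

```c
#include <assert.h>
#include <stddef.h>

/* Crash with a clear assertion instead of a later segfault when a codec
 * without an encode callback is passed to an encode entry point. */
typedef struct FakeCodec {
    int (*encode2)(void);
} FakeCodec;

static int dummy_encode2(void)
{
    return 0;
}

static int checked_encode(const FakeCodec *codec)
{
    assert(codec && codec->encode2); /* fail loudly, like av_assert0 */
    return codec->encode2();
}
```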


[FFmpeg-devel] [PATCH] libavcodec: v4l2m2m: make sure to unref avpkt

2018-06-26 Thread Lukas Rusak
This was found using valgrind. With this patch applied, the memleak is no
longer present.
---
 libavcodec/v4l2_m2m_dec.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index 598dc10781..710e40efd8 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -149,11 +149,14 @@ static int v4l2_receive_frame(AVCodecContext *avctx, AVFrame *frame)
 
 if (avpkt.size) {
 ret = v4l2_try_start(avctx);
-if (ret)
+if (ret) {
+av_packet_unref(&avpkt);
 return 0;
+}
 }
 
 dequeue:
+av_packet_unref(&avpkt);
 return ff_v4l2_context_dequeue_frame(capture, frame);
 }
 
-- 
2.17.0



[FFmpeg-devel] [PATCH 2/4] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-08-03 Thread Lukas Rusak
This allows for a zero-copy output by exporting the v4l2 buffer then wrapping 
that buffer
in the AVDRMFrameDescriptor like it is done in rkmpp.

This has been in use for quite some time with great success on many platforms 
including:
 - Amlogic S905
 - Raspberry Pi
 - i.MX6
 - Dragonboard 410c

This was developed in conjunction with Kodi to allow handling zero-copy 
buffer rendering.
A simple utility for testing is also available here: 
https://github.com/BayLibre/ffmpeg-drm

todo:
 - allow selecting pixel format output from decoder
 - allow configuring amount of output and capture buffers

V2:
 - allow selecting AV_PIX_FMT_DRM_PRIME

V3:
 - use get_format to select AV_PIX_FMT_DRM_PRIME
 - use hw_configs
 - add handling of AV_PIX_FMT_YUV420P format (for raspberry pi)
 - add handling of AV_PIX_FMT_YUYV422 format (for i.MX6 coda decoder)
---
 libavcodec/v4l2_buffers.c | 216 --
 libavcodec/v4l2_buffers.h |   4 +
 libavcodec/v4l2_context.c |  40 ++-
 libavcodec/v4l2_m2m.c |   4 +-
 libavcodec/v4l2_m2m.h |   3 +
 libavcodec/v4l2_m2m_dec.c |  23 
 6 files changed, 253 insertions(+), 37 deletions(-)

diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
index aef911f3bb..e5c46ac81e 100644
--- a/libavcodec/v4l2_buffers.c
+++ b/libavcodec/v4l2_buffers.c
@@ -21,6 +21,7 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
  */
 
+#include 
 #include 
 #include 
 #include 
@@ -29,6 +30,7 @@
 #include 
 #include "libavcodec/avcodec.h"
 #include "libavcodec/internal.h"
+#include "libavutil/hwcontext.h"
 #include "v4l2_context.h"
 #include "v4l2_buffers.h"
 #include "v4l2_m2m.h"
@@ -203,7 +205,79 @@ static enum AVColorTransferCharacteristic 
v4l2_get_color_trc(V4L2Buffer *buf)
 return AVCOL_TRC_UNSPECIFIED;
 }
 
-static void v4l2_free_buffer(void *opaque, uint8_t *unused)
+static uint8_t * v4l2_get_drm_frame(V4L2Buffer *avbuf)
+{
+AVDRMFrameDescriptor *drm_desc = &avbuf->drm_frame;
+AVDRMLayerDescriptor *layer;
+
+/* fill the DRM frame descriptor */
+drm_desc->nb_objects = avbuf->num_planes;
+drm_desc->nb_layers = 1;
+
+layer = &drm_desc->layers[0];
+layer->nb_planes = avbuf->num_planes;
+
+for (int i = 0; i < avbuf->num_planes; i++) {
+layer->planes[i].object_index = i;
+layer->planes[i].offset = 0;
+layer->planes[i].pitch = avbuf->plane_info[i].bytesperline;
+}
+
+switch (avbuf->context->av_pix_fmt) {
+case AV_PIX_FMT_YUYV422:
+
+layer->format = DRM_FORMAT_YUYV;
+layer->nb_planes = 1;
+
+break;
+
+case AV_PIX_FMT_NV12:
+case AV_PIX_FMT_NV21:
+
+layer->format = avbuf->context->av_pix_fmt == AV_PIX_FMT_NV12 ?
+DRM_FORMAT_NV12 : DRM_FORMAT_NV21;
+
+if (avbuf->num_planes > 1)
+break;
+
+layer->nb_planes = 2;
+
+layer->planes[1].object_index = 0;
+layer->planes[1].offset = avbuf->plane_info[0].bytesperline *
+avbuf->context->format.fmt.pix.height;
+layer->planes[1].pitch = avbuf->plane_info[0].bytesperline;
+break;
+
+case AV_PIX_FMT_YUV420P:
+
+layer->format = DRM_FORMAT_YUV420;
+
+if (avbuf->num_planes > 1)
+break;
+
+layer->nb_planes = 3;
+
+layer->planes[1].object_index = 0;
+layer->planes[1].offset = avbuf->plane_info[0].bytesperline *
+avbuf->context->format.fmt.pix.height;
+layer->planes[1].pitch = avbuf->plane_info[0].bytesperline >> 1;
+
+layer->planes[2].object_index = 0;
+layer->planes[2].offset = layer->planes[1].offset +
+((avbuf->plane_info[0].bytesperline *
+  avbuf->context->format.fmt.pix.height) >> 2);
+layer->planes[2].pitch = avbuf->plane_info[0].bytesperline >> 1;
+break;
+
+default:
+drm_desc->nb_layers = 0;
+break;
+}
+
+return (uint8_t *) drm_desc;
+}
+
+static void v4l2_free_buffer(void *opaque, uint8_t *data)
 {
 V4L2Buffer* avbuf = opaque;
 V4L2m2mContext *s = buf_to_m2mctx(avbuf);
@@ -227,27 +301,47 @@ static void v4l2_free_buffer(void *opaque, uint8_t 
*unused)
 }
 }
 
-static int v4l2_buf_to_bufref(V4L2Buffer *in, int plane, AVBufferRef **buf)
+static int v4l2_buffer_export_drm(V4L2Buffer* avbuf)
 {
-V4L2m2mContext *s = buf_to_m2mctx(in);
+struct v4l2_exportbuffer expbuf;
+int i, ret;
 
-if (plane >= in->num_planes)
-return AVERROR(EINVAL);
+for (i = 0; i < avbuf->num_planes; i++) {
+memset(&expbuf, 0, sizeof(expbuf));
 
-/* even though most encoders return 0 in data_offset encoding vp8 does 
require this value */
-*buf = av_buffer_create((char *)in->plane_info[plane].mm_addr + 
in->planes[plane].data_offset,
-in->plane_info[plane].length, v4l2_free_buffer, 
in, 0);
-if (!*buf)
-return AVERROR(ENOMEM);
+expbuf.index = avbuf->buf.index;
+  

[FFmpeg-devel] [PATCH 4/4] libavcodec: v4l2m2m: fix error handling during buffer init

2018-08-03 Thread Lukas Rusak
From: Jorge Ramirez-Ortiz 

Signed-off-by: Jorge Ramirez-Ortiz 
---
 libavcodec/v4l2_context.c | 19 ---
 libavcodec/v4l2_m2m_dec.c |  9 +++--
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/libavcodec/v4l2_context.c b/libavcodec/v4l2_context.c
index 9457fadb1e..fd3161ce2f 100644
--- a/libavcodec/v4l2_context.c
+++ b/libavcodec/v4l2_context.c
@@ -263,6 +263,12 @@ static V4L2Buffer* v4l2_dequeue_v4l2buf(V4L2Context *ctx, 
int timeout)
 /* if we are draining and there are no more capture buffers queued in the 
driver we are done */
 if (!V4L2_TYPE_IS_OUTPUT(ctx->type) && ctx_to_m2mctx(ctx)->draining) {
 for (i = 0; i < ctx->num_buffers; i++) {
+/* capture buffer initialization happens during decode hence
+ * detection happens at runtime
+ */
+if (!ctx->buffers)
+break;
+
 if (ctx->buffers[i].status == V4L2BUF_IN_DRIVER)
 goto start;
 }
@@ -724,9 +730,8 @@ int ff_v4l2_context_init(V4L2Context* ctx)
 ctx->buffers[i].context = ctx;
 ret = ff_v4l2_buffer_initialize(&ctx->buffers[i], i);
 if (ret < 0) {
-av_log(logger(ctx), AV_LOG_ERROR, "%s buffer initialization 
(%s)\n", ctx->name, av_err2str(ret));
-av_free(ctx->buffers);
-return ret;
+av_log(logger(ctx), AV_LOG_ERROR, "%s buffer[%d] initialization 
(%s)\n", ctx->name, i, av_err2str(ret));
+goto error;
 }
 }
 
@@ -739,4 +744,12 @@ int ff_v4l2_context_init(V4L2Context* ctx)
 V4L2_TYPE_IS_MULTIPLANAR(ctx->type) ? 
ctx->format.fmt.pix_mp.plane_fmt[0].bytesperline : 
ctx->format.fmt.pix.bytesperline);
 
 return 0;
+
+error:
+v4l2_release_buffers(ctx);
+
+av_free(ctx->buffers);
+ctx->buffers = NULL;
+
+return ret;
 }
diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index 29d894492f..c4f4f7837f 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -92,8 +92,8 @@ static int v4l2_try_start(AVCodecContext *avctx)
 if (!capture->buffers) {
 ret = ff_v4l2_context_init(capture);
 if (ret) {
-av_log(avctx, AV_LOG_DEBUG, "can't request output buffers\n");
-return ret;
+av_log(avctx, AV_LOG_ERROR, "can't request capture buffers\n");
+return AVERROR(ENOMEM);
 }
 }
 
@@ -157,6 +157,11 @@ static int v4l2_receive_frame(AVCodecContext *avctx, 
AVFrame *frame)
 ret = v4l2_try_start(avctx);
 if (ret) {
 av_packet_unref(&avpkt);
+
+/* can't recover */
+if (ret == AVERROR(ENOMEM))
+return ret;
+
 return 0;
 }
 }
-- 
2.17.1



[FFmpeg-devel] [PATCH 1/4] libavcodec: v4l2m2m: fix indentation and add M2MDEC_CLASS

2018-08-03 Thread Lukas Rusak
This just makes M2MDEC_CLASS similar to how it is done in rkmpp. It looks
cleaner and has proper indentation.
---
 libavcodec/v4l2_m2m_dec.c | 46 ---
 1 file changed, 24 insertions(+), 22 deletions(-)

diff --git a/libavcodec/v4l2_m2m_dec.c b/libavcodec/v4l2_m2m_dec.c
index 710e40efd8..7926e25efa 100644
--- a/libavcodec/v4l2_m2m_dec.c
+++ b/libavcodec/v4l2_m2m_dec.c
@@ -205,29 +205,31 @@ static const AVOption options[] = {
 { NULL},
 };
 
+#define M2MDEC_CLASS(NAME) \
+static const AVClass v4l2_m2m_ ## NAME ## _dec_class = { \
+.class_name = #NAME "_v4l2_m2m_decoder", \
+.item_name  = av_default_item_name, \
+.option = options, \
+.version= LIBAVUTIL_VERSION_INT, \
+};
+
 #define M2MDEC(NAME, LONGNAME, CODEC, bsf_name) \
-static const AVClass v4l2_m2m_ ## NAME ## _dec_class = {\
-.class_name = #NAME "_v4l2_m2m_decoder",\
-.item_name  = av_default_item_name,\
-.option = options,\
-.version= LIBAVUTIL_VERSION_INT,\
-};\
-\
-AVCodec ff_ ## NAME ## _v4l2m2m_decoder = { \
-.name   = #NAME "_v4l2m2m" ,\
-.long_name  = NULL_IF_CONFIG_SMALL("V4L2 mem2mem " LONGNAME " decoder 
wrapper"),\
-.type   = AVMEDIA_TYPE_VIDEO,\
-.id = CODEC ,\
-.priv_data_size = sizeof(V4L2m2mPriv),\
-.priv_class = &v4l2_m2m_ ## NAME ## _dec_class,\
-.init   = v4l2_decode_init,\
-.receive_frame  = v4l2_receive_frame,\
-.close  = ff_v4l2_m2m_codec_end,\
-.bsfs   = bsf_name, \
-.capabilities   = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DELAY | \
-  AV_CODEC_CAP_AVOID_PROBING, \
-.wrapper_name   = "v4l2m2m", \
-};
+M2MDEC_CLASS(NAME) \
+AVCodec ff_ ## NAME ## _v4l2m2m_decoder = { \
+.name   = #NAME "_v4l2m2m" , \
+.long_name  = NULL_IF_CONFIG_SMALL("V4L2 mem2mem " LONGNAME " 
decoder wrapper"), \
+.type   = AVMEDIA_TYPE_VIDEO, \
+.id = CODEC , \
+.priv_data_size = sizeof(V4L2m2mPriv), \
+.priv_class = &v4l2_m2m_ ## NAME ## _dec_class, \
+.init   = v4l2_decode_init, \
+.receive_frame  = v4l2_receive_frame, \
+.close  = ff_v4l2_m2m_codec_end, \
+.bsfs   = bsf_name, \
+.capabilities   = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DELAY | \
+  AV_CODEC_CAP_AVOID_PROBING, \
+.wrapper_name   = "v4l2m2m", \
+};
 
 M2MDEC(h264,  "H.264", AV_CODEC_ID_H264,   "h264_mp4toannexb");
 M2MDEC(hevc,  "HEVC",  AV_CODEC_ID_HEVC,   "hevc_mp4toannexb");
-- 
2.17.1



[FFmpeg-devel] [PATCH 3/4] libavcodec: v4l2m2m: adjust formatting

2018-08-03 Thread Lukas Rusak
Just some simple formatting fixes to unify the code style.
---
 libavcodec/v4l2_buffers.c | 23 +++
 libavcodec/v4l2_buffers.h |  1 -
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
index e5c46ac81e..897c3c4636 100644
--- a/libavcodec/v4l2_buffers.c
+++ b/libavcodec/v4l2_buffers.c
@@ -401,7 +401,8 @@ static int v4l2_bufref_to_buf(V4L2Buffer *out, int plane, 
const uint8_t* data, i
 bytesused = FFMIN(size, out->plane_info[plane].length);
 length = out->plane_info[plane].length;
 
-memcpy(out->plane_info[plane].mm_addr, data, FFMIN(size, 
out->plane_info[plane].length));
+memcpy(out->plane_info[plane].mm_addr, data,
+   FFMIN(size, out->plane_info[plane].length));
 
 if (V4L2_TYPE_IS_MULTIPLANAR(out->buf.type)) {
 out->planes[plane].bytesused = bytesused;
@@ -425,7 +426,10 @@ int ff_v4l2_buffer_avframe_to_buf(const AVFrame *frame, 
V4L2Buffer* out)
 int i, ret;
 
 for(i = 0; i < out->num_planes; i++) {
-ret = v4l2_bufref_to_buf(out, i, frame->buf[i]->data, 
frame->buf[i]->size, frame->buf[i]);
+ret = v4l2_bufref_to_buf(out, i,
+frame->buf[i]->data,
+frame->buf[i]->size,
+frame->buf[i]);
 if (ret)
 return ret;
 }
@@ -480,8 +484,8 @@ int ff_v4l2_buffer_buf_to_avframe(AVFrame *frame, 
V4L2Buffer *avbuf)
 /* 2. get frame information */
 frame->key_frame = !!(avbuf->buf.flags & V4L2_BUF_FLAG_KEYFRAME);
 frame->color_primaries = v4l2_get_color_primaries(avbuf);
-frame->colorspace = v4l2_get_color_space(avbuf);
 frame->color_range = v4l2_get_color_range(avbuf);
+frame->colorspace = v4l2_get_color_space(avbuf);
 frame->color_trc = v4l2_get_color_trc(avbuf);
 frame->pts = v4l2_get_pts(avbuf);
 
@@ -507,7 +511,8 @@ int ff_v4l2_buffer_buf_to_avpkt(AVPacket *pkt, V4L2Buffer 
*avbuf)
 if (ret)
 return ret;
 
-pkt->size = V4L2_TYPE_IS_MULTIPLANAR(avbuf->buf.type) ? 
avbuf->buf.m.planes[0].bytesused : avbuf->buf.bytesused;
+pkt->size = V4L2_TYPE_IS_MULTIPLANAR(avbuf->buf.type) ?
+avbuf->buf.m.planes[0].bytesused : avbuf->buf.bytesused;
 pkt->data = pkt->buf->data;
 
 if (avbuf->buf.flags & V4L2_BUF_FLAG_KEYFRAME)
@@ -563,6 +568,7 @@ int ff_v4l2_buffer_initialize(V4L2Buffer* avbuf, int index)
 /* in MP, the V4L2 API states that buf.length means num_planes */
 if (avbuf->num_planes >= avbuf->buf.length)
 break;
+
 if (avbuf->buf.m.planes[avbuf->num_planes].length)
 avbuf->num_planes++;
 }
@@ -579,12 +585,14 @@ int ff_v4l2_buffer_initialize(V4L2Buffer* avbuf, int 
index)
 avbuf->plane_info[i].length = avbuf->buf.m.planes[i].length;
 avbuf->plane_info[i].mm_addr = mmap(NULL, 
avbuf->buf.m.planes[i].length,
PROT_READ | PROT_WRITE, MAP_SHARED,
-   buf_to_m2mctx(avbuf)->fd, 
avbuf->buf.m.planes[i].m.mem_offset);
+   buf_to_m2mctx(avbuf)->fd,
+   
avbuf->buf.m.planes[i].m.mem_offset);
 } else {
 avbuf->plane_info[i].length = avbuf->buf.length;
 avbuf->plane_info[i].mm_addr = mmap(NULL, avbuf->buf.length,
   PROT_READ | PROT_WRITE, MAP_SHARED,
-  buf_to_m2mctx(avbuf)->fd, 
avbuf->buf.m.offset);
+  buf_to_m2mctx(avbuf)->fd,
+  avbuf->buf.m.offset);
 }
 
 if (avbuf->plane_info[i].mm_addr == MAP_FAILED)
@@ -594,9 +602,8 @@ int ff_v4l2_buffer_initialize(V4L2Buffer* avbuf, int index)
 avbuf->status = V4L2BUF_AVAILABLE;
 
 if (V4L2_TYPE_IS_MULTIPLANAR(ctx->type)) {
-avbuf->buf.m.planes = avbuf->planes;
 avbuf->buf.length   = avbuf->num_planes;
-
+avbuf->buf.m.planes = avbuf->planes;
 } else {
 avbuf->buf.bytesused = avbuf->planes[0].bytesused;
 avbuf->buf.length= avbuf->planes[0].length;
diff --git a/libavcodec/v4l2_buffers.h b/libavcodec/v4l2_buffers.h
index a8a50ecc65..c609a6c676 100644
--- a/libavcodec/v4l2_buffers.h
+++ b/libavcodec/v4l2_buffers.h
@@ -131,5 +131,4 @@ int ff_v4l2_buffer_initialize(V4L2Buffer* avbuf, int index);
  */
 int ff_v4l2_buffer_enqueue(V4L2Buffer* avbuf);
 
-
 #endif // AVCODEC_V4L2_BUFFERS_H
-- 
2.17.1



Re: [FFmpeg-devel] [PATCH 2/4] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-08-16 Thread Lukas Rusak
On Sat, 2018-08-04 at 22:43 +0100, Mark Thompson wrote:
> On 04/08/18 01:40, Lukas Rusak wrote:
> > This allows for a zero-copy output by exporting the v4l2 buffer
> > then wrapping that buffer
> > in the AVDRMFrameDescriptor like it is done in rkmpp.
> > 
> > This has been in use for quite some time with great success on many
> > platforms including:
> >  - Amlogic S905
> >  - Raspberry Pi
> >  - i.MX6
> >  - Dragonboard 410c
> > 
> > This was developed in conjunction with Kodi to allow handling the
> > zero-copy buffer rendering.
> > A simply utility for testing is also available here: 
> > https://github.com/BayLibre/ffmpeg-drm
> > 
> > todo:
> >  - allow selecting pixel format output from decoder
> >  - allow configuring amount of output and capture buffers
> > 
> > V2:
> >  - allow selecting AV_PIX_FMT_DRM_PRIME
> > 
> > V3:
> >  - use get_format to select AV_PIX_FMT_DRM_PRIME
> >  - use hw_configs
> >  - add handling of AV_PIX_FMT_YUV420P format (for raspberry pi)
> >  - add handling of AV_PIX_FMT_YUYV422 format (for i.MX6 coda
> > decoder)
> > ---
> >  libavcodec/v4l2_buffers.c | 216 
> > --
> >  libavcodec/v4l2_buffers.h |   4 +
> >  libavcodec/v4l2_context.c |  40 ++-
> >  libavcodec/v4l2_m2m.c |   4 +-
> >  libavcodec/v4l2_m2m.h |   3 +
> >  libavcodec/v4l2_m2m_dec.c |  23 
> >  6 files changed, 253 insertions(+), 37 deletions(-)
> 
> The v4l2_m2m decoders need to depend on libdrm in configure for
> this.  (And if you don't want that as a hard dependency then you'll
> need #ifdefs everywhere.)

Yes, I'll update the patch to include libdrm

> 
> > diff --git a/libavcodec/v4l2_buffers.c b/libavcodec/v4l2_buffers.c
> > index aef911f3bb..e5c46ac81e 100644
> > --- a/libavcodec/v4l2_buffers.c
> > +++ b/libavcodec/v4l2_buffers.c
> > @@ -21,6 +21,7 @@
> >   * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
> > 02110-1301 USA
> >   */
> >  
> > +#include 
> 
> Don't include the outer path, pkg-config deals with it.  (The path
> you've picked is wrong for a default install of current libdrm.)
> 

I'll fix that. Thanks!

> >  #include 
> >  #include 
> >  #include 
> > @@ -29,6 +30,7 @@
> >  #include 
> >  #include "libavcodec/avcodec.h"
> >  #include "libavcodec/internal.h"
> > +#include "libavutil/hwcontext.h"
> >  #include "v4l2_context.h"
> >  #include "v4l2_buffers.h"
> >  #include "v4l2_m2m.h"
> > @@ -203,7 +205,79 @@ static enum AVColorTransferCharacteristic
> > v4l2_get_color_trc(V4L2Buffer *buf)
> >  return AVCOL_TRC_UNSPECIFIED;
> >  }
> >  
> > -static void v4l2_free_buffer(void *opaque, uint8_t *unused)
> > +static uint8_t * v4l2_get_drm_frame(V4L2Buffer *avbuf)
> > +{
> > +AVDRMFrameDescriptor *drm_desc = &avbuf->drm_frame;
> > +AVDRMLayerDescriptor *layer;
> > +
> > +/* fill the DRM frame descriptor */
> > +drm_desc->nb_objects = avbuf->num_planes;
> > +drm_desc->nb_layers = 1;
> > +
> > +layer = &drm_desc->layers[0];
> > +layer->nb_planes = avbuf->num_planes;
> > +
> > +for (int i = 0; i < avbuf->num_planes; i++) {
> > +layer->planes[i].object_index = i;
> > +layer->planes[i].offset = 0;
> > +layer->planes[i].pitch = avbuf-
> > >plane_info[i].bytesperline;
> > +}
> > +
> > +switch (avbuf->context->av_pix_fmt) {
> > +case AV_PIX_FMT_YUYV422:
> > +
> > +layer->format = DRM_FORMAT_YUYV;
> > +layer->nb_planes = 1;
> > +
> > +break;
> > +
> > +case AV_PIX_FMT_NV12:
> > +case AV_PIX_FMT_NV21:
> > +
> > +layer->format = avbuf->context->av_pix_fmt ==
> > AV_PIX_FMT_NV12 ?
> > +DRM_FORMAT_NV12 : DRM_FORMAT_NV21;
> > +
> > +if (avbuf->num_planes > 1)
> > +break;
> > +
> > +layer->nb_planes = 2;
> > +
> > +layer->planes[1].object_index = 0;
> > +layer->planes[1].offset = avbuf-
> > >plane_info[0].bytesperline *
> > +avbuf->context->format.fmt.pix.height;
> > +layer->planes[1].pitch = avbuf-
> > >plane_info[0].bytesperline;
> 
> To confirm, it's necessarily true tha

Re: [FFmpeg-devel] [PATCH 2/4] libavcodec: v4l2m2m: output AVDRMFrameDescriptor

2018-08-16 Thread Lukas Rusak
Just an FYI, I will be developing here if anyone wants to comment and/or
PR other changes for V4.

https://github.com/lrusak/FFmpeg/commits/v4l2-drmprime-v4





[FFmpeg-devel] [PATCH 1/1] lavf/dashdec: Multithreaded DASH initialization

2022-08-20 Thread Lukas Fellechner
This patch adds multithreading support to DASH initialization. Initializing 
DASH streams is currently slow, because each individual stream is opened and 
probed sequentially. With DASH streams often having somewhere between 10 and 
20 substreams, this can easily take up to half a minute on slow connections. 
This patch adds an "init-threads" option, specifying the maximum number of 
threads to use for parallel probing and initialization of substreams. If 
"init-threads" is set to a value larger than 1, multiple worker threads are 
spun up to massively bring down init times.

Here is a free DASH stream for testing:

http://www.bok.net/dash/tears_of_steel/cleartext/stream.mpd

It has 7 substreams. I currently get init times of 3.5 seconds without the 
patch. The patch brings it down to about 0.5 seconds, so using 7 threads cut 
init times by nearly a factor of 7. On a slower connection (100 Mbit/s), the 
same stream took 7-8 seconds to initialize; the patch brought it down to just 
over 1 second.

In the current patch, the behavior is disabled by default (init-threads = 0). 
But I think it could make sense to enable it by default, maybe with a 
reasonable value of 8? Not sure though, open for discussion.

Some notes on the actual implementation:
- DASH streams sometimes share a common init section. If this is the case, a 
mutex and condition is used, to make sure that the first stream reads the 
common section before the following streams start initialization.
- Only the init and probing part is done in parallel. After all threads are 
joined, I collect the results and add the AVStreams to the parent 
AVFormatContext. That is why I split open_demux_for_component() into 
begin_open_demux_for_component() and end_open_demux_for_component().
- I tried to do this as cleanly as possible and added multiple comments.
- Multithreading is never simple, so a proper review is needed.

If this gets merged, I might try to do the same for HLS.

This is my first PR by the way, so please be nice :)

Lukas

0001-lavf-dashdec-Multithreaded-DASH-initialization.patch
Description: Binary data


Re: [FFmpeg-devel] [PATCH 1/1] lavf/dashdec: Multithreaded DASH initialization

2022-08-20 Thread Lukas Fellechner
Trying with an inline PATCH, since the attached file was not showing up...

---

From: Lukas Fellechner 
Subject: [PATCH 1/1] lavf/dashdec: Multithreaded DASH initialization

Initializing DASH streams is currently slow, because each individual stream is 
opened and probed sequentially. With DASH streams often having somewhere 
between 10-20 streams, this can easily take up to half a minute. This patch 
adds an "init-threads" option, specifying the max number of threads to use. 
Multiple worker threads are spun up to massively bring down init times.
---
 libavformat/dashdec.c | 421 +-
 1 file changed, 375 insertions(+), 46 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 63bf7e96a5..69a6c2ba79 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -24,6 +24,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/time.h"
 #include "libavutil/parseutils.h"
+#include "libavutil/thread.h"
 #include "internal.h"
 #include "avio_internal.h"
 #include "dash.h"
@@ -152,6 +153,8 @@ typedef struct DASHContext {
 int max_url_size;
 char *cenc_decryption_key;

+int init_threads;
+
 /* Flags for init section*/
 int is_init_section_common_video;
 int is_init_section_common_audio;
@@ -1918,22 +1921,40 @@ fail:
 return ret;
 }

-static int open_demux_for_component(AVFormatContext *s, struct representation 
*pls)
+static int open_demux_for_component(AVFormatContext* s, struct representation* 
pls)
+{
+int ret = 0;
+
+ret = begin_open_demux_for_component(s, pls);
+if (ret < 0)
+return ret;
+
+ret = end_open_demux_for_component(s, pls);
+
+return ret;
+}
+
+static int begin_open_demux_for_component(AVFormatContext* s, struct 
representation* pls)
 {
 int ret = 0;
-int i;

 pls->parent = s;
-pls->cur_seq_no  = calc_cur_seg_no(s, pls);
+pls->cur_seq_no = calc_cur_seg_no(s, pls);

 if (!pls->last_seq_no) {
 pls->last_seq_no = calc_max_seg_no(pls, s->priv_data);
 }

 ret = reopen_demux_for_component(s, pls);
-if (ret < 0) {
-goto fail;
-}
+
+return ret;
+}
+
+static int end_open_demux_for_component(AVFormatContext* s, struct 
representation* pls)
+{
+int ret = 0;
+int i;
+
 for (i = 0; i < pls->ctx->nb_streams; i++) {
 AVStream *st = avformat_new_stream(s, NULL);
 AVStream *ist = pls->ctx->streams[i];
@@ -2015,6 +2036,131 @@ static void move_metadata(AVStream *st, const char 
*key, char **value)
 }
 }

+struct work_pool_data
+{
+AVFormatContext* ctx;
+struct representation* pls;
+struct representation* common_pls;
+pthread_mutex_t* common_mutex;
+pthread_cond_t* common_condition;
+int is_common;
+int is_started;
+int result;
+};
+
+struct thread_data
+{
+pthread_t thread;
+pthread_mutex_t* mutex;
+struct work_pool_data* work_pool;
+int work_pool_size;
+int is_started;
+};
+
+static void *worker_thread(void *ptr)
+{
+int ret = 0;
+int i;
+struct thread_data* thread_data = (struct thread_data*)ptr;
+struct work_pool_data* work_pool = NULL;
+struct work_pool_data* data = NULL;
+for (;;) {
+
+// get next work item
+pthread_mutex_lock(thread_data->mutex);
+data = NULL;
+work_pool = thread_data->work_pool;
+for (i = 0; i < thread_data->work_pool_size; i++) {
+if (!work_pool->is_started) {
+data = work_pool;
+data->is_started = 1;
+break;
+}
+work_pool++;
+}
+pthread_mutex_unlock(thread_data->mutex);
+
+if (!data) {
+// no more work to do
+return NULL;
+}
+
+// if we are common section provider, init and signal
+if (data->is_common) {
+data->pls->parent = data->ctx;
+ret = update_init_section(data->pls);
+if (ret < 0) {
+pthread_cond_signal(data->common_condition);
+goto end;
+}
+else
+ret = AVERROR(pthread_cond_signal(data->common_condition));
+}
+
+// if we depend on common section provider, wait for signal and copy
+if (data->common_pls) {
+ret = AVERROR(pthread_cond_wait(data->common_condition, 
data->common_mutex));
+if (ret < 0)
+goto end;
+
+if (!data->common_pls->init_sec_buf) {
+ret = AVERROR(EFAULT);
+goto end;
+}
+
+ret = copy_init_section(data->pls, data->common_pls);
+if (ret < 0)
+goto end;
+}
+
+ret = begin_open_demux_for_component(data->ctx, data->pls);
+

Re: [FFmpeg-devel] [PATCH 1/1] lavf/dashdec: Multithreaded DASH initialization

2022-08-21 Thread Lukas Fellechner
>> + struct representation* common_pls;
>> + pthread_mutex_t* common_mutex;
>> + pthread_cond_t* common_condition;
> Should add #if HAVE_THREADS to check if the pthread supported.

You are right, I will add HAVE_THREADS checks.


[FFmpeg-devel] [PATCH v2] lavf/dashdec: Multithreaded DASH initialization

2022-08-21 Thread Lukas Fellechner
Initializing DASH streams is currently slow, because each individual stream is 
opened and probed sequentially. With DASH streams often having somewhere 
between 10-20 streams, this can easily take up to half a minute. This patch 
adds an "init-threads" option, specifying the max number of threads to use. 
Multiple worker threads are spun up to massively bring down init times.
---
 libavformat/dashdec.c | 432 +-
 1 file changed, 386 insertions(+), 46 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 63bf7e96a5..7eca3e3415 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -24,6 +24,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/time.h"
 #include "libavutil/parseutils.h"
+#include "libavutil/thread.h"
 #include "internal.h"
 #include "avio_internal.h"
 #include "dash.h"
@@ -152,6 +153,8 @@ typedef struct DASHContext {
 int max_url_size;
 char *cenc_decryption_key;

+int init_threads;
+
 /* Flags for init section*/
 int is_init_section_common_video;
 int is_init_section_common_audio;
@@ -1918,22 +1921,40 @@ fail:
 return ret;
 }

-static int open_demux_for_component(AVFormatContext *s, struct representation 
*pls)
+static int open_demux_for_component(AVFormatContext* s, struct representation* 
pls)
+{
+int ret = 0;
+
+ret = begin_open_demux_for_component(s, pls);
+if (ret < 0)
+return ret;
+
+ret = end_open_demux_for_component(s, pls);
+
+return ret;
+}
+
+static int begin_open_demux_for_component(AVFormatContext* s, struct 
representation* pls)
 {
 int ret = 0;
-int i;

 pls->parent = s;
-pls->cur_seq_no  = calc_cur_seg_no(s, pls);
+pls->cur_seq_no = calc_cur_seg_no(s, pls);

 if (!pls->last_seq_no) {
 pls->last_seq_no = calc_max_seg_no(pls, s->priv_data);
 }

 ret = reopen_demux_for_component(s, pls);
-if (ret < 0) {
-goto fail;
-}
+
+return ret;
+}
+
+static int end_open_demux_for_component(AVFormatContext* s, struct 
representation* pls)
+{
+int ret = 0;
+int i;
+
 for (i = 0; i < pls->ctx->nb_streams; i++) {
 AVStream *st = avformat_new_stream(s, NULL);
 AVStream *ist = pls->ctx->streams[i];
@@ -2015,6 +2036,135 @@ static void move_metadata(AVStream *st, const char 
*key, char **value)
 }
 }

+#if HAVE_THREADS
+
+struct work_pool_data
+{
+AVFormatContext* ctx;
+struct representation* pls;
+struct representation* common_pls;
+pthread_mutex_t* common_mutex;
+pthread_cond_t* common_condition;
+int is_common;
+int is_started;
+int result;
+};
+
+struct thread_data
+{
+pthread_t thread;
+pthread_mutex_t* mutex;
+struct work_pool_data* work_pool;
+int work_pool_size;
+int is_started;
+};
+
+static void *worker_thread(void *ptr)
+{
+int ret = 0;
+int i;
+struct thread_data* thread_data = (struct thread_data*)ptr;
+struct work_pool_data* work_pool = NULL;
+struct work_pool_data* data = NULL;
+for (;;) {
+
+// get next work item
+pthread_mutex_lock(thread_data->mutex);
+data = NULL;
+work_pool = thread_data->work_pool;
+for (i = 0; i < thread_data->work_pool_size; i++) {
+if (!work_pool->is_started) {
+data = work_pool;
+data->is_started = 1;
+break;
+}
+work_pool++;
+}
+pthread_mutex_unlock(thread_data->mutex);
+
+if (!data) {
+// no more work to do
+return NULL;
+}
+
+// if we are common section provider, init and signal
+if (data->is_common) {
+data->pls->parent = data->ctx;
+ret = update_init_section(data->pls);
+if (ret < 0) {
+pthread_cond_signal(data->common_condition);
+goto end;
+}
+else
+ret = AVERROR(pthread_cond_signal(data->common_condition));
+}
+
+// if we depend on common section provider, wait for signal and copy
+if (data->common_pls) {
+ret = AVERROR(pthread_cond_wait(data->common_condition, 
data->common_mutex));
+if (ret < 0)
+goto end;
+
+if (!data->common_pls->init_sec_buf) {
+ret = AVERROR(EFAULT);
+goto end;
+}
+
+ret = copy_init_section(data->pls, data->common_pls);
+if (ret < 0)
+goto end;
+}
+
+ret = begin_open_demux_for_component(data->ctx, data->pls);
+if (ret < 0)
+goto end;
+
+end:
+data->result = ret;
+}
+
+
+return NULL;
+}
+
+static void create_work_pool_data(AVFormatContext* ctx, int stream_index,
+struct representation* pls, struct representation* common_pls,
+struct work_pool_data* init_data, pthread_mutex_t* common_mutex,
+pthread_cond_t* comm

[FFmpeg-devel] [PATCH v3 0/3] lavf/dashdec: Multithreaded DASH initialization

2022-08-23 Thread Lukas Fellechner
Initializing DASH streams is currently slow, because each individual
stream is opened and probed sequentially. With DASH streams often
having somewhere between 10-20 streams, this can easily take up to
half a minute on slow connections.

This patch adds an "init-threads" option, specifying the max number
of threads to use. Multiple worker threads are spun up to massively
bring down init times.
In-Reply-To: 





[FFmpeg-devel] [PATCH v3 1/3] lavf/dashdec: Prepare DASH decoder for multithreading

2022-08-23 Thread Lukas Fellechner
For adding multithreading to the DASH decoder initialization,
the open_demux_for_component() method must be split up into two parts:

begin_open_demux_for_component(): Opens the stream and does probing
and format detection. This can be run in parallel.

end_open_demux_for_component(): Creates the AVStreams and adds
them to the common parent AVFormatContext. This method must always be
run synchronously, after all threads are finished.
---
 libavformat/dashdec.c | 42 ++
 1 file changed, 30 insertions(+), 12 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 63bf7e96a5..e82da45e43 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -1918,10 +1918,9 @@ fail:
 return ret;
 }

-static int open_demux_for_component(AVFormatContext *s, struct representation 
*pls)
+static int begin_open_demux_for_component(AVFormatContext *s, struct 
representation *pls)
 {
 int ret = 0;
-int i;

 pls->parent = s;
 pls->cur_seq_no  = calc_cur_seg_no(s, pls);
@@ -1931,9 +1930,15 @@ static int open_demux_for_component(AVFormatContext *s, 
struct representation *p
 }

 ret = reopen_demux_for_component(s, pls);
-if (ret < 0) {
-goto fail;
-}
+
+return ret;
+}
+
+static int end_open_demux_for_component(AVFormatContext *s, struct 
representation *pls)
+{
+int ret = 0;
+int i;
+
 for (i = 0; i < pls->ctx->nb_streams; i++) {
 AVStream *st = avformat_new_stream(s, NULL);
 AVStream *ist = pls->ctx->streams[i];
@@ -1965,6 +1970,19 @@ fail:
 return ret;
 }

+static int open_demux_for_component(AVFormatContext* s, struct representation* 
pls)
+{
+int ret = 0;
+
+ret = begin_open_demux_for_component(s, pls);
+if (ret < 0)
+return ret;
+
+ret = end_open_demux_for_component(s, pls);
+
+return ret;
+}
+
 static int is_common_init_section_exist(struct representation **pls, int n_pls)
 {
 struct fragment *first_init_section = pls[0]->init_section;
@@ -2040,9 +2058,15 @@ static int dash_read_header(AVFormatContext *s)
 av_dict_set(&c->avio_opts, "seekable", "0", 0);
 }

-if(c->n_videos)
+if (c->n_videos)
 c->is_init_section_common_video = is_common_init_section_exist(c->videos, c->n_videos);

+if (c->n_audios)
+c->is_init_section_common_audio = is_common_init_section_exist(c->audios, c->n_audios);
+
+if (c->n_subtitles)
+c->is_init_section_common_subtitle = is_common_init_section_exist(c->subtitles, c->n_subtitles);
+
 /* Open the demuxer for video and audio components if available */
 for (i = 0; i < c->n_videos; i++) {
 rep = c->videos[i];
@@ -2059,9 +2083,6 @@ static int dash_read_header(AVFormatContext *s)
 ++stream_index;
 }

-if(c->n_audios)
-c->is_init_section_common_audio = is_common_init_section_exist(c->audios, c->n_audios);
-
 for (i = 0; i < c->n_audios; i++) {
 rep = c->audios[i];
 if (i > 0 && c->is_init_section_common_audio) {
@@ -2077,9 +2098,6 @@ static int dash_read_header(AVFormatContext *s)
 ++stream_index;
 }

-if (c->n_subtitles)
-c->is_init_section_common_subtitle = is_common_init_section_exist(c->subtitles, c->n_subtitles);
-
 for (i = 0; i < c->n_subtitles; i++) {
 rep = c->subtitles[i];
 if (i > 0 && c->is_init_section_common_subtitle) {
--
2.31.1.windows.1

___
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-requ...@ffmpeg.org with subject "unsubscribe".


[FFmpeg-devel] [PATCH v3 2/3] lavf/dashdec: Multithreaded DASH initialization

2022-08-23 Thread Lukas Fellechner
This patch adds an "init-threads" option that specifies the maximum
number of threads to use. Multiple worker threads are spun up to
significantly reduce initialization times.
---
 libavformat/dashdec.c | 351 +-
 1 file changed, 350 insertions(+), 1 deletion(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index e82da45e43..20f2557ea3 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -24,6 +24,7 @@
 #include "libavutil/opt.h"
 #include "libavutil/time.h"
 #include "libavutil/parseutils.h"
+#include "libavutil/thread.h"
 #include "internal.h"
 #include "avio_internal.h"
 #include "dash.h"
@@ -152,6 +153,8 @@ typedef struct DASHContext {
 int max_url_size;
 char *cenc_decryption_key;

+int init_threads;
+
 /* Flags for init section*/
 int is_init_section_common_video;
 int is_init_section_common_audio;
@@ -2033,6 +2036,331 @@ static void move_metadata(AVStream *st, const char *key, char **value)
 }
 }

+#if HAVE_THREADS
+
+struct work_pool_data
+{
+AVFormatContext *ctx;
+struct representation *pls;
+struct representation *common_pls;
+pthread_mutex_t *common_mutex;
+pthread_cond_t *common_condition;
+int is_common;
+int is_started;
+int result;
+};
+
+struct thread_data
+{
+pthread_t thread;
+pthread_mutex_t *mutex;
+struct work_pool_data *work_pool;
+int work_pool_size;
+int is_started;
+int has_error;
+};
+
+static void *worker_thread(void *ptr)
+{
+int ret = 0;
+int i;
+struct thread_data *thread_data = (struct thread_data*)ptr;
+struct work_pool_data *work_pool = NULL;
+struct work_pool_data *data = NULL;
+for (;;) {
+
+// get next work item unless there was an error
+pthread_mutex_lock(thread_data->mutex);
+data = NULL;
+if (!thread_data->has_error) {
+work_pool = thread_data->work_pool;
+for (i = 0; i < thread_data->work_pool_size; i++) {
+if (!work_pool->is_started) {
+data = work_pool;
+data->is_started = 1;
+break;
+}
+work_pool++;
+}
+}
+pthread_mutex_unlock(thread_data->mutex);
+
+if (!data) {
+// no more work to do
+return NULL;
+}
+
+// if we are common section provider, init and signal
+if (data->is_common) {
+data->pls->parent = data->ctx;
+ret = update_init_section(data->pls);
+if (ret < 0) {
+pthread_cond_signal(data->common_condition);
+goto end;
+}
+else
+ret = AVERROR(pthread_cond_signal(data->common_condition));
+}
+
+// if we depend on common section provider, wait for signal and copy
+if (data->common_pls) {
+ret = AVERROR(pthread_cond_wait(data->common_condition, data->common_mutex));
+if (ret < 0)
+goto end;
+
+if (!data->common_pls->init_sec_buf) {
+ret = AVERROR(EFAULT);
+goto end;
+}
+
+ret = copy_init_section(data->pls, data->common_pls);
+if (ret < 0)
+goto end;
+}
+
+ret = begin_open_demux_for_component(data->ctx, data->pls);
+if (ret < 0)
+goto end;
+
+end:
+data->result = ret;
+
+// notify error to other threads and exit
+if (ret < 0) {
+pthread_mutex_lock(thread_data->mutex);
+thread_data->has_error = 1;
+pthread_mutex_unlock(thread_data->mutex);
+return NULL;
+}
+}
+
+
+return NULL;
+}
+
+static void create_work_pool_data(AVFormatContext *ctx, int stream_index,
+struct representation *pls, struct representation *common_pls,
+struct work_pool_data *init_data, pthread_mutex_t *common_mutex,
+pthread_cond_t *common_condition)
+{
+init_data->ctx = ctx;
+init_data->pls = pls;
+init_data->pls->stream_index = stream_index;
+init_data->common_condition = common_condition;
+init_data->common_mutex = common_mutex;
+init_data->result = -1;
+
+if (pls == common_pls) {
+init_data->is_common = 1;
+}
+else if (common_pls) {
+init_data->common_pls = common_pls;
+}
+}
+
+static int start_thread(struct thread_data *thread_data,
+struct work_pool_data *work_pool, int work_pool_size, pthread_mutex_t *mutex)
+{
+int ret;
+
+thread_data->mutex = mutex;
+thread_data->work_pool = work_pool;
+thread_data->work_pool_size = work_pool_size;
+
+ret = AVERROR(pthread_create(&thread_data->thread, NULL, worker_thread, (void*)thread_data));
+if (ret == 0)
+thread_data->is_started = 1;
+
+return ret;
+}
+
+static int init_streams_multithreaded(AVFormatContext *s, int nstreams, int threads)
+{
+DASHConte

[FFmpeg-devel] [PATCH v3 3/3] lavf/dashdec: Fix indentation after multithreading

2022-08-23 Thread Lukas Fellechner
---
 libavformat/dashdec.c | 74 +--
 1 file changed, 37 insertions(+), 37 deletions(-)

diff --git a/libavformat/dashdec.c b/libavformat/dashdec.c
index 20f2557ea3..f653b9850e 100644
--- a/libavformat/dashdec.c
+++ b/libavformat/dashdec.c
@@ -2412,54 +2412,54 @@ static int dash_read_header(AVFormatContext *s)
 }
 else
 {
-/* Open the demuxer for video and audio components if available */
-for (i = 0; i < c->n_videos; i++) {
-rep = c->videos[i];
-if (i > 0 && c->is_init_section_common_video) {
-ret = copy_init_section(rep, c->videos[0]);
-if (ret < 0)
+/* Open the demuxer for video and audio components if available */
+for (i = 0; i < c->n_videos; i++) {
+rep = c->videos[i];
+if (i > 0 && c->is_init_section_common_video) {
+ret = copy_init_section(rep, c->videos[0]);
+if (ret < 0)
+return ret;
+}
+ret = open_demux_for_component(s, rep);
+
+if (ret)
 return ret;
+rep->stream_index = stream_index;
+++stream_index;
 }
-ret = open_demux_for_component(s, rep);

-if (ret)
-return ret;
-rep->stream_index = stream_index;
-++stream_index;
-}
+for (i = 0; i < c->n_audios; i++) {
+rep = c->audios[i];
+if (i > 0 && c->is_init_section_common_audio) {
+ret = copy_init_section(rep, c->audios[0]);
+if (ret < 0)
+return ret;
+}
+ret = open_demux_for_component(s, rep);

-for (i = 0; i < c->n_audios; i++) {
-rep = c->audios[i];
-if (i > 0 && c->is_init_section_common_audio) {
-ret = copy_init_section(rep, c->audios[0]);
-if (ret < 0)
+if (ret)
 return ret;
+rep->stream_index = stream_index;
+++stream_index;
 }
-ret = open_demux_for_component(s, rep);

-if (ret)
-return ret;
-rep->stream_index = stream_index;
-++stream_index;
-}
+for (i = 0; i < c->n_subtitles; i++) {
+rep = c->subtitles[i];
+if (i > 0 && c->is_init_section_common_subtitle) {
+ret = copy_init_section(rep, c->subtitles[0]);
+if (ret < 0)
+return ret;
+}
+ret = open_demux_for_component(s, rep);

-for (i = 0; i < c->n_subtitles; i++) {
-rep = c->subtitles[i];
-if (i > 0 && c->is_init_section_common_subtitle) {
-ret = copy_init_section(rep, c->subtitles[0]);
-if (ret < 0)
+if (ret)
 return ret;
+rep->stream_index = stream_index;
+++stream_index;
 }
-ret = open_demux_for_component(s, rep);

-if (ret)
-return ret;
-rep->stream_index = stream_index;
-++stream_index;
-}
-
-if (!stream_index)
-return AVERROR_INVALIDDATA;
+if (!stream_index)
+return AVERROR_INVALIDDATA;
 }

 /* Create a program */
--
2.31.1.windows.1



Re: [FFmpeg-devel] [PATCH v2] lavf/dashdec: Multithreaded DASH initialization

2022-08-23 Thread Lukas Fellechner
Sent: Tuesday, August 23, 2022, 05:19
From: "Steven Liu" 
To: "FFmpeg development discussions and patches" 
Subject: Re: [FFmpeg-devel] [PATCH v2] lavf/dashdec: Multithreaded DASH initialization
Lukas Fellechner  wrote on Mon, Aug 22, 2022 at 03:27:
>
> I look at the new functions likes begin_open_demux_for_component and
> end_open_demux_for_component maybe can separate patch.
> maybe you can submit two patch, one is make code clarify, the other
> one support multithreads,

Good idea. I actually split the patch into three parts. Git seems to
handle indentation changes poorly, which made the first patch look
awful even though the actual changes were very small.

So I pulled the indentation change out into a separate patch.
I am not sure whether that is a good idea, but it makes the other
two patches much more readable.
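
As a side note on reviewing such indentation-only changes: git can be told to ignore whitespace when diffing, which makes re-indentation patches much easier to verify. A small self-contained demonstration (assumes git is installed; the repo, file, and identity used here are throwaway examples):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name  reviewer

# commit a file, then change only its indentation
printf 'int main(void)\n{\nreturn 0;\n}\n' > demo.c
git add demo.c
git commit -qm 'initial'
printf 'int main(void)\n{\n    return 0;\n}\n' > demo.c

git diff --stat     # reports demo.c as changed
git diff -w         # ignores whitespace: prints nothing
```

Running `git diff -w` (or `git show -w` on the committed patch) should produce empty output for a pure re-indentation, which is a quick way to confirm such a patch changes nothing but whitespace.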