On Mon, Nov 24, 2014 at 3:35 AM, Daniel Oberhoff
<danieloberh...@gmail.com> wrote:
> Hi,
>
> From what I gather the filter graph is utilizing modern multicores by
> “striping” where filters support it, i.e. an image is divided in stripes
> and the filter gets called in separate threads for separate stripes.
In FFmpeg we call it slice threading.

> I was wondering if there is or will be pipeline multithreading, i.e.
> with a setup such as
>
> input -> filter1 -> filter2 -> output
>
> some threads processing frame n in the output (i.e. encoding),
> other threads processing frame n+1 in filter2, others processing frame
> n+2 in filter1, and yet others processing frame n+3 decoding. This
> way non-parallel filters can be sped up, and diminishing returns from
> too much striping can be avoided. With modern CPUs scaling easily
> up to 24 hardware threads I see this as necessary to fully utilize the
> hardware.

We call it "frame threading." There are some decoders that already
support it.

> Is this already done? Or are there plans? If it is not done, how much
> more work is it? Could I help (not promising much, but I may find time,
> especially as this limits us…).

If the filter is independent of past and future frames then it is pretty
easy (just like the "intra-only" decoders). For other ones I am not so
sure. Clément knows more about it.

Timothy
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
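[Editor's note: the pipeline threading Daniel describes — frame n being encoded while frame n+1 is still in filter2 and frame n+3 is being decoded — can be sketched with one thread per stage and bounded queues between stages. This is an illustrative Python sketch only, not FFmpeg's implementation; all names (`stage`, `run_pipeline`, the lambda filters) are made up for the example.]

```python
import threading
import queue

SENTINEL = None  # marks end of stream

def stage(fn, inq, outq):
    """One pipeline stage: pull frames from inq, apply fn, push to outq."""
    while True:
        frame = inq.get()
        if frame is SENTINEL:
            outq.put(SENTINEL)  # propagate end-of-stream downstream
            return
        outq.put(fn(frame))

def run_pipeline(frames, fns):
    """Chain per-stage functions with bounded queues; one thread per stage,
    so different frames are in flight in different stages at once."""
    queues = [queue.Queue(maxsize=4) for _ in range(len(fns) + 1)]
    threads = [
        threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
        for i, fn in enumerate(fns)
    ]
    for t in threads:
        t.start()
    for frame in frames:          # "decoder" feeds the first queue
        queues[0].put(frame)
    queues[0].put(SENTINEL)
    out = []
    while True:                   # "encoder" drains the last queue
        frame = queues[-1].get()
        if frame is SENTINEL:
            break
        out.append(frame)
    for t in threads:
        t.join()
    return out

# Two toy filters standing in for filter1 -> filter2; frames stay in order
# because each queue is FIFO.
result = run_pipeline(range(8), [lambda f: f * 2,   # "filter1"
                                 lambda f: f + 1])  # "filter2"
print(result)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

This only helps stateless (intra-only) stages as noted above; a filter that depends on past or future frames would need its stage to see frames in order with that state available, which is where it stops being easy.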