On Thu, Jul 30, 2020 at 01:32:47AM -0500, Sebastian Pop wrote:
> On Sat, Jul 18, 2020 at 1:35 AM Michael Niedermayer <mich...@niedermayer.cc>
> wrote:
> 
> > Multithreading support should be added in an architecture-independent way
> >
> >
> The attached patch moves the helper threads up from hscale to
> chr_h_scale and lum_h_scale in an architecture-independent way.
> This new version of the patch improves performance
> by up to 135% on Graviton2 Arm64 and by up to 95% on Intel.
> Compared to the previous version of the patch, each thread gets
> more uninterrupted work, which results in better performance.
> 
> Please let me know how I can improve the patch.

The thread code is duplicated between chr_h_scale & lum_h_scale; this would
become messy if it is done for more functions.
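
A single generic dispatcher used by both call sites could avoid that;
something along these lines (all names here are only illustrative, not what
the patch actually uses, and a persistent thread pool plus error handling
would be needed in practice):

#include <pthread.h>

/* hypothetical per-thread job description */
typedef struct SliceJob {
    void (*func)(void *arg, int job_nr, int nb_jobs);
    void *arg;
    int job_nr;
    int nb_jobs;
} SliceJob;

static void *slice_worker(void *v)
{
    SliceJob *job = v;
    job->func(job->arg, job->job_nr, job->nb_jobs);
    return NULL;
}

/* run func(arg, i, nb_threads) for i = 0..nb_threads-1, one thread each,
 * and wait for all of them (assumes nb_threads <= 16, errors ignored) */
static void run_sliced(void (*func)(void *, int, int), void *arg,
                       int nb_threads)
{
    pthread_t tid[16];
    SliceJob  jobs[16];
    int i;

    for (i = 0; i < nb_threads; i++) {
        jobs[i] = (SliceJob){ func, arg, i, nb_threads };
        pthread_create(&tid[i], NULL, slice_worker, &jobs[i]);
    }
    for (i = 0; i < nb_threads; i++)
        pthread_join(tid[i], NULL);
}

chr_h_scale and lum_h_scale would then each only provide their own slice
callback instead of carrying a copy of the thread handling.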

HT_NUM seems to always be 0

The indentation is inconsistent.

Also, I see a sched_yield() in there; some other form of thread
synchronization is probably better.
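
For example, a condition variable lets the helper threads sleep until work
is posted instead of spinning; roughly (names again only illustrative):

#include <pthread.h>

/* hypothetical state shared between the main thread and one helper */
typedef struct WorkSync {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             have_work;
} WorkSync;

/* helper side: block until work is posted, no sched_yield() loop */
static void wait_for_work(WorkSync *s)
{
    pthread_mutex_lock(&s->lock);
    while (!s->have_work)
        pthread_cond_wait(&s->cond, &s->lock);
    s->have_work = 0;
    pthread_mutex_unlock(&s->lock);
}

/* main-thread side: post work and wake the helper */
static void post_work(WorkSync *s)
{
    pthread_mutex_lock(&s->lock);
    s->have_work = 1;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}

A pthread_barrier_t or a semaphore would work equally well; the point is
that the waiting side does not burn a core.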

The HT_NUM-sized arrays might have their elements moved into a struct, so
it's a single array of structs. Just thinking that might look neater.
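
That is, instead of several parallel HT_NUM-sized arrays, something like
(field names invented for the example):

#include <pthread.h>

/* one entry per helper thread, replacing parallel arrays */
typedef struct HelperThread {
    pthread_t thread;
    int       slice_start;
    int       slice_end;
    int       done;
} HelperThread;

static HelperThread helpers[HT_NUM];   /* HT_NUM as in the patch */

so everything belonging to one helper thread sits together.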


> 
> There are other functions (lum_convert and chr_convert)
> that may benefit from multi-threading.
> I have not seen these functions show up in a hot profile.
> Is there a benchmark for those functions?

You can just put an abort() in the function you are interested in testing,
then run make fate -k.
The failing tests are potential test cases.
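
For example, temporarily add at the top of lum_convert:

    abort();   /* every FATE test reaching this code now fails */

(plus #include <stdlib.h> if the file does not already have it); running
"make fate -k" then keeps going past the failures and prints the whole
list of affected tests in one pass.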

Also, this swscale MT work is quite welcome; the speed benefit
should be substantial.

Thanks

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

If a bugfix only changes things apparently unrelated to the bug with no
further explanation, that is a good sign that the bugfix is wrong.
