On Fri, Feb 1, 2019 at 5:11 PM Clément Bœsch <u...@pkh.me> wrote:
>
> On Fri, Feb 01, 2019 at 04:57:37PM +0800, myp...@gmail.com wrote:
> [...]
> > > > -#define WEIGHT_LUT_NBITS 9
> > > > -#define WEIGHT_LUT_SIZE (1<<WEIGHT_LUT_NBITS)
> > > > +#define WEIGHT_LUT_SIZE (800000) // need to > 300 * 300 * log(255)
> > >
> > > So the LUT is now 3.2MB?
> > >
> > > Why 300? 300*300*log(255) is closer to 500 000 than 800 000
> >
> > I just selected a value > 300*300*log(255) (500 000, more precisely) for
> > this case, somewhat arbitrarily in fact; the other option is to use a
> > dynamically allocated weight_lut table sized from max_meaningful_diff :),
> > but I think 3M is not a big burden for nlmeans.
>
> It's probably fine, yes. I'm just confused by the comment: why does it
> *need* to be > 300 * 300 * log(255)?
>
> --

Ohhh, 300 = max(s) * 10 :), max(s) = 30, that is the reason.
In fact, the required WEIGHT_LUT_SIZE is max(max_meaningful_diff); then we can avoid using pdiff_lut_scale in nlmeans, because now pdiff_lut_scale == 1. :) And

max(max_meaningful_diff) = -log(1/255.0) * h * h = log(255) * max(h) * max(h) = log(255) * max(10*s) * max(10*s)

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel