On Sun, Oct 30, 2022 at 10:55:31PM +0000, Soft Works wrote:
[...]
> > I understand why. I know that it's not perfect. But it's the best
> > that's achievable within the way the filter is working.
> >
> > But I wouldn't go that far as saying it would be "broken". I think
> > the result is quite acceptable with regard to the method being
> > used.
It's broken because the alpha channel in the output is essentially random. If
we blend the output of that "O" PNG somewhere, it's going to be a real mess.

Here is a more concrete example: if your input has some fully transparent red
(0x00ff0000) and some fully opaque red (0xffff0000) which end up in the same
box, they will be averaged to a red with an alpha of 0x80, and I'm not even
accounting for the weights and the other colors with different transparencies.
That non-opaque average alpha might end up being used in areas that are
expected to be opaque, and in areas that are supposed to be transparent.
That's exactly what I showed with the "O" PNG.

In addition to that problem, since you're also using the alpha as a weight
when determining the proximity of two colors, you're going to select the wrong
colors. For example, if we want to find the closest color to an opaque green
(0xff00ff00), and the palette has a slightly transparent green (0xfa00ff00)
and an opaque blue (0xff0000ff), the algorithm will now prefer the blue over
the green. That explains why, in addition to the alpha being random in the "O"
PNG, the colors are also messed up.

> > The patch I had submitted doesn't change the previous behavior
> > without the use_alpha parameter.

Yes, I noticed, but unfortunately I'm reworking the color distance to work in
a perceptual color space, and the way alpha is mixed into the equation just
doesn't make any sense and prevents me from making these changes. Ignoring the
alpha branches would make its output even worse.

> > And when using the use_alpha parameter, the results are still
> > useful in many cases - maybe not always.

I don't think they're useful: they're unpredictable and very likely to produce
a broken output. Would you consider FFmpeg useful if half the time the command
failed to produce a valid file?

[...]

> Do you think it might make sense to put more weight on the
> alpha value by tripling it? So it would be weighted equally to the
> RGB value?

You cannot mix alpha with colors at all; they are separate domains and you
need to treat them as such.

From the paletteuse perspective, what you need to do is first choose the
palette entries whose alpha matches exactly (or, if and only if there is no
exact match, the closest alpha). Then, within that set, and only within that
one, you pick the closest color (see the sketch below).

From the palettegen perspective, you need to split the colors into different
transparency domains (a first quantization, along the alpha dimension only),
then quantize the colors within each quantized alpha level. And when you have
all your quantized palettes for each level of alpha, you find an algorithm to
reduce the number of transparency levels or the number of colors per level so
that everything fits inside a single palette.

But you can't just do the alpha and the colors at the same time; it cannot
work, whatever weights you choose.

--
Clément B.
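
Below, for illustration only, is a minimal self-contained C sketch of the
two-stage paletteuse lookup described above. It is not the actual filter code:
the helper names are made up and a plain squared RGB distance stands in for
the real (perceptual) metric. The point is that alpha only gates the candidate
set and never enters the color distance, so an opaque input pixel maps to an
opaque palette entry whenever one exists.

    /* Sketch of the two-stage lookup: restrict the search to palette entries
     * whose alpha matches the target exactly (or, failing that, whose alpha
     * is the closest), then pick the nearest color within that subset only.
     * Palette entries are 0xAARRGGBB. */
    #include <stdint.h>
    #include <stdlib.h>

    static int sq(int x) { return x * x; }

    /* Illustrative squared RGB distance; a perceptual metric would go here. */
    static int rgb_dist(uint32_t a, uint32_t b)
    {
        return sq((int)((a >> 16) & 0xff) - (int)((b >> 16) & 0xff)) +
               sq((int)((a >>  8) & 0xff) - (int)((b >>  8) & 0xff)) +
               sq((int)( a        & 0xff) - (int)( b        & 0xff));
    }

    static int find_nearest(const uint32_t *pal, int n, uint32_t argb)
    {
        int target_a = argb >> 24;

        /* Stage 1: find the closest available alpha level (an exact match
         * gives a difference of 0 and wins). */
        int best_alpha_diff = 256;
        for (int i = 0; i < n; i++) {
            int d = abs((int)(pal[i] >> 24) - target_a);
            if (d < best_alpha_diff)
                best_alpha_diff = d;
        }

        /* Stage 2: among the entries at that alpha difference only, pick the
         * closest color. Alpha never contributes to the color distance. */
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            if (abs((int)(pal[i] >> 24) - target_a) != best_alpha_diff)
                continue;
            int d = rgb_dist(pal[i], argb);
            if (best < 0 || d < best_dist) {
                best = i;
                best_dist = d;
            }
        }
        return best; /* -1 only if the palette is empty */
    }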