https://bugs.kde.org/show_bug.cgi?id=475488
--- Comment #2 from gessel <ges...@blackrosetech.com> ---
I'm giving that a try: I cleared all unconfirmed faces and am re-scanning for
faces. I see that where the metadata is polluted with low-quality face data
(face tags I've confirmed, generally correctly, but that have low pixel counts
or low contrast), using that data to identify more faces yields a true-positive
rate of roughly zero (about 1 in 1,000 suggestions is correct), even with tens
or hundreds of high-quality face tags for the same person. While this is
clearly user error, it would help users rebuilding the training data set if
digiKam offered to clear low-quality faces, either interactively or
automatically, based on similar criteria. Some confirmed faces are just a few
dozen pixels square, noisy, blurry, or very low contrast.

It would seem fairly plausible to (a rough sketch of the filtering steps is at
the end of this comment):

* Scan the entire collection for confirmed faces.
* Compute each confirmed face rectangle's total pixel count and compare it to
some threshold; offer to delete/auto-delete/exclude from training any face
rectangle below the selected minimum.
* Pass the survivors to the Image Quality Sorter algorithm to compute blur,
noise, and under/over-exposure levels; offer to delete/auto-delete/exclude
from training any face rectangle below the selected minimum quality.
* Flush the recognition training data.
* Rebuild the recognition training data from the good faces.
* Rescan the collection with the clean, high-quality recognition data.

This is independent of the sometimes necessary, human-guided task of ensuring
face tags are not mixed up between people, which would also obviously confuse
the algorithm. A useful non-destructive option would be to simply tag
confirmed but low-quality faces as not suitable for training. That gets back
to the original ask of not considering them in the first place, but here I'm
suggesting it as part of a new "maintenance" option for resetting/refreshing
the face recognition engine.
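
To make the thresholds concrete, here is a minimal Python sketch of the
quality gate described in steps 2 and 3 above. Everything in it is
illustrative: the threshold values, the helper name usable_for_training, and
the use of OpenCV as a stand-in for digiKam's actual Image Quality Sorter
metrics are all my assumptions, not digiKam code.

import cv2  # OpenCV, standing in for digiKam's quality metrics (assumption)

MIN_PIXELS = 64 * 64     # reject faces smaller than ~64x64 px (illustrative)
MIN_CONTRAST = 20.0      # grayscale standard deviation (illustrative)
MIN_SHARPNESS = 100.0    # variance of the Laplacian (illustrative)

def usable_for_training(image_path, rect):
    """Return True if a confirmed face crop looks good enough to train on.

    rect is (x, y, w, h) in image pixel coordinates.
    """
    x, y, w, h = rect
    if w * h < MIN_PIXELS:                 # step 2: pixel-count threshold
        return False
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False
    crop = img[y:y + h, x:x + w]
    if crop.size == 0:                     # rectangle outside the image
        return False
    if crop.std() < MIN_CONTRAST:          # flat, low-contrast crop
        return False
    sharpness = cv2.Laplacian(crop, cv2.CV_64F).var()
    return sharpness >= MIN_SHARPNESS      # step 3: blur check

if __name__ == "__main__":
    # Hypothetical input: confirmed face tags as (image_path, (x, y, w, h)).
    confirmed_faces = [("photo1.jpg", (120, 80, 48, 48))]
    good = [f for f in confirmed_faces if usable_for_training(*f)]
    print(f"{len(good)} of {len(confirmed_faces)} faces pass the quality gate")

The survivors would be the only faces fed to the rebuilt recognition model;
the rest would be flagged (or tagged "not suitable for training") rather than
deleted.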