Mar 15, 2021, 22:02 by fnou...@gmail.com:

> It's actually closer to normal YUV than YCoCg. If you look at the
> coefficients of normal YUV:
>
> r = y + 1.14v
> g = y - 0.39u - 0.58v
> b = y + 2.03u
>
> YCoCg:
>
> r = y + co - cg
> g = y + cg
> b = y - co - cg
>
> the format used in Actimagine:
>
> r = y + 2v
> g = y - 0.5u - v
> b = y + 2u
>
> You can see it's more like YUV than YCoCg. That's also why the decoded
> colors currently still look "alright". I think it wouldn't be a good idea
> to use converted reference frames and then convert back, as that would
> likely introduce errors. But as you say, this codec is, as far as I know,
> never used for large frame sizes, so keeping an extra frame shouldn't
> really be an issue, and it avoids other problems.
>
Right, I couldn't remember the YUV formula offhand, and the shifts made it look like YCoCg. In that case, you can generate magic constants to multiply by, then shift the u and v components back down. For such small numbers you can even do it by hand: for example, 1.14/2 = 0.57, and 0.57 * v ≈ (584 * v) >> 10, since round(0.57 * 1024) = 584. You'll want to scale the constant as high as you can without overflowing the multiply, then shift down, to reduce rounding error (the truncation always rounds toward zero).
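For illustration, here is a minimal C sketch of that multiply-and-shift idea, using the standard YUV coefficients from the quoted mail. The FIX macro, the 16-bit shift, and the function names are my own assumptions for the example, not anything from the actual decoder:

/* Minimal sketch: approximate each fractional coefficient with an integer
 * constant and a right shift, keeping the constant as large as possible
 * without overflowing the intermediate multiply. */
#include <stdint.h>
#include <stdio.h>

#define SHIFT 16
#define FIX(x) ((int)((x) * (1 << SHIFT) + 0.5))

static uint8_t clip_uint8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : v;
}

/* YUV -> RGB with the 1.14 / 0.39 / 0.58 / 2.03 coefficients.
 * u and v are assumed to be centered around 0 (128 already subtracted). */
static void yuv_to_rgb(int y, int u, int v, uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = clip_uint8(y + ((FIX(1.14) * v) >> SHIFT));
    *g = clip_uint8(y - ((FIX(0.39) * u + FIX(0.58) * v) >> SHIFT));
    *b = clip_uint8(y + ((FIX(2.03) * u) >> SHIFT));
}

int main(void)
{
    uint8_t r, g, b;
    yuv_to_rgb(128, 30, -20, &r, &g, &b);
    printf("%u %u %u\n", r, g, b);
    return 0;
}

With a 10-bit shift instead of 16, the halved 1.14 coefficient from the example above comes out as round(0.57 * 1024) = 584, which is where the (584 * v) >> 10 figure comes from.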