Hi there, expert ffmpeg users. Back story: the Samsung Gear 360 camera produces 4K HEVC-encoded video clips, but packs the raw input from two sensors onto one frame. The sensors sit behind 190-degree fisheye glass, so each frame shows the two images as two circles (spheres?). Using ffmpeg, Hugin, and some scripting, each image is un-distorted according to its lens characteristics, and the two are stitched together into an equirectangular frame. The stitching template does not change from frame to frame, but it still takes a long time to chew through thousands of frames, and the quality loss is horrible.
tl;dr: Each pixel in the input frame ends up somewhere on the output frame in a predictable way.

On to the question: rather than computing the location of each pixel for every frame, it would be much more efficient to generate a displacement map once and just move the pixels. Actually generating such a map might be a bit of a trick, though. Does anyone in the community have ideas on how to generate and use such a map?

Kind regards,
-- 
Evert Vorster
Isometrix Acquisition Superchief
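P.S. A sketch of the direction I have in mind, in case it helps the discussion: ffmpeg ships a "remap" filter that takes the main video plus two 16-bit grayscale map streams (xmap, ymap) and, for every output pixel, copies the source pixel whose coordinates are stored in the maps, so the projection math runs once, offline, instead of once per frame. The script below is an untested Python sketch with the geometry assumed rather than measured: equidistant 190-degree lenses, two 1920x1920 circles side by side in a 3840x1920 frame, and a hard seam at the +/-90-degree meridian with no blending. The constants would need tuning against real Gear 360 footage (ideally from Hugin's actual lens parameters).

#!/usr/bin/env python3
# Generate xmap.pgm / ymap.pgm for ffmpeg's remap filter.
# Assumed geometry (adjust to your footage): 3840x1920 dual-fisheye
# input, two 1920x1920 circles side by side, equidistant 190-degree
# lenses, 3840x1920 equirectangular output, hard seam at +/-90 deg.
import math

IN_W, IN_H = 3840, 1920          # dual-fisheye source frame
OUT_W, OUT_H = 3840, 1920        # equirectangular output frame
FOV = math.radians(190.0)        # field of view of each lens
R = IN_H / 2.0                   # radius of each fisheye circle

def write_pgm16(path, rows, maxval):
    # 16-bit binary PGM, big-endian samples; decodes as gray16 in ffmpeg.
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n65535\n" % (len(rows[0]), len(rows)))
        for row in rows:
            f.write(b"".join(min(max(v, 0), maxval).to_bytes(2, "big")
                             for v in row))

xmap, ymap = [], []
for j in range(OUT_H):
    lat = math.pi / 2 - math.pi * (j + 0.5) / OUT_H      # +90..-90 deg
    xrow, yrow = [], []
    for i in range(OUT_W):
        lon = 2 * math.pi * (i + 0.5) / OUT_W - math.pi  # -180..+180 deg
        # unit ray for this output pixel; +z points out of the front lens
        x = math.cos(lat) * math.sin(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.cos(lon)
        if z >= 0.0:
            cx = R          # front circle centred at (R, R)
        else:
            cx = 3.0 * R    # back circle centred at (3R, R)
            # whether the back image is mirrored depends on the camera;
            # adjust these signs if the rear hemisphere comes out flipped
            x, z = -x, -z
        theta = math.acos(max(-1.0, min(1.0, z)))  # angle off lens axis
        rad = theta / (FOV / 2.0) * R              # equidistant projection
        phi = math.atan2(y, x)
        xrow.append(int(round(cx + rad * math.cos(phi))))
        yrow.append(int(round(R - rad * math.sin(phi))))
    xmap.append(xrow)
    ymap.append(yrow)

# pure-Python loop over ~7M pixels is slow, but it only runs once
write_pgm16("xmap.pgm", xmap, IN_W - 1)
write_pgm16("ymap.pgm", ymap, IN_H - 1)

Once the maps exist, every clip can be unwarped in a single pass, e.g.:

ffmpeg -i in.mp4 -i xmap.pgm -i ymap.pgm \
       -lavfi "[0:v][1:v][2:v]remap" -c:v libx265 -crf 18 out.mp4

One caveat: remap does a plain lookup with no interpolation, so rendering the maps and output at a higher resolution and scaling down afterwards may help with the quality loss mentioned above.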