Hi all,

My app needs the best possible video playback performance at high resolutions.
I have a graphics engine polling decoded video frames and an audio engine
polling decoded audio packets.
The problem is that sometimes the audio thread needs data that is not ready
yet, so it decodes packets until it gets the required audio data. In that case
the video frames it encounters along the way are also decoded and pushed into a
queue. But if that means decoding 10 HD video frames first, I won't get the
audio data in time and I get audio glitches. (I have a few videos in which the
packets are interleaved as 10 consecutive video frames, then audio, then 10
more video frames, and so on.)
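
To make the situation concrete, here is roughly what that catch-up path looks
like (a simplified sketch of my code using the standard libavformat/libavcodec
API; push_video_frame()/push_audio_frame() are placeholders for my engines'
queues, and frame refcounting/cloning is omitted):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Called by the audio thread when it runs out of decoded audio. */
static int decode_until_audio(AVFormatContext *fmt,
                              AVCodecContext *vdec, AVCodecContext *adec,
                              int video_idx, int audio_idx)
{
    AVPacket *pkt   = av_packet_alloc();
    AVFrame  *frame = av_frame_alloc();
    int got_audio   = 0;

    while (!got_audio && av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == video_idx) {
            /* I can't just drop the packet, so the video frame gets
             * decoded and queued even though only audio is needed. */
            if (avcodec_send_packet(vdec, pkt) >= 0)
                while (avcodec_receive_frame(vdec, frame) >= 0)
                    push_video_frame(frame);
        } else if (pkt->stream_index == audio_idx) {
            if (avcodec_send_packet(adec, pkt) >= 0)
                while (avcodec_receive_frame(adec, frame) >= 0) {
                    push_audio_frame(frame);
                    got_audio = 1;
                }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    return got_audio ? 0 : AVERROR_EOF;
}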
What should I do in this case?

I see various solutions:

 1- I could cache more than one second of audio and video and I should be OK,
but with 4K movies, keeping 30 decoded frames means a huge amount of memory
(see the rough estimate after this list), and I would like to avoid that.
 2- I could open the file twice, once for video and once for audio, but that
would use even more RAM and hurt performance because the file would be parsed
twice.
 3- I could skip decoding video frames while I'm missing audio data, but then I
would have to seek back afterwards to get a clean video frame (i.e. decode
again from the previous keyframe), so I don't think that's an option at all.
 4- Any better option?
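
Regarding option 1, here is my back-of-the-envelope check of the memory cost,
assuming the decoder outputs 3840x2160 yuv420p frames:

#include <stdio.h>
#include <libavutil/imgutils.h>
#include <libavutil/pixfmt.h>

int main(void)
{
    /* Size of one uncompressed 4K yuv420p frame. */
    int frame_bytes = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, 3840, 2160, 1);
    printf("one frame : %.1f MiB\n", frame_bytes / (1024.0 * 1024.0));
    printf("30 frames : %.1f MiB\n", 30.0 * frame_bytes / (1024.0 * 1024.0));
    return 0;
}

That comes out to roughly 12 MB per frame, so around 350 MB just to buffer one
second of 4K video, which is what I would like to avoid.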

I hope I’m on the right mailing list.
Thanks a lot,
Kind regards,
Matt Beghin