Dear FFmpeg community,

We use ffmpeg, fed from a streaming server, as our stream source, and we extract
a frame and a one-second audio fragment every second with this command:

ffmpeg -i rtmp://localhost/live/STREAM_NAME \
    -r 1/1 -start_number 0 %d.jpg \
    -f segment -segment_time 1 -acodec pcm_s16le -ac 1 -ar 16000 \
    -threads 0 -start_number 1 %d.wav

Our real-time stream-processing tool simply pulls the frames and WAVs from disk
and then forwards them over a message bus, roughly like the sketch below.
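For context, the tool's logic is approximately the following (a simplified
sketch using pyzmq; the endpoint, topic names, and polling interval are
illustrative, not our actual code):

import glob
import os
import time

import zmq  # pyzmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5555")  # endpoint is illustrative

while True:
    # Pick up the segments written by the ffmpeg command above.
    for path in sorted(glob.glob("*.jpg")) + sorted(glob.glob("*.wav")):
        # A real tool must make sure ffmpeg has finished writing the
        # file before reading it; that check is omitted for brevity.
        with open(path, "rb") as f:
            payload = f.read()
        topic = b"frames" if path.endswith(".jpg") else b"audio"
        pub.send_multipart([topic, path.encode(), payload])
        os.remove(path)
    time.sleep(0.2)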

We want to customize FFmpeg so that it publishes the frames and WAVs directly
to a ZMQ topic instead of writing them to disk.
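To make the intent concrete, a consumer on the other end of such a topic would
look roughly like this (again only a sketch; the endpoint, topic names, and
message layout are assumptions matching the publisher sketch above):

import zmq  # pyzmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://localhost:5555")  # endpoint is illustrative
sub.setsockopt(zmq.SUBSCRIBE, b"frames")
sub.setsockopt(zmq.SUBSCRIBE, b"audio")

while True:
    topic, name, payload = sub.recv_multipart()
    print(f"{topic.decode()}: {name.decode()}, {len(payload)} bytes")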

Could you advise whether this is feasible (i.e., whether there are any
engineering obstacles)? And if we implement it, could we contribute the
changes back to the FFmpeg codebase?

Regards,

Oleksii