Hello,

AAC audio streams offer the possibility of transmitting additional binary data alongside the audio data. The corresponding syntax element is called a "data stream element" (DSE). On the new ARD satellite audio transponder on Astra 19.2E this is used by NDR, for example, to transmit RDS (Radio Data System). The audio format used is AAC-LATM.
In the current FFmpeg git code, DSEs are skipped. For testing, I added a hex dump of the DSE data in libavcodec/aacdec_template.c (by changing skip_data_stream_element()), and in that hex dump I see exactly the data I would like to get out of the audio stream into my application "ts2shout" with the help of FFmpeg's libavcodec.

If I want to implement DSE support, what is the right "FFmpeg way" to expose this binary data, embedded in the audio frames, to a user application? Metadata seems to be intended more for "static" properties of files, and it is text only. Would AVFrameSideData be the right mechanism to transfer the data to the user application? Is there a suitable generic side-data type, or would I have to define a new one in the AVFrameSideDataType enum, given that a single audio frame can contain more than one DSE?

My plan is to later pass the individual AAC frames to libavcodec for decoding (as described in the example in doc/examples/decode_audio.c) and possibly only evaluate the respective DSEs.

Regards,

Carsten

More about ts2shout: https://github.com/carsten-gross/ts2shout

-- 
Carsten Gross | http://www.siski.de/~carsten/