> On 7 Feb 2023, at 11:05, Alexander Bieliaev via ffmpeg-user 
> <ffmpeg-user@ffmpeg.org> wrote:
> 
> I am processing audio chunks programmatically using the ffmpeg library
> for C#. First I divide the input audio (in .wav format) into chunks of 1
> minute each (I can't process the whole audio for specific reasons), then
> prepend its header to each chunk so it can be recognized and processed,

I don’t get it. If you speak C and you have uncompressed Wave input, it is as 
simple as opening the file, finding the "data" chunk in the RIFF and copying 
out X bytes (depending on the bit depth / sample rate / channel count).
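
Something along these lines, a rough sketch (untested, assumes a canonical 
little-endian PCM WAV, a little-endian host, and error handling stripped down):

/*
 * Walk the RIFF chunks, read the format from "fmt ", find "data",
 * and work out how many bytes make up exactly one minute of audio.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f)
        return 1;

    char id[4];
    uint32_t size;
    uint32_t rate = 0;
    uint16_t channels = 0, bits = 0;

    fseek(f, 12, SEEK_SET);                 /* skip "RIFF" <size> "WAVE" */
    while (fread(id, 1, 4, f) == 4 && fread(&size, 4, 1, f) == 1) {
        if (!memcmp(id, "fmt ", 4)) {
            uint8_t fmt[16];
            if (fread(fmt, 1, 16, f) != 16)
                break;
            channels = fmt[2] | fmt[3] << 8;
            rate     = fmt[4] | fmt[5] << 8 | (uint32_t)fmt[6] << 16
                              | (uint32_t)fmt[7] << 24;
            bits     = fmt[14] | fmt[15] << 8;
            fseek(f, size - 16, SEEK_CUR);  /* skip any fmt extension */
        } else if (!memcmp(id, "data", 4)) {
            /* one minute of PCM = rate * channels * bytes per sample * 60 */
            uint32_t per_minute = rate * channels * (bits / 8) * 60;
            printf("data: %u bytes, one minute = %u bytes\n", size, per_minute);
            /* fread() per_minute bytes at a time from here for each chunk */
            break;
        } else {
            fseek(f, size + (size & 1), SEEK_CUR);  /* chunks are word aligned */
        }
    }
    fclose(f);
    return 0;
}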


> then I get the raw PCM of each chunk, replace some parts of the PCM with sine
> wave data (adding beeps), then transform those PCM chunks into MP3 chunks
> and write them to a stream. The final step of concatenating the audio parts is
> *NOT* performed by ffmpeg; I just write the data chunks to the destination
> stream. I am facing the problem that there are noticeable transitions between
> the 1 minute chunks in the resulting audio (clicks/silence/changes of
> volume/shifting). How can I smooth out the start/end of each chunk so that
> when I put them together there are no noticeable transitions?
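
If the clicks come from the edit points themselves (a beep spliced into the 
middle of a waveform, or a chunk boundary landing mid-cycle), the usual fix is 
a short fade at the start and end of each chunk / edit before the MP3 encode. 
Rough sketch for 16-bit interleaved PCM (untested, just to show the idea):

/* apply a short linear fade-in / fade-out to a buffer of 16-bit
 * interleaved PCM so its edges start and end near zero amplitude */
#include <stdint.h>
#include <stddef.h>

static void fade_edges(int16_t *pcm, size_t frames, int channels,
                       int sample_rate, int fade_ms)
{
    size_t fade = (size_t)sample_rate * fade_ms / 1000;   /* frames to fade */
    if (fade > frames / 2)
        fade = frames / 2;

    for (size_t i = 0; i < fade; i++) {
        float gain = (float)i / (float)fade;              /* 0.0 -> 1.0 */
        for (int c = 0; c < channels; c++) {
            pcm[i * channels + c] =
                (int16_t)(pcm[i * channels + c] * gain);              /* fade in  */
            pcm[(frames - 1 - i) * channels + c] =
                (int16_t)(pcm[(frames - 1 - i) * channels + c] * gain); /* fade out */
        }
    }
}

A fade of about 5 ms (a couple of hundred frames at 44.1 kHz) is normally 
enough to get rid of the click without being audible as a fade.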

From experience I know that segmenting Wave with -f segment does NOT produce a 
sample-accurate amount of data per segment; not sure if that is what you are 
facing. (It should not matter if you do a pure concat of the data, though.) 
It ’should’ be seamless.
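
For reference: at 44.1 kHz, 16 bit stereo, one minute is exactly 
44100 * 2 channels * 2 bytes * 60 = 10,584,000 bytes of PCM (2,646,000 frames), 
so anything that cuts on other than a whole frame boundary will not line up.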

So, if I can do this kind of stuff in pure Python, why hunt for the cause 
instead of just doing it yourself in C?

Bouke
