Hi, I'm sorry I haven't had much time to research the intricacies of the convolutional decoder implementation, but I have a question, more from a user's perspective, about the best way to use the API.
In the default implementation of the extended decoder ( https://github.com/gnuradio/gnuradio/blob/master/gr-fec/python/fec/extended_decoder.py ), the canonical way seems to be to multiply the input (assumed to be symbols scaled close to the [-1, 1] range) by 48 and then add 128 to bring it into the unsigned-char range the decoder expects (one byte per symbol, centered on 128). This works fine, except that it shows what looks like a very abrupt cliff at about 3 dB Eb/N0.

My hunch was that comparatively large momentary noise amplitudes at this threshold might cause a char overflow, so I modified the chain slightly: instead of passing the unaltered symbol into the decoder, I clipped it to [-1, 1] with a rail_ff block and multiplied by 128, in an attempt to devote the decoder's full 8-bit soft-decision range to the most ambiguous symbols while avoiding the char overflow. This seems to give a more gentle degradation of the BER instead of the abrupt slope from before, and maybe a small improvement in BER (not verified, could be bogus). A small numpy sketch of the arithmetic is below my signature.

I figured that either this is a good idea and might benefit more people, or I'm missing some subtlety of the Viterbi algorithm and should be corrected because I'm doing something wrong.

Thanks,
Adrian
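Here is the numpy sketch of the arithmetic I mean (just the scaling, not the actual flowgraph). Two things in it are assumptions on my part: I keep the +128 recentering in both chains, and I emulate the float-to-uchar conversion as a wrapping 8-bit cast, since I haven't checked whether the real float_to_uchar block wraps or saturates:

# Quick numpy check of the overflow idea -- just the scaling arithmetic,
# not a GNU Radio flowgraph.
import numpy as np

def to_uchar_wrapping(x):
    # Emulate a wrapping 8-bit conversion of float samples.  Whether the
    # real float_to_uchar block wraps or saturates I haven't verified;
    # this is just the worst case.
    return np.round(x).astype(np.int64) & 0xFF

soft = np.array([0.9, -1.1, 2.8, -2.9])   # noisy soft symbols, nominal +/-1

# Default chain: 48*x + 128.
default = to_uchar_wrapping(48.0 * soft + 128.0)

# Modified chain: clip to [-1, 1] first (what rail_ff does), then use
# (almost) the full range; I use 127 rather than 128 here so that a
# clipped +1 cannot land on 256.
modified = to_uchar_wrapping(127.0 * np.clip(soft, -1.0, 1.0) + 128.0)

print(default)    # [171  75   6 245]  -> the 2.8 outlier wraps to 6 and
                  #    suddenly looks like a very confident opposite symbol
print(modified)   # [242   1 255   1]  -> outliers just sit at the rails

For what it's worth, with the default scaling a symbol only needs to exceed roughly (255 - 128) / 48 ≈ 2.6 in magnitude to leave the 0..255 range, which occasional noise peaks around this Eb/N0 can plausibly reach.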