I thought I would join the crowd of people emailing the list this week about obscure issues with ancient versions of GNU Radio.
I have a flow graph that needs to operate at a variety of data rates, from a few kbps to about 1 Mbps. Due to the potential for very wide frequency errors, we still have to sample at >1 Msps and decimate for the lower bit rates. Toward the end of the receive chain there are a handful of blocks used for Viterbi node synchronization. I've found that the number of blocks in series (3-5), combined with the low data rates at this point in the flowgraph, leads to latencies on the order of 1-2 minutes. That is to say, once node synchronization is accomplished, it takes 1-2 minutes to flush these blocks and get the fresh, good data through. This is measured with function probes on the state of the sync process, and with BERT analysis of the demodulator output (over a TCP/IP socket).

- Unfortunately, upgrading to 3.7.x isn't a viable option in the near term. But have there been any fundamental changes to the scheduler that might avoid this problem?
- I've tried messing around with the output buffer size option in the flowgraph, but this seems to have a negligible impact.
- I can write some custom blocks to reduce the overall block count, but I have demonstrated that this provides only a linear improvement, rather than the two-orders-of-magnitude improvement I need.

Any general advice anyone can offer? It feels like the right solution is to force small buffer sizes on the relevant blocks (rough sketch of what I mean in the P.S. below)...

-John
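P.S. To make that last point concrete, here is a rough, untested sketch (3.6-era Python API) of what I mean by forcing small buffers on the sync chain. The null source / throttle / copy blocks are just stand-ins for my real blocks, and I'm assuming set_max_output_buffer() and the max_noutput_items argument to start() are both exposed in this release:

#!/usr/bin/env python
# Untested sketch: cap the output buffers of the low-rate node-sync
# blocks before the flowgraph starts, so only a small number of items
# can be "in flight" between them. Block names are placeholders.

from gnuradio import gr

class rx_latency_test(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "latency test")

        src      = gr.null_source(gr.sizeof_gr_complex)
        throttle = gr.throttle(gr.sizeof_gr_complex, 32e3)  # stand-in for the low-rate point
        sync_a   = gr.copy(gr.sizeof_gr_complex)            # stand-ins for the Viterbi
        sync_b   = gr.copy(gr.sizeof_gr_complex)            # node-sync chain
        sink     = gr.null_sink(gr.sizeof_gr_complex)

        # Has to happen before start()/run(), i.e. before buffers are
        # allocated; the scheduler rounds the request to a granularity
        # related to the page size, so the real buffer won't be exactly
        # 1024 items.
        for blk in (sync_a, sync_b):
            blk.set_max_output_buffer(1024)

        self.connect(src, throttle, sync_a, sync_b, sink)

if __name__ == '__main__':
    tb = rx_latency_test()
    # If start() takes max_noutput_items in this release, it also bounds
    # how much each work() call can produce per scheduler iteration.
    tb.start(2048)
    raw_input('Running; press Enter to stop\n')
    tb.stop()
    tb.wait()

If that's the right direction, I can push the same set_max_output_buffer() calls into the top_block generated by GRC; I'm mostly wondering whether the 3.6 scheduler will actually honor them.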