Re: Weird behaviour of the analog signal source (was: Re: How ensure consistency with timing signals)
Hi,

> How can (or better: *should*) a fully digital signal source have phase noise?

Limited-precision arithmetic.

> Also, for 1Hz at 5MSps I always get either 5005789 or 5005790 samples (instead of 5000000) ... this is fairly deterministic.

That's because the signal source works with phase increments per step: it computes how much the phase changes per sample. The limited precision of that number then creates precision errors.

> Example 1: I pipe the output of the signal source into a file:
>
> https://snipboard.io/xY1JvE.jpg
>
> Now, I would not care too much about a constant phase shift or similar, but it can be seen that the frequency slowly drifts (this is also seen if I just plot them on top of each other).

The frequency doesn't drift. The frequency is stable but is not exactly 1 Hz, which means the phase slowly drifts.

> Example 2: I extend the block diagram with blocks that should never alter the behaviour as they are only reading samples:
> [...]
> However, now the saved data is distorted:
>
> https://snipboard.io/amyn3X.jpg

Yeah, that's weird; not sure where that would come from. I looked at the code and can't see anything wrong with it, but then again I might not be looking at the code from the version you're running.

Cheers,
Sylvain
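The reported numbers are consistent with a fixed-point phase accumulator whose per-sample increment is truncated (not rounded) to a whole accumulator step. This is only a model of the behaviour described above, assuming a 32-bit accumulator and truncation; the actual GNU Radio implementation may differ:

```python
import math

fs = 5e6        # sample rate (Sa/s)
f = 1.0         # requested tone frequency (Hz)
ACC_BITS = 32   # assumed phase-accumulator width

# Ideal per-sample increment in accumulator steps: ~858.99 steps.
ideal_inc = f / fs * 2 ** ACC_BITS

# If the increment is truncated to a whole step, the fraction is lost:
inc = math.floor(ideal_inc)              # 858

f_actual = inc * fs / 2 ** ACC_BITS      # frequency actually generated
samples_per_cycle = fs / f_actual        # ~5005789.4, matching the
                                         # observed 5005789/5005790
print(f_actual, samples_per_cycle)
```

Under these assumptions the model reproduces both the ~0.116% frequency error and the two observed period lengths; with rounding instead of truncation the error for this case would drop to about 0.0008%.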
Re: DVB-T receiver problem (OFDM symbol acquisition)
Hi Ralf,

It should be right after the channel. It performs symbol acquisition, FFT and channel equalization (as well as sampling corrections and coarse and fine frequency corrections). Please see the examples folder in gr-isdbt (for instance rx_demo.grc, which also includes the gr-dtv blocks disabled, which should help you understand where to put the block).

best

On Tue, 3 Mar 2020 at 4:16, Ralf Gorholt () wrote:
> Dear Federico,
>
> unfortunately I have not been able to figure out how this block has to be used and where it has to be positioned in the flow graph. Although I know more about DVB-T today than three months ago, I am still a beginner :-)
>
> Perhaps it would help me if you could give me some hints.
>
> Thank you very much and kind regards,
>
> Ralf, DL5EU
>
> *Sent:* Monday, 2 March 2020, 13:47
> *From:* "Federico 'Larroca' La Rocca"
> *To:* "Ron Economos"
> *Cc:* "GNURadio Discussion List"
> *Subject:* Re: DVB-T receiver problem (OFDM symbol acquisition)
> Hi,
> Although I don't have an answer to Ralf's question, if the sampling rate seems to be a problem, gr-isdbt's OFDM Synchronization block (with "Interpolate" set to "yes") corrects the sampling rate; see https://iie.fing.edu.uy/publicaciones/2016/LFGGB16/ for an explanation of the algorithm (although it expects different tags and parameters, so that's probably why it won't work out of the box with the rest of the gr-dtv blocks or a DVB-T transmission). As Ron wrote, the rest of the block is mostly the same as in gr-dtv.
> best
> Federico
>
> On Mon, 2 Mar 2020 at 9:44, Ron Economos () wrote:
>> Did you read the README.md of gr-dvbt? It says:
>>
>> *Late note: As of 2015, I donated the gr-dvbt project to gnuradio. It is now integrated in the mainline of gnuradio/gr-dtv.*
>>
>> The OFDM symbol acquisition block in gr-dtv has been upgraded with the fixes from the ISDBT team.
>> See this commit:
>>
>> https://github.com/gnuradio/gnuradio/commit/761b62d4660a121c78b6a7ad17fd7b08badcbb88#diff-aa5858d955a31c6be8746db56ea13c6a
>>
>> However, there is still an issue with the block. It can't tolerate a sample rate difference between the transmitter and receiver. If the difference is large, the block will fail fairly quickly (minutes).
>>
>> I have a test OFDM acquisition block that prints out the drift. It can be found here:
>>
>> https://github.com/drmpeg/gr-dvbtx
>>
>> You may have to go back one commit to make it compile with the latest version of GNU Radio 3.7.
>>
>> Ron
>>
>> On 3/2/20 03:58, Ralf Gorholt wrote:
>>
>> Dear all,
>>
>> please excuse my long email, but I cannot explain my problem and what I have done so far in three words.
>>
>> I am currently working on a DVB-T receiver project to receive transmissions on 434 MHz with 2 MHz bandwidth or less using GNU Radio and an RTL-SDR stick. The flow graph is based on the examples in gr-dvbt. The transmitter is a HiDes model HV320E. Reception works, but the video stops after one minute or so while the constellation diagram is still active (dots are moving). I am no expert and have only little knowledge of DSP and DVB yet, but to me the problem seems to be in the OFDM symbol acquisition block.
>>
>> Conditions:
>> Linux Mint 19.3 in a virtual machine (VMware)
>> GNU Radio 3.7.11 if I am right (I would need to check at home)
>>
>> What I have done so far:
>>
>> I have created a flow graph for a DVB-T receiver based on the examples in gr-dvbt. The signal source is an RTL-SDR stick and the sample rate is 16/7 MHz (= 2.285714 MHz) to get 2 MHz bandwidth. The signal sink is a TCP server sink to which I connect with VLC media player to display the received video transport stream. This works, but after one minute or so the video stops while the constellation diagram is still active (dots are moving).
>> I am currently using QPSK, code rate 3/4 and guard interval 1/8, but I have also tried 16QAM, code rate 1/2 and guard interval 1/8 and have the same problem.
>>
>> To track down the source of the problem, I have created a file outside of GNU Radio that I can use in a file source instead of the RTL-SDR source of my flow graph. This allows me to make tests with an input signal that does not change between different tests.
>>
>> The data of this file are sent to the OFDM symbol acquisition block and the output of this block is stored in a second file. When the input signal, the algorithm and the parameters do not change between different tests, I expect the generated file to always be the same. However, this is not the case. The files that are generated with the output of the OFDM symbol acquisition block differ from each other. But when I send the content of one of those generated files to the rest of the receiver and do this several times, I always get the same transport stream at the output. I have verified this by using a file sink and comparing the files.
Aw: Re: DVB-T receiver problem (OFDM symbol acquisition)
Thank you very much, Federico. If I understand correctly, this block replaces the OFDM symbol acquisition block. I will have a look at the examples and try it.

Best regards,
Ralf, DL5EU
Re: Inject "tx_time" tags in a stream
Hi Derek,

Thank you very much for your detailed response. I thought that tag insertion could be done directly from GRC; I hoped that there would be some library to insert these tags in a stream. The truth is that I don't have enough knowledge to write my own block.

On the other hand, I think my problem (which is essentially a burst stream synchronized with a trigger signal) is conceptually very simple and must have been encountered by many people before. I've searched the whole list, however, and found nothing. I am also surprised that no one else has answered my question.

Regards,
José

Hi Jose,

You'll have to write a block to add those tags where you want them in the stream (at the start of each burst) and with the time you want them at (the time is relative to the USRP's internal timekeeper clock). You'll also need to read from or set the time on that clock at the start of the application.

The first part you can do with an embedded Python Block to add those tags. The second part you'll either need to modify the generated Python flowgraph or use the very latest 3.9 development code, which added the Python Snippets block. https://github.com/gnuradio/gnuradio/pull/3169

Ettus Research's UHD manual includes a helpful page on synchronization and how to read/set the time. https://files.ettus.com/manual/page_sync.html

Regards,
Derek

On 27/02/2020 19:33, jnu...@uvigo.es wrote:

Has anyone worked with tx_time tags?

jnu...@uvigo.es wrote:

Hi, all!

I am working on a proof of concept of a radar system for my doctoral thesis. I recently installed the great gr-pdu and gr-timing utilities from Sandia Labs and I am trying to develop a little application with GNU Radio Companion (GRC), based on a USRP N200 system. I essentially want to use the rising edges of a pulsed signal to generate another signal.
For this purpose, first the trigger signal is sampled with a USRP Source block, then it is threshold-detected by means of a "Threshold trigger" block and, finally, a message is generated with a "Tag message trigger" block. The output of this last block feeds a "PDU to Bursts" block, which enters a USRP Sink. I have checked with a "Tag Debug" block that the threshold detection and message generation are both working fine. At the output of the "PDU to Bursts" block, each burst is tagged with the "tx_sob" and "tx_eob" tags.

The problem arises when those bursts enter the USRP Sink. As far as I've read, this block transmits all bursts sequentially, ignoring the time separation that should be between them. Please take into account that the trigger pulses are not equally separated in time, so I must transmit one burst with each received pulse. I think the issue is that no "tx_time" tags are being generated and added to each burst.

I wonder if anyone has the same explanation for this issue and, hopefully, if anyone can guide me on how I can inject those "tx_time" tags properly into my stream with GRC. Any help will be appreciated!

Regards,
José
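As Derek's reply describes, a "tx_time" tag carries a PMT tuple of (uint64 full seconds, double fractional seconds) on the USRP timekeeper. The time arithmetic can be sketched in plain Python; the burst spacing and start time below are made-up example values. Inside an embedded Python Block you would wrap the result with pmt.make_tuple(pmt.from_uint64(full), pmt.from_double(frac)) and attach it with add_item_tag() at the burst's first sample:

```python
def tx_time_for_offset(offset, samp_rate, t0_full, t0_frac):
    """Map an absolute sample offset to the (full_secs, frac_secs)
    pair that a UHD 'tx_time' tag carries, anchored at USRP time t0."""
    t = t0_frac + offset / samp_rate
    full = t0_full + int(t)
    frac = t - int(t)
    return full, frac

# Example: bursts every 2500 samples at 1 MS/s, first burst at t = 2.0 s
samp_rate = 1e6
for k in range(3):
    offset = k * 2500
    print(offset, tx_time_for_offset(offset, samp_rate, 2, 0.0))
```

In the actual radar flowgraph the offsets would of course come from the detected trigger edges rather than from a fixed interval.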
Channel estimation OFDM saving taps
Hello. Thank you in advance for the help...

I already searched the mailing list and could not find how to properly see/save the channel taps from the OFDM Channel Estimation block. I saw that it is in a tag, but how can I save it?

What I'd love to find is a way to see the channel estimation with the 52 taps in a QT GUI plot. Is that possible? If it's not, saving it is already awesome.

Thank you!!
Re: Weird behaviour of the analog signal source (was: Re: How ensure consistency with timing signals)
Hi Sylvain,

Thank you very much for your answer. Much appreciated!

> Von: "Sylvain Munaut" <246...@gmail.com>
>
> > How can (or better: *should*) a fully digital signal source have phase noise?
>
> Limited precision arithmetic
>
> > Also, for 1Hz at 5MSps I always get either 5005789 or 5005790 samples (instead of 5000000) ... this is fairly deterministic.
>
> That's because the signal source works with phase increments per step. So it computes how much the phase changes per sample. The precision limit of that number will then create precision errors.
>
> > Example 1: I pipe the output of the signal source into a file:
> >
> > https://snipboard.io/xY1JvE.jpg
> >
> > Now, I would not care too much about a constant phase shift or similar, but it can be seen that the frequency slowly drifts (this is also seen if I just plot them on top of each other).
>
> Frequency doesn't drift.
> Frequency is stable but is not exactly 1 Hz which means the phase slowly drifts.

Ok, that explains a lot: we are facing a constant frequency offset. I can reproduce the FFT shapes from the GR-exported data in MATLAB fairly accurately by creating, for example:

f = 1;
fs = 5e6;
data_gr = read_float_binary('baszmeg1.dat');
t = 0:1/fs:(length(data_gr)-1)/fs;
data_ideal = sin(2*pi*t*f)';
data_nonideal = sin(2*pi*t*1.0012*f)';

data_gr and data_nonideal match well.

I am by no means an expert on this, but just for my understanding I would be curious:

1.) I still do not understand why for 1 Hz at 5 MSps I can get a period that's "5005789.5" samples on average. The frequency error is a whopping 0.1158% ((5005789.5-5000000)/5000000*100). Huge.

2.) Why is it implemented that way? Why do Simulink, ADS, Cadence Spectre et al. provide correct and accurate results?

3.) What would you do if you wanted to create precise timing signals? Is a custom block really the only way? And then, how would you implement it?
> > Example 2: I extend the block diagram with blocks that should never alter the behaviour as they are only reading samples:
> > [...]
> > However, now the saved data is distorted:
> >
> > https://snipboard.io/amyn3X.jpg
>
> Yeah, that's weird, not sure where that would come from.
> I looked at the code and can't see anything wrong with it, but then again I might not be looking at the code from the version you're running.

Two additional weird observations (block diagram: https://snipboard.io/W6kyF0.jpg):

1.) If I remove the second signal source, Complex to Float and Controller, I still see these spikes appearing in the "QT GUI Time Sink". However, they do not show up in the file exported by the "File Sink". This is contradictory because they clearly process the same signal.

2.) If I put all these blocks back in again, I see the weird distorted waveform ALSO in the exported file (not only in the QT GUI Time Sink). It is interesting that it requires the "Controller" and the second signal source to be enabled, as well as the QT GUI Time Sink displaying all signals at the same time. If any of these conditions is not met, the signal is distorted only in the QT GUI Time Sink but not in the exported file.

I know that it is impossible to give a solution to this remotely, but this is so weird that I don't even know where to start looking.

If it were you, how would you approach debugging this?

Thanks,
Lukas
Re: Weird behaviour of the analog signal source (was: Re: How ensure consistency with timing signals)
Hi,

> I am by no means an expert on this but just for my understanding I would be curious:
>
> 1.) I still do not understand why for 1 Hz at 5MSps I can get a period that's "5005789.5" samples on average. The frequency error is a whopping 0.1158%. Huge.

That's because yours is a rather extreme case. The ratio between your frequency and the sample rate is pretty large. The precision of the block would be somewhere around 0.25 ppb of the _sample_rate_, which is not that bad, but in the case of huge oversampling that can obviously become significant ... It's just a use case this block was not really designed for.

> 2.) Why is it implemented that way? Why do Simulink, ADS, Cadence Spectre et al. provide correct and accurate results?

A phase increment is a pretty common way to implement a digital oscillator. The advantage is that it's pretty fast, the state is limited (you just have the phase), and whatever error you have is constant and time-invariant.

> 3.) What would you do if you wanted to create precise timing signals? Is a custom block really the only way? And then, how would you implement it?

Yes, a custom block is the only way. And really, for _anything_ where you depend on precision, you _must_ be 100% in control of the arithmetic and how you deal with precision. GR never guarantees the implementation of its blocks, so if you use any default block that does math, the error might change from one version to the next. So IMHO any "metrology" type thing must re-implement every math operation and control how it deals with errors.

For the cases where you have huge oversampling, handling it the other way around would probably make more sense: you just keep "how many samples per cycle". Then you keep both an integer counter of how many samples you are into this cycle and a fractional offset, and generate the sin/cos values based on that.
This would probably perform worse for low-oversampling signals (like if you generated a 1 MHz sine sampled at 5 Msps), but for your case it will be better (actually, given that yours is an exact divider, it would be perfect, with no frequency error and as little phase noise as double precision permits).

> I know that it is impossible to give a solution to this remotely but this is so weird that I don't even know where to start looking.
>
> If it were you, how would you approach debugging this?

Look at the code ... At no point did you state which git commit you're working off, so I can't help there, but that's the only source of truth. Randomly adding blocks is useless. The only thing it points toward is that depending on how the block gets scheduled (i.e. how many samples it generates at each call to work()), the result changes, which should not happen.

Cheers,
Sylvain
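The alternative Sylvain sketches (tracking the position within the current cycle instead of accumulating a phase increment) could look roughly like this. This is only a sketch of the idea, not GR code, and `cycle_counter_osc` is a made-up name:

```python
import math

def cycle_counter_osc(f, fs, n):
    """Generate n sine samples by tracking the position inside the
    current cycle: an integer sample counter plus a fractional offset.
    For exact dividers (fs/f an integer) the fractional part stays
    zero, so no frequency error ever accumulates."""
    spc = fs / f          # samples per cycle (possibly non-integer)
    k = 0                 # integer sample index within the current cycle
    frac = 0.0            # fractional offset of the current cycle start
    out = []
    for _ in range(n):
        out.append(math.sin(2 * math.pi * (k + frac) / spc))
        k += 1
        if k + frac >= spc:           # cycle complete: carry the remainder
            frac = (k + frac) - spc
            k = 0
    return out
```

For the 1 Hz at 5 MSps case, `spc` is exactly 5000000, so the phase is re-anchored to zero every cycle and the only remaining error is the rounding of each individual `sin()` evaluation.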
Re: DVB-T receiver problem (OFDM symbol acquisition)
Hi Federico,

I think I have sent my previous email a bit too fast. After having looked at the code of the OFDM Synchronization block, it seems to me that this block is specific to ISDB-T. This would mean that I cannot use it for DVB-T because of incompatibilities between ISDB-T and DVB-T (e.g. carrier positions).

Kind regards,
Ralf

On 03.03.2020 13:05, Federico 'Larroca' La Rocca wrote:
> Hi Ralf,
> It should be right after the channel. It performs symbol acquisition, FFT and channel equalization (as well as sampling corrections and coarse and fine frequency corrections). Please see the examples folder in gr-isdbt (for instance rx_demo.grc, which also includes the gr-dtv blocks disabled, which should help you understand where to put the block).
> best
Re: Channel estimation OFDM saving taps
Hi Vinicius,

you can directly feed the output of the OFDM channel estimation block into a simple general block that you write yourself[1] – even in Python – and make that block

* either save the contents of the relevant tags to a file, or
* output a vector of (inverse) channel coefficients taken from the `ofdm_sync_eq_taps` tag.

Generally, however, never forget that OFDM channel estimates are *not* a property of the _channel_, but of the _receiver_: If you recall the effects of having a cyclic prefix in OFDM, you'll remember that it's not critical that your receiver is in perfect time synchrony with the beginning of the actual payload part of the symbol (after the CP). Instead, as long as your FFT starts *somewhere* in the cyclic prefix, you're fine synchronization-wise, since a (simulatedly) circular time shift (due to starting the FFT in the CP instead of exactly at the start of the symbol after the CP) is just a point-wise multiplication with a complex sinusoid after the FFT – and that means your timing offset will just manifest as a rotation of the channel coefficients. And you're correcting these anyway.

So, as long as a CP-OFDM receiver is consistent in the amount of time it starts doing the FFT before the end of the CP, the rotation each subcarrier coefficient experiences is always the same and will thus be corrected (e.g. through pilot estimation). But: that means your channel estimate is not the same you'd get when you're even a fraction of a sample off in timing compared to someone else. It's still the same ignoring complex rotation, so the power delay profile will be right, and when you agree on e.g. an average phase of symbols, you can compare different channel estimates. But don't directly compare the channel estimates from different receivers, or the same receiver over more than a frame, or after tuning.
Best regards,
Marcus

[1] https://tutorials.gnuradio.org

PS: long-term observers will know that I fought very hard to not embed loads of LaTeX in this reply.

On Tue, 2020-03-03 at 16:39 +0100, Vinicius Mesquita wrote:
> Hello.
> Thank you in advance for the help...
>
> I already searched the mailing list and could not find it how to properly see/save the channel taps from the OFDM Channel Estimation block. I saw that is in a tag, but how can I save it?
>
> What I'd love to find is a way to see the channel estimation with the 52 taps in a QT GUI plot. Is that possible? If it's not, saving it it's already awesome.
>
> Thank you!!
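For the first option Marcus lists (saving the tag contents to a file), the file-side part is straightforward. This sketch assumes the taps have already been extracted from the `ofdm_sync_eq_taps` tag (in a real block you would get them via pmt.c32vector_elements() on the tag's value); the 52 taps here are made up:

```python
import os
import tempfile
import numpy as np

def append_taps(taps, path):
    """Append one channel-estimate snapshot to a raw complex64 file,
    the same sample format GNU Radio's File Sink writes for complex
    streams."""
    arr = np.asarray(taps, dtype=np.complex64)
    with open(path, 'ab') as fh:
        arr.tofile(fh)
    return arr

# Example: 52 made-up unit-magnitude taps
path = os.path.join(tempfile.gettempdir(), 'chan_taps.c64')
taps = np.exp(2j * np.pi * np.linspace(0, 1, 52, endpoint=False))
logged = append_taps(taps, path)

# Magnitude in dB, the usual quantity one would plot per subcarrier
mag_db = 20 * np.log10(np.abs(logged))
```

Each call appends one 52-tap snapshot, so the log can later be read back with np.fromfile(path, dtype=np.complex64).reshape(-1, 52). For live viewing, feeding such a vector into a QT GUI Vector Sink is one way to get the plot you describe.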
Re: DVB-T receiver problem (OFDM symbol acquisition)
Hi Ron,

when I adapt the parameters of my flow graph, I get exactly the same result with your file as you do.

Kind regards,
Ralf

On 03.03.2020 02:22, Ron Economos wrote:

Yes, changing back to uint16_t is correct. But something is not correct with your file. When you read from a file, the sample rate doesn't matter. You shouldn't see any drift at all.

I have a test file on my website. It can be downloaded at:

http://www.w6rz.net/adv16.cfile

It's 934,551,552 bytes. It's meant to be run with the default test flow graph, dvbt_rx_8k.grc (8K, 2/3, 16QAM, 1/32 GI). On my setup, I get the following from the debug block every time:

OFDM Symbol Acquisition Restarted!
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
Cyclic Prefix position = 12671
>>> Done

That file was created by just using the transmitter test flow graph (dvbt_tx_8k.grc) and writing to a file instead of transmitting over the air.

Ron

On 3/2/20 12:45, Ralf Gorholt wrote:

Dear Ron,

the fixes seem to be included in the version that I have installed (3.7.11-10).
I have tried to build your debug block but I get an error message concerning a type cast:

ralfg@vm5:~/GNU Radio/src/gr-dvbtx-master/build$ make
[ 5%] Building CXX object lib/CMakeFiles/gnuradio-dvbtx.dir/dvbtx_ofdm_sym_acquisition_impl.cc.o
/home/ralfg/GNU Radio/src/gr-dvbtx-master/lib/dvbtx_ofdm_sym_acquisition_impl.cc: In member function 'int gr::dvbtx::dvbtx_ofdm_sym_acquisition_impl::peak_detect_process(const float*, int, int*, int*)':
/home/ralfg/GNU Radio/src/gr-dvbtx-master/lib/dvbtx_ofdm_sym_acquisition_impl.cc:54:68: error: cannot convert 'uint32_t* {aka unsigned int*}' to 'uint16_t* {aka short unsigned int*}' in argument passing
volk_32f_index_max_16u(&peak_index, &datain[0], datain_length);

To be able to compile the module, I have changed the type of peak_index to uint16_t, hoping this is correct.

When I run my flow graph on a live video, I can see that the position of the cyclic prefix changes constantly. The same happens when I take the data from the file as input (which I think is normal, because the signal has been sampled with the same sample rate). However, given that the data in the file don't change, I do not understand why two consecutive runs on the same data detect different positions of the cyclic prefix:

1st run:
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
Cyclic Prefix position = 3963
Cyclic Prefix position = 3978
Cyclic Prefix position = 3993
Cyclic Prefix position = 4009
Cyclic Prefix position = 4026
[... deleted ...]
Cyclic Prefix position = 4597
Cyclic Prefix position overflow!
OFDM Symbol Acquisition Restarted!
Cyclic Prefix position = 3460

2nd run:
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
OFDM Symbol Acquisition Restarted!
Cyclic Prefix position = 3963
Cyclic Prefix position = 3979
Cyclic Prefix position = 3995
Cyclic Prefix position = 4010
Cyclic Prefix position = 4025
[... deleted ...]
Cyclic Prefix position = 4595
Cyclic Prefix position overflow!
OFDM Symbol Acquisition Restarted!
Cyclic Prefix position = 3458

At the beginning of both runs the position of the cyclic prefix is 3963. This means that in both cases the first symbol is detected at the same position. How is it possible that the positions of the following symbols differ (e.g. 3978 vs. 3979)? All I did was stop and restart the flow graph. As everything in the block is pure and well-defined mathematics, when the conditions (data, parameters) do not change, to my understanding the result should always be the same. I may be wrong, but in this case I would like to understand why. Do you have an explanation for me?

Thank you very much for your help.

Kind regards,
Ralf, DL5EU
Wireshark output question
Dear all,

I am currently working on the IEEE 802.15.4 protocol with GNU Radio and different SDRs. I am using Dr. Basti's code for testing, and it works successfully, but I have a few questions.

I see two connections to Wireshark: one from the PHY layer's "rxout" pin and one from the "pdu out" of the MAC layer. When I disable the wire from the PHY layer to Wireshark, I still receive the packets in Wireshark as usual. But if I disable the "pdu out" to Wireshark wire, I do not receive the packets. The "pdu out" pin is fed from "app in" of Rime.

So is the output going from "pdu out" to Wireshark taken before modulation? Or is it the packet after modulation and demodulation coming back in via "pdu in"? And why do the packets still arrive in Wireshark when I disable the wire that runs from "rxout" of the PHY layer to Wireshark?

Please let me know if I need to ask the question more clearly. I am looking forward to someone's help in clearing up this doubt.

Thanks and regards,

Ranganathan Sampathkumar
gr-uhd: Switching DSP frequency of *RX* over stream tags in TX (USRP Sink)
Hi,

I need to (synchronously) switch the DUC/DDC frequency at certain sample intervals. I previously used the message ports for that and created timed commands. This worked nicely for the RX (USRP Source) and with analog retuning. However, it turned out that this does not work for the transmitter with DUC-only retuning, since the DUC/DDC does not have access to the MB clock, and gr-uhd does not add the sample timing information needed. See our discussion in http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2020-March/061615.html. It was suggested to try stream tags and to bounce this question to this mailing list.

I now use stream tags with the USRP Sink via the module below. The DSP retuning now seems to work nicely, but ONLY for TX. How can I retune the RX as well? My approach with the message ports would have allowed me to do both; stream tags, however, I can only use with "USRP Sink".

Any suggestions are highly appreciated.

Best,
Lukas

PS: This is the code which adds stream tags to retune the DUC for the USRP Sink:

import numpy as np
import pmt
from gnuradio import gr
from gnuradio import uhd


class blk(gr.sync_block):
    def __init__(self, hop_interval_samples=1000, hop_frequencies=[0, 1e6]):
        gr.sync_block.__init__(
            self,
            name='Controller',
            in_sig=[np.complex64],
            out_sig=[np.complex64]
        )
        self.hop_frequencies = hop_frequencies
        self.hop_interval_samples = int(hop_interval_samples)
        # state
        self.next_hop = self.hop_interval_samples
        self.current_freq_idx = 0

    def work(self, input_items, output_items):
        output_items[0][:] = input_items[0]
        window_start = self.nitems_read(0)
        window_end = window_start + len(input_items[0])
        # inclusive lower bound: a hop that falls exactly on the first
        # sample of this window must still be tagged
        while window_start <= self.next_hop < window_end:
            fcenter = self.hop_frequencies[self.current_freq_idx]
            key = pmt.intern("tx_command")
            value = pmt.make_dict()
            value = pmt.dict_add(value, pmt.to_pmt("lo_freq"),
                                 pmt.to_pmt(900e6))
            value = pmt.dict_add(value, pmt.to_pmt("dsp_freq"),
                                 pmt.to_pmt(-fcenter))
            self.add_item_tag(0, self.next_hop, key, value)
            self.next_hop += self.hop_interval_samples
            self.current_freq_idx = ((self.current_freq_idx + 1)
                                     % len(self.hop_frequencies))
        return len(output_items[0])
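[Editor's note: the tag-placement arithmetic in work() can be exercised without a flow graph. The standalone sketch below is an illustration only; hop_offsets is a made-up helper, not a gr-uhd API. It replays the same loop across successive work windows and shows that a hop landing exactly on a window boundary is still tagged when the lower bound is inclusive (window_start <= next_hop), a case a strict '>' comparison would skip.]

```python
def hop_offsets(total_samples, window_len, hop_interval):
    """Return the absolute sample offsets where retune tags would land,
    replaying the work() loop of the block above (editor's sketch)."""
    offsets = []
    next_hop = hop_interval
    window_start = 0
    while window_start < total_samples:
        window_end = min(window_start + window_len, total_samples)
        # inclusive lower bound: a hop exactly at window_start must fire
        while window_start <= next_hop < window_end:
            offsets.append(next_hop)
            next_hop += hop_interval
        window_start = window_end
    return offsets

# Hop every 1000 samples with work() called on 1000-sample windows:
# every hop falls exactly on a window boundary and must still be tagged.
print(hop_offsets(5000, 1000, 1000))  # [1000, 2000, 3000, 4000]
```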