Hello everyone,
currently I am working on a project to transmit and receive OFDM signals with USRP B200 hardware. For this purpose, I create my OFDM waveforms in Matlab, save the resulting IQ data as a binary file, and use the UHD example programs 'tx_samples_from_file' and 'rx_samples_to_file' to transmit the signals over a channel. Demodulation of the received samples is then done in Matlab again. Since I am using different modulation schemes (up to 1024-QAM), I noticed that the B200 hardware has its limits for these transmissions.

To characterize the TX error sources of my B200, I sent two sine tones and looked at the spurious elements with a spectrum analyzer. It looks like the main signal distortion is caused by the AD9364 TX amplifier (appearing as third-order intermodulation products in the spectrum), since DC offset and IQ imbalance are corrected internally and the phase noise is rather low. I therefore tried to find a TX gain setting that minimizes this distortion while still transmitting at rather high power. A TX gain value of 84 dB produced relatively good results on my B200, so I used this value for my later performance measurements.

To characterize the B200 performance, I used a test setup sending my signals with different hardware (B200, signal generator, spectrum analyzer) over a pure SMA cable connection with a 30 dB attenuator. I then compared the EVM values of the different hardware sending OFDM signals with a bandwidth of 20 MHz. In my first test, the TX:B200-RX:B200 setup produced the highest EVM values, whereas TX:Generator-RX:Analyzer had much lower EVM values. Sending with the B200 also seemed to be the main source of distortion, as the B200-Analyzer setup had worse EVM results than Generator-B200.
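For concreteness, here is a minimal sketch of the file interface and metrics I use. My actual processing is in Matlab, so treat this Python version as an illustrative translation rather than my exact code: interleaved float32 I/Q is the sample layout the stock UHD examples use with --type float, the EVM formula is one common RMS convention (normalizations vary), and the 7/5 resampling ratio corresponds to the 20 MHz -> 28 MHz oversampling I describe below.

```python
import numpy as np
from scipy.signal import resample_poly

def write_iq_file(iq, path):
    """Store complex baseband samples as interleaved float32 I/Q pairs,
    the layout tx_samples_from_file reads when run with --type float."""
    iq = np.asarray(iq, dtype=np.complex64)
    interleaved = np.empty(2 * iq.size, dtype=np.float32)
    interleaved[0::2] = iq.real   # I samples at even indices
    interleaved[1::2] = iq.imag   # Q samples at odd indices
    interleaved.tofile(path)

def read_iq_file(path):
    """Inverse: load an rx_samples_to_file capture back into complex samples."""
    raw = np.fromfile(path, dtype=np.float32)
    return raw[0::2] + 1j * raw[1::2]

def evm_percent(rx, ref):
    """RMS EVM in percent, normalized to the mean reference power
    (one common definition; other normalization conventions exist)."""
    rx, ref = np.asarray(rx), np.asarray(ref)
    err = np.mean(np.abs(rx - ref) ** 2)
    return 100.0 * np.sqrt(err / np.mean(np.abs(ref) ** 2))

def resample(iq, up, down):
    """Rational polyphase resampling, e.g. up=7, down=5 to bring a
    20 MS/s waveform to 28 MS/s before handing it to the USRP."""
    return resample_poly(iq, up, down)
```

On the UHD side I then transmit the file with, e.g., tx_samples_from_file --file ofdm.bin --type float --rate 20e6 --gain 84 plus the carrier frequency (flag names as in the stock UHD examples).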
For these tests I had set the sample rate of my hardware to exactly 20 MHz (--rate 20e6 in the UHD programs), since from what I read about the automatic master clock rate, I assumed the master clock is automatically set to a higher integer multiple of the sample rate (40 MHz in this case) inside the B200. But from the program output, it seems that the clock is set to exactly 20 MHz. This is why I started using oversampling (e.g. a 28 MHz sampling rate for a 20 MHz signal); the interpolation and decimation of the signal was done in Matlab. With this oversampling, the EVM values for my pure B200 connection improved significantly. On the other hand, the mixed-hardware results, especially the Generator-B200 connection, got worse with oversampling, which I cannot explain.

My first question: Is the internal decimation from master clock rate to sample rate done automatically by the UHD programs 'tx_samples_from_file' and 'rx_samples_to_file'? If yes, why does increasing my sample rate (oversampling) produce much better results if the master clock is already at 40 MHz for my 20 MHz sample rate tests? If no, is there a straightforward way to increase the clock rate with these UHD programs?

My second question: Does anybody have an idea why oversampling my signal only improves the performance of my B200-B200 connection, but does not improve or even worsens the other hardware connections?

My third question: Might I be missing any part of the B200 where I could still improve performance, and do you think a high-performance USRP (X3x0) with a UBX daughterboard might be capable of achieving higher modulation schemes? I looked in the datasheet and found better specs for the power amplifier of this daughterboard.

Best regards, Felix
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com