Hi folks, I recently ran some comparison tests with my USRP and USRP2 boxes and got slightly different results.
Test setup
- Signal source: HP signal generator (I forget the exact model)
- Daughterboards for USRP and USRP2: BasicRX
- Test configuration: I split a 12.1 MHz CW signal from the signal generator with a 3-dB divider and fed the two outputs to the USRP and USRP2 boxes.

Please find the two snapshots here: http://zoolu.co.kr/episodes/3
Both have a 4 MHz span, are centered at 12.3 MHz, and use an FFT size of 8196.

Questions

1. The noise floor of the USRP2 appears to be about 10 to 12 dB lower than that of the USRP, while the USRP shows a better SNR. When building a signal-detection solution, which is generally considered more important: a lower noise floor (presumably allowing weaker signals to be detected) or a better SNR?

2. The USRP shows a somewhat flatter noise floor, but the USRP2 has fewer spurious signals. Is this due to the different performance of the two ADCs, or are the two signals fed to each box actually different even though I branched them from a single source?

3. I can see a rather strong peak at 12.3 MHz in the USRP. I guess this comes from the local oscillator signal generated by the CORDIC in the FPGA. However, I don't see this signal in the USRP2. Is this leakage from the FPGA into the ADC on the USRP unit? If so, is there a good way to get rid of it?

4. I know that the relative signal level changes with the FFT size. How do you calibrate the level? What comes to mind immediately is to measure a reference signal of known level (e.g., 0 dBm) and add some code to calibrate the signal level. I just wonder whether there is a better method in general use.

Best regards,
Ilkyoung.

_______________________________________________
Discuss-gnuradio mailing list
Discuss-gnuradio@gnu.org
http://lists.gnu.org/mailman/listinfo/discuss-gnuradio
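P.S. To make question 4 concrete, here is a minimal sketch of the reference-tone calibration I had in mind (assuming NumPy; the sample rate, tone bin, and function names are just my own illustration, not anything from GNU Radio). Normalizing the FFT by the window sum keeps a tone's reading independent of FFT size, and a single known-level tone then gives a constant dB offset to map relative readings to dBm:

```python
import numpy as np

def power_spectrum_db(samples, fft_size=8192):
    """Windowed power spectrum in dB, normalized so a full-scale
    bin-centered tone reads ~0 dB regardless of FFT size."""
    win = np.hanning(fft_size)
    spec = np.fft.fft(samples[:fft_size] * win) / win.sum()
    return 20 * np.log10(np.abs(spec) + 1e-20)  # floor avoids log(0)

def calibration_offset(ref_samples, ref_dbm, fft_size=8192):
    """dB offset mapping relative FFT readings to absolute dBm,
    derived from a reference tone of known level."""
    measured_peak = power_spectrum_db(ref_samples, fft_size).max()
    return ref_dbm - measured_peak

# Synthetic example: a bin-centered, amplitude-1.0 complex tone
# that we declare to be the 0 dBm reference (made-up numbers).
n = np.arange(8192)
ref = np.exp(2j * np.pi * 205 * n / 8192)

offset = calibration_offset(ref, ref_dbm=0.0)

# Any later measurement is corrected by adding the offset;
# here a tone at half the reference amplitude (-6 dB).
meas = 0.5 * ref
level_dbm = power_spectrum_db(meas).max() + offset
```

In practice you would capture `ref` from the signal generator at a known output level instead of synthesizing it, and store one offset per gain setting.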