Hello,

I want to create a chirp signal starting from a 32-bit representation. I have 
generated the chirp signal internally in the FPGA at 32 bits and truncated it 
down to 12 bits.

I have removed the original connection from GPIF_D to tx_codec_d. The 32-bit 
chirp signal is applied internally, and the 12-bit output is then connected to 
tx_codec_d.
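
Roughly what I am doing, as a host-side C++ model (a minimal sketch; the 
generator in the FPGA is my own RTL, so the names here are illustrative, not 
the actual code):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Model of my linear-FM chirp: a phase accumulator whose instantaneous
    // frequency ramps from f0 to f1, producing 32-bit samples that are then
    // truncated to their 12 MSBs (the width tx_codec_d expects).
    std::vector<int16_t> make_chirp(double f0, double f1, double fs, size_t n)
    {
        const double kTwoPi = 6.28318530717958647692;
        std::vector<int16_t> out(n);
        double phase = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double f = f0 + (f1 - f0) * double(i) / double(n); // ramp f0 -> f1
            phase += kTwoPi * f / fs;
            if (phase > kTwoPi) phase -= kTwoPi;               // wrap phase
            int32_t s32 = int32_t(std::cos(phase) * 2147483647.0);
            out[i] = int16_t(s32 >> 20); // arithmetic shift keeps the sign
        }
        return out;
    }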

I am not sure if this works, but I tried loading this .bin file with the UHD 
C++ tools from the command prompt, and the output looked something like a 
chirp signal. However, whenever I change the parameters of my chirp signal, 
the bandwidth is always around 35 MHz.

For example, my signal is supposed to sweep from 15 MHz to 35 MHz, a bandwidth 
of 20 MHz. When I enter --freq=100, I thought this center frequency was 
supposed to shift the signal to 115-135 MHz, centered at 125 MHz. But what I 
see on the spectrum analyser is centered at 100 MHz, with a bandwidth of 
about 35 MHz.
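
For reference, this is roughly how I understood the tuning to work (a minimal 
UHD C++ sketch; the "type=b200" device args and the 40 MHz rate are 
assumptions for illustration). Printing the tune result shows how UHD splits 
the request between the RF LO and the DSP shift:

    #include <uhd/usrp/multi_usrp.hpp>
    #include <iostream>

    int main()
    {
        auto usrp = uhd::usrp::multi_usrp::make(uhd::device_addr_t("type=b200"));
        usrp->set_tx_rate(40e6);            // sample rate (assumed)
        uhd::tune_request_t req(100e6);     // what --freq requests
        uhd::tune_result_t res = usrp->set_tx_freq(req);
        std::cout << res.to_pp_string() << std::endl;   // LO vs DSP split
        std::cout << "actual: " << usrp->get_tx_freq() << " Hz" << std::endl;
        return 0;
    }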

Did I misinterpret something somewhere? 

Thanks in advance!
________________________________________
From: Derek Kozel [derek.ko...@ettus.com]
Sent: 26 April 2018 18:39
To: Yeo Jin Kuang Alvin (IA)
Cc: usrp-users@lists.ettus.com
Subject: Re: [USRP-users] B210 FPGA Code

Hello Yeo Jin Kuang Alvin,

I am not Ettus' expert in the B210 FPGA, but it would be highly unusual if 
there were arbitrary bit width changes. I believe that the GPIF bus is 16 bits 
of I and Q in parallel. The FX3 GPIF bus definition is included in the source 
and you can use Cypress's tools to look at the configuration of the bus in 
addition to the FPGA source code. There is considerable DSP implemented in the 
FPGA, including the decimation, interpolation, and frequency shifting 
operations. At a minimum, you would have to modify the UHD driver to remove 
support for those features if you bypass them.
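
As an untested sketch of what that implies: if you bypass the CORDIC/DSP 
frequency shift in the FPGA, UHD must at least be told not to place part of a 
tune on it. A manual DSP policy pins the digital shift to 0 Hz so the RF LO 
does all of the tuning:

    #include <uhd/types/tune_request.hpp>

    // Pin the DSP portion of the tune to 0 Hz; the RF LO covers the rest.
    uhd::tune_request_t req(100e6);
    req.dsp_freq_policy = uhd::tune_request_t::POLICY_MANUAL;
    req.dsp_freq        = 0.0;
    // usrp->set_tx_freq(req);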

My apologies if I've missed this in another email, but what is your goal with 
these changes?

Regards,
Derek

On Thu, Apr 26, 2018 at 10:18 AM, Yeo Jin Kuang Alvin (IA) via USRP-users 
<usrp-users@lists.ettus.com> wrote:
Hi everyone!

In the FPGA source code for the B210, I noticed that the input to GPIF_D is 
32 bits, which then goes through some FIFOs, widening to 64 bits and then 
narrowing to the 12-bit output (tx_codec_d).

May I know the purpose of widening the bus and then narrowing it down again?

Will it affect anything if I remove all of this, take the 12 MSBs of the 
GPIF_D (32-bit) input (truncation), and connect them directly to tx_codec_d 
(12 bits)? A small model of what I mean is sketched below.
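
If GPIF_D actually carries 16-bit I and Q side by side, the per-channel 
truncation I have in mind would look like this (a C++ model for illustration 
only, not the actual RTL):

    #include <cstdint>

    // Keep the 12 MSBs of a 16-bit I or Q sample. The arithmetic right
    // shift preserves the sign bit, like slicing sample[15:4] in the HDL.
    static inline int16_t msb12(int16_t sample)
    {
        return static_cast<int16_t>(sample >> 4); // range -2048 .. 2047
    }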

Thanks in advance!

_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


