Hi everyone!
For the FPGA source code written for the B210, I noticed that the input to
GPIF_D is 32 bits wide, and that it then goes through some FIFOs, upconverting
to 64 bits and then back down to a 12-bit output (tx_codec_d).
May I know what the purpose is of upconverting and then downconverting again?
Hello Yeo Jin Kuang Alvin,
I am not Ettus's expert on the B210 FPGA, but it would be highly unusual if
there were arbitrary bit-width changes. I believe that the GPIF bus is 16
bits of I and Q in parallel. The FX3 GPIF bus definition is included in the
source, and you can use Cypress's tools to look at it.
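Just to illustrate the widths involved (this is not the actual FPGA code, and the exact bit positions and truncation behaviour are my assumptions): each 32-bit GPIF_D word can be read as a 16-bit I and a 16-bit Q sample side by side, and the codec port (tx_codec_d) only takes 12 bits per component, roughly like this in C++:

#include <cstdint>

// Hypothetical illustration only -- not the B210 FPGA source.
// One 32-bit GPIF word is assumed to carry a 16-bit I and a 16-bit Q
// sample in parallel; the codec port is 12 bits per component.
struct IQ12 {
    int16_t i; // 12 significant bits
    int16_t q; // 12 significant bits
};

inline IQ12 unpack_gpif_word(uint32_t word)
{
    const int16_t i16 = static_cast<int16_t>(word >> 16);     // assumed: I in the upper half
    const int16_t q16 = static_cast<int16_t>(word & 0xFFFFu); // assumed: Q in the lower half
    // Keep only the 12 most significant bits of each component
    // (whether the real design truncates or rounds is an assumption here).
    return IQ12{static_cast<int16_t>(i16 >> 4), static_cast<int16_t>(q16 >> 4)};
}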
Dear users…
We have a question regarding the 1pps time stamping. We don’t know exactly
where it is best to ask this question, so please advise if needed:
We use a B200 board.
When we do a “sample tick reset” relative to 1pps using
virtual void set_time_next_pps(const time_spec_t &time_spec)
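For reference, and only as a sketch of the usual pattern (not necessarily what you are doing): with the C++ multi_usrp API a PPS-aligned time reset typically looks like the following. The device args and the choice of 0.0 as the new time are assumptions on my part.

#include <uhd/usrp/multi_usrp.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    // "type=b200" is an assumption; use your own device args.
    auto usrp = uhd::usrp::multi_usrp::make(std::string("type=b200"));

    // Arm the reset: the device time (tick count) is set to 0.0 on the
    // next PPS edge seen by the board.
    usrp->set_time_next_pps(uhd::time_spec_t(0.0));

    // Wait slightly longer than one second so the PPS edge has actually
    // passed before trusting the new time base.
    std::this_thread::sleep_for(std::chrono::milliseconds(1100));

    std::cout << "device time: " << usrp->get_time_now().get_real_secs()
              << " s" << std::endl;
    return 0;
}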
Dear users
On a B200 board, how do we, via the C API, use a higher ADC sample rate than
the output sample rate? Assume we want 16 MSps out and an ADC sample rate of
32 MHz, i.e. using the AD9364 HB3, HB2, HB1 and FIR filters to decimate by 2.
Should we just call
virtual void set_master_clock_rate(double rate)
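Not an authoritative answer, but as a sketch with the C++ multi_usrp API (the C API has matching uhd_usrp_* wrappers): the usual pattern is to set the master clock rate first and then request the lower streaming rate, letting UHD pick the AD9364 decimation. The device args below are assumptions.

#include <uhd/usrp/multi_usrp.hpp>
#include <iostream>

int main()
{
    // "type=b200" is an assumption; use your own device args.
    auto usrp = uhd::usrp::multi_usrp::make(std::string("type=b200"));

    // Converter (ADC/DAC) rate: 32 MHz, as in the question.
    usrp->set_master_clock_rate(32e6);

    // Host-side sample rate: 16 MSps, i.e. decimate by 2 inside the AD9364.
    usrp->set_rx_rate(16e6);

    std::cout << "master clock: " << usrp->get_master_clock_rate() << " Hz, "
              << "rx rate: " << usrp->get_rx_rate() << " Sps" << std::endl;
    return 0;
}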
Kind writers, can you tell me how to listen to the examples on a USRP E310?
As a follow-up to this:
I've just done a Wireshark capture of a stream, using benchmark_rate at
200 MSps and the CHDR dissector included in the repositories, and I see that
every data CHDR packet has a size of 0x1F40 (8000 bytes): is this to avoid
Ethernet fragmentation from the get-go? Should I do th
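As a sketch of where that 8000-byte figure can be influenced (the address and values below are assumptions for illustration, not the answer from this thread): UHD accepts recv_frame_size/send_frame_size hints both as device args and as per-stream args, e.g.:

#include <uhd/usrp/multi_usrp.hpp>
#include <uhd/stream.hpp>
#include <string>

int main()
{
    // Address and frame sizes are assumptions for illustration only;
    // the default frames usually fit in a jumbo Ethernet frame on 10 GbE.
    auto usrp = uhd::usrp::multi_usrp::make(std::string(
        "addr=192.168.40.2,recv_frame_size=4000,send_frame_size=4000"));

    // The same hints can be passed per stream.
    uhd::stream_args_t stream_args("sc16", "sc16");
    stream_args.args["recv_frame_size"] = "4000";
    uhd::rx_streamer::sptr rx_stream = usrp->get_rx_stream(stream_args);

    (void)rx_stream;
    return 0;
}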