[USRP-users] Re: RFNoC Replay module (DRAM to PC)

2024-10-29 Thread hui cj
The answer is yes.

You can refer to uhd/host/examples/python/replay_capture.py for more detail.

It’s the play-and-record example from Ettus, using the Replay Module.


rubenthill--- via USRP-users wrote on Tuesday, October 29, 2024 at 21:50:

> Hello Hui, could you solve this issue? I am trying something very
> similar. Since the hints you got from Wade (more precisely, the two
> examples you were pointed at) are Python and C++, my question is: can
> this be done using Python only? Say, transmit num_x samples (say
> 500k-2M) and receive num_x + margin at the same time, so that Tx and Rx
> are triggered at exactly the same time (to the precision the USRP can
> give)...
>
> I already have a lot of signal processing done in Python; otherwise
> I'll have to go deeper into C++.
___
USRP-users mailing list -- usrp-users@lists.ettus.com
To unsubscribe send an email to usrp-users-le...@lists.ettus.com


[USRP-users] Re: Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread Marcus D. Leech

On 29/10/2024 05:43, pigatoimdeafrance...@gmail.com wrote:


> I am having trouble setting up the USRP N310. uhd_find_devices finds
> the device (addr: 192.168.20.2, mgmt_addr: 192.168.10.2), but
> uhd_usrp_probe fails:
>
> [ERROR] [RFNOC::MGMT] EnvironmentError: IOError: recv error on socket:
> Connection refused
> ...
> Error: RuntimeError: Failure to create rfnoc_graph.
>
> Can anybody help me with this? Thanks.

Can you ping the N310 at the given address?

Is your firewall configured to allow packets to port 49152?



[USRP-users] Re: RFNoC Replay module (DRAM to PC)

2024-10-29 Thread rubenthill--- via USRP-users
Hello Hui, could you solve this issue? I am trying something very similar.
Since the hints you got from Wade (more precisely, the two examples you
were pointed at) are Python and C++, my question is: can this be done using
Python only? Say, transmit num_x samples (say 500k-2M) and receive
num_x + margin at the same time, so that Tx and Rx are triggered at exactly
the same time (to the precision the USRP can give)...

I already have a lot of signal processing done in Python; otherwise I'll
have to go deeper into C++.
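As a side note, the sample/time bookkeeping for such a burst is plain arithmetic. The sketch below uses made-up helper names (`plan_timed_burst`, `lead_time`) rather than real UHD calls; it only illustrates how the shared start time and the Rx length would be derived before handing them to the Python API (where the start time goes into a `uhd.types.TimeSpec` on both the TX metadata and the RX stream command, with `stream_now=False`, so the radio triggers both at the same device tick):

```python
# Illustrative bookkeeping for a simultaneous, timed TX/RX burst.
# plan_timed_burst and lead_time are invented names for this sketch,
# not part of the UHD API.

def plan_timed_burst(num_tx, margin, rate, device_time_now, lead_time=0.1):
    """Return (rx_num_samps, start_time_s, rx_duration_s)."""
    rx_num_samps = num_tx + margin               # receive a bit longer than we send
    start_time_s = device_time_now + lead_time   # shared trigger time for TX and RX
    rx_duration_s = rx_num_samps / rate
    return rx_num_samps, start_time_s, rx_duration_s

# Example: 2 M samples plus a 10k-sample margin at 245.76 Msps.
n_rx, t0, dur = plan_timed_burst(2_000_000, 10_000, 245.76e6, 5.0)
print(n_rx)                  # 2010000 samples to capture
print(round(dur * 1e3, 3))   # burst length in milliseconds
```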


[USRP-users] Drop packets and sequence errors during X410 DPDK benchmark test

2024-10-29 Thread dhpanchaai
Hi,

I’m trying to conduct the UHD benchmark test using DPDK on X410 radio. I’m 
using the NI Dual 100 Gigabit Ethernet PCIe NIC card, using the Mellanox 
drivers, and have the UC_200 fpga image loaded on the radio. However, I keep 
experiencing packet drops and sequence errors when I do that. Any idea why 
that’s happening?

/usr/local/lib/uhd/examples$ sudo ./benchmark_rate --args 
"type=x4xx,product=x410,addr=192.168.20.3,mgmt_addr=192.168.1.3,use_dpdk=1" 
--priority "high" --multi_streamer --rx_rate 245.76e6 --rx_subdev "B:1" 
--tx_rate 245.76e6 --tx_subdev "B:0" 

[INFO] [UHD] linux; GNU C++ version 11.4.0; Boost_107400; DPDK_21.11;
UHD_4.7.0.HEAD-0-ga5ed1872

EAL: Detected CPU lcores: 32

EAL: Detected NUMA nodes: 1

EAL: Detected shared linkage of DPDK

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available 1048576 kB hugepages reported

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :01:00.0 (socket 0)

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: :01:00.1 (socket 0)

TELEMETRY: No legacy callbacks, legacy socket not created

[00:00:00.000109] Creating the usrp device with:
type=x4xx,product=x410,addr=192.168.20.3,mgmt_addr=192.168.1.3,use_dpdk=1...

[INFO] [MPMD] Initializing 1 device(s) in parallel with args:
mgmt_addr=192.168.1.3,type=x4xx,product=x410,serial=328AFD7,name=ni-x4xx-328AFD7,fpga=UC_200,claimed=False,addr=192.168.20.3,use_dpdk=1

[INFO] [MPM.PeriphManager] init() called with device args
`fpga=UC_200,mgmt_addr=192.168.1.3,name=ni-x4xx-328AFD7,product=x410,use_dpdk=1,clock_source=internal,time_source=internal,initializing=True'.

Using Device: Single USRP:

  Device: X400-Series Device

  Mboard 0: x410

  RX Channel: 0

RX DSP: 0

RX Dboard: B

RX Subdev: 1

  TX Channel: 0

TX DSP: 0

TX Dboard: B

TX Subdev: 0

[00:00:01.970153754] Setting device timestamp to 0...

[00:00:01.971248509] Testing receive rate 245.76 Msps on 1 channels

Setting TX spb to 1992

[00:00:01.972147276] Testing transmit rate 245.76 Msps on 1 channels

UD[00:00:02.502074084] Detected Rx sequence error.

UD[00:00:03.501866063] Detected Rx sequence error.

UD[00:00:04.501965973] Detected Rx sequence error.

UD[00:00:05.501905705] Detected Rx sequence error.

UD[00:00:06.501533956] Detected Rx sequence error.

UD[00:00:07.501567020] Detected Rx sequence error.

UD[00:00:08.501554331] Detected Rx sequence error.

UD[00:00:09.501610267] Detected Rx sequence error.

UD[00:00:10.501971471] Detected Rx sequence error.

UD[00:00:11.501931301] Detected Rx sequence error.

[00:00:11.973155250] Benchmark complete.

Benchmark rate summary:

  Num received samples: 2344330478

  Num dropped samples:  113209128

  Num overruns detected:0

  Num transmitted samples:  2335492512

  Num sequence errors (Tx): 0

  Num sequence errors (Rx): 10

  Num underruns detected:   10

  Num late commands:0

  Num timeouts (Tx):0

  Num timeouts (Rx):0

Done!


[USRP-users] Re: Drop packets and sequence errors during X410 DPDK benchmark test

2024-10-29 Thread Marcus D. Leech

On 29/10/2024 19:38, dhpanch...@gmail.com wrote:


> I'm trying to conduct the UHD benchmark test using DPDK on an X410,
> and I keep experiencing packet drops and sequence errors:
>
> sudo ./benchmark_rate --args "type=x4xx,...,use_dpdk=1" --priority
> "high" --multi_streamer --rx_rate 245.76e6 --rx_subdev "B:1"
> --tx_rate 245.76e6 --tx_subdev "B:0"
>
> Num received samples: 2344330478
> Num dropped samples:  113209128
> Num sequence errors (Rx): 10
> Num underruns detected:   10
I don't think "multi_streamer" is going to do anything for you here,
since you're only configuring a single channel in each direction. I
*THINK* multi_streamer will have zero effect, but you could try again
without it.

Doing the math, your system is trying to move about 2 Gbyte/second
into/out of that NIC, and it may be running out of bus bandwidth and/or
CPU.

I assume that you've configured your CPU for "Performance" mode?

If you cut the sample rate in half, do you still see this problem?
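For reference, the arithmetic behind the "about 2 Gbyte/second" figure, assuming the default sc16 over-the-wire format (4 bytes per sample), plus the drop rate implied by the benchmark summary:

```python
# Back-of-envelope check of the throughput and loss numbers above.
# benchmark_rate streams complex int16 (sc16) by default: 2 bytes I +
# 2 bytes Q = 4 bytes per sample, in both directions at once.

RATE_SPS = 245.76e6      # --rx_rate / --tx_rate from the command line
BYTES_PER_SAMPLE = 4     # sc16 wire format (assumed default)

per_direction = RATE_SPS * BYTES_PER_SAMPLE   # bytes/s one way
total = 2 * per_direction                     # full duplex through the NIC

print(f"{per_direction / 1e9:.3f} GB/s per direction")   # 0.983 GB/s
print(f"{total / 1e9:.3f} GB/s through the NIC")         # 1.966 GB/s

# Reported loss: 113209128 dropped out of 2344330478 received samples.
drop_pct = 100 * 113209128 / 2344330478
print(f"{drop_pct:.1f}% of Rx samples dropped")          # 4.8%
```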



[USRP-users] Re: Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread Marcus D. Leech

On 29/10/2024 12:53, pigatoimdeafrance...@gmail.com wrote:


> Yes, both SFP+ ports are connected to the host. Both 192.168.20.2 and
> 192.168.10.2 can be pinged correctly.
>
> "udpv" was a typo of mine. The output of `sudo firewall-cmd
> --list-ports` is:
>
> 49152/udp

In the past when this has happened, it was due to swapped cables on the
two ports. Check that.

If that doesn't work, try temporarily disabling your firewall entirely.
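A swapped pair of cables shows up as a host NIC whose link partner sits on the other subnet. A quick host-side sanity check of the addressing plan (pure stdlib; the addresses are taken from the ifconfig output in this thread, and `peers_match` is just an illustrative helper):

```python
import ipaddress

# Host NIC -> expected N310 peer, from the ifconfig/sfp listings above.
links = {
    "enp3s0f0": ("192.168.20.1/24", "192.168.20.2"),  # expected sfp1 peer
    "enp3s0f1": ("192.168.10.1/24", "192.168.10.2"),  # expected sfp0 peer
}

def peers_match(host_cidr, peer_ip):
    """True if the peer address falls inside the host NIC's subnet."""
    return ipaddress.ip_address(peer_ip) in ipaddress.ip_interface(host_cidr).network

for nic, (host_cidr, peer) in links.items():
    print(nic, "ok" if peers_match(host_cidr, peer) else "SUBNET MISMATCH")

# A swapped pair of fibers would pair enp3s0f0 with 192.168.10.2:
print(peers_match("192.168.20.1/24", "192.168.10.2"))  # False
```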



[USRP-users] Re: Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread pigatoimdeafrancesco
Yes, both SFP+ ports are connected to the host.

On the host side, the IP addresses are:

`enp3s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000`
`inet 192.168.20.1  netmask 255.255.255.0  broadcast 0.0.0.0`
`ether 7c:c2:55:7b:35:7e  txqueuelen 1000  (Ethernet)`
`RX packets 4616  bytes 432264 (432.2 KB)`
`RX errors 0  dropped 90  overruns 0  frame 0`
`TX packets 2518  bytes 1371160 (1.3 MB)`
`TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0`

`enp3s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9000`
`inet 192.168.10.1  netmask 255.255.255.0  broadcast 0.0.0.0`
`ether 7c:c2:55:7b:35:7f  txqueuelen 1000  (Ethernet)`
`RX packets 1226  bytes 945608 (945.6 KB)`
`RX errors 0  dropped 198  overruns 0  frame 0`
`TX packets 15263  bytes 18973321 (18.9 MB)`
`TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0`

whereas on the N310 the interfaces are:

`sfp0  Link encap:Ethernet  HWaddr 00:80:2F:34:A1:BD`
`  inet addr:192.168.10.2  Bcast:192.168.10.255  Mask:255.255.255.0`
`  UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1`
`  RX packets:3403 errors:0 dropped:0 overruns:0 frame:0`
`  TX packets:1079 errors:0 dropped:0 overruns:0 carrier:0`
`  collisions:0 txqueuelen:1000`
`  RX bytes:869430 (849.0 KiB)  TX bytes:62857 (61.3 KiB)`

`sfp1  Link encap:Ethernet  HWaddr 00:80:2F:34:A1:BE`
`  inet addr:192.168.20.2  Bcast:192.168.20.255  Mask:255.255.255.0`
`  UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1`
`  RX packets:3775 errors:0 dropped:0 overruns:0 frame:0`
`  TX packets:403 errors:0 dropped:0 overruns:0 carrier:0`
`  collisions:0 txqueuelen:1000`
`  RX bytes:338406 (330.4 KiB)  TX bytes:603205 (589.0 KiB)`

Both 192.168.20.2 and 192.168.10.2 can be pinged correctly.

"udpv" was a typo of mine. The output of `sudo firewall-cmd --list-ports` is:

`49152/udp`


[USRP-users] Re: Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread Marcus D. Leech

On 29/10/2024 11:42, pigatoimdeafrance...@gmail.com wrote:


> N310 can be pinged with both addresses. I set up port 49152 and it
> should be listed (`49152/udpv`); however, the problem still persists.
When you say "both ports", do you mean that you have both SFP+ ports
connected to your host? Can you confirm the IP address configuration of
your host NICs, and the addresses on the N310? On the N310 with the
default FPGA image, which supports 1 GigE on SFP0 and 10 GigE on SFP1,
the addresses are 192.168.10.2 and 192.168.20.2.

The management (RJ-45) port is DHCP-enabled by default, so it will have
whatever address was assigned by your network.



[USRP-users] Re: Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread pigatoimdeafrancesco
Hi Marcus,

N310 can be pinged with both addresses.

I set up port 49152 and it should be listed:

$sudo firewall-cmd --list-ports

49152/udpv

However, the problem still persists.


[USRP-users] Re: Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread Marcus D. Leech

On 29/10/2024 11:42, pigatoimdeafrance...@gmail.com wrote:


> $sudo firewall-cmd --list-ports
>
> 49152/udpv

I'm not that familiar with "firewalld", but a protocol of "udpv" should 
perhaps be "udp" instead?








[USRP-users] Error: RuntimeError: Failure to create rfnoc_graph on USRP N310

2024-10-29 Thread pigatoimdeafrancesco
Hello,

I am having trouble setting up the USRP N310. Logs of $ uhd_find_devices are:

ERROR: ld.so: object '/opt/uhd/lib/libuhd.so.4.4.0' from LD_PRELOAD
cannot be preloaded (cannot open shared object file): ignored.
[INFO] [UHD] linux; GNU C++ version 9.4.0; Boost_107100;
UHD_4.4.0.HEAD-0-g5fac246b

-- UHD Device 0

Device Address: serial: 3249D76 addr: 192.168.20.2 claimed: False fpga: XG
mgmt_addr: 192.168.10.2 mgmt_addr: 192.168.20.2 name: ni-n3xx-3249D76
product: n310 type: n3xx

The command $ uhd_usrp_probe highlights issues regarding the host-USRP
connection:

ERROR: ld.so: object '/opt/uhd/lib/libuhd.so.4.4.0' from LD_PRELOAD
cannot be preloaded (cannot open shared object file): ignored.
[INFO] [UHD] linux; GNU C++ version 9.4.0; Boost_107100;
UHD_4.4.0.HEAD-0-g5fac246b
[INFO] [MPMD] Initializing 1 device(s) in parallel with args:
mgmt_addr=192.168.10.2,type=n3xx,product=n310,serial=3249D76,name=ni-n3xx-3249D76,fpga=XG,claimed=False,addr=192.168.20.2
[WARNING] [MPM.RPCServer] A timeout event occured!
[INFO] [MPM.PeriphManager] init() called with device args
`fpga=XG,mgmt_addr=192.168.10.2,name=ni-n3xx-3249D76,product=n310,clock_source=internal,time_source=internal'.
[ERROR] [RFNOC::MGMT] EnvironmentError: IOError: recv error on socket:
Connection refused
[ERROR] [RFNOC::GRAPH] IO Error during GSM initialization.
EnvironmentError: IOError: recv error on socket: Connection refused
[ERROR] [RFNOC::GRAPH] Caught exception while initializing graph:
EnvironmentError: IOError: recv error on socket: Connection refused
Error: RuntimeError: Failure to create rfnoc_graph.

Can anybody help me with this? Thanks.

Regards,

Francesco


[USRP-users] Using USRP X310 Front GPIO as SPI Interface (UHD Timed Commands)

2024-10-29 Thread hui cj
Hi everyone,

I built a project that demonstrates how to use the front-panel GPIO pins of
the USRP X310 to emulate an SPI interface. The SPI communication leverages
UHD timed commands and RFNoC blocks to drive GPIO signals at precise
intervals.
But performance is slow when I read GPIO values back. With the readback
section commented out (the commented lines inside the bit loop below),
the code can generate SCK and SDO at up to 2 MHz; with the GPIO read
enabled, the output clock slows down to about 100 Hz.

This is weird. Anyone know why?

The code is open source at https://github.com/cjhonlyone/uhd_gpio_spi.

int UhdGpioSpi::write_and_read(uint8_t* write_buffer, uint8_t* read_buffer,
                               uint32_t length) {
    double spi_period = 1.0 / spi_frequency;
    radio_ctrl->clear_command_time(0);
    uhd::time_spec_t time =
        radio_ctrl->get_time_now() + uhd::time_spec_t(0.05);
    time = time + uhd::time_spec_t(spi_period);
    radio_ctrl->set_command_time(time, 0);
    // Idle state: SDO low, SCS asserted (low), SCK at its CPOL idle level
    // (all-ones leaves SCK high when cpol == 1).
    radio_ctrl->set_gpio_attr("FP0", "OUT",
        (~SDO_MASK) & (~SCS_MASK) & (cpol ? 0xFFFFFFFF : (~SCK_MASK)));

    for (uint32_t i = 0; i < length; i++) {
        for (uint32_t j = 0; j < 8; j++) {
            // First half-period: present the data bit.
            time = time + uhd::time_spec_t(0.5 * spi_period);
            radio_ctrl->set_command_time(time, 0);
            if ((cpha & cpol) || ((!cpha) & (!cpol))) { // SCK = 0, SCS = 0
                radio_ctrl->set_gpio_attr("FP0", "OUT",
                    (write_buffer[i] & (1 << (7 - j)))
                        ? (~SCS_MASK) & (~SCK_MASK)
                        : ((~SCS_MASK) & (~SCK_MASK) & (~SDO_MASK)));
            } else { // SCK = 1, SCS = 0
                radio_ctrl->set_gpio_attr("FP0", "OUT",
                    (write_buffer[i] & (1 << (7 - j)))
                        ? (~SCS_MASK)
                        : (~SCS_MASK) & (~SDO_MASK));
            }

            // Second half-period: toggle SCK.
            time = time + uhd::time_spec_t(0.5 * spi_period);
            radio_ctrl->set_command_time(time, 0);
            if ((cpha & cpol) || ((!cpha) & (!cpol))) { // SCK = 1, SCS = 0
                radio_ctrl->set_gpio_attr("FP0", "OUT",
                    (write_buffer[i] & (1 << (7 - j)))
                        ? (~SCS_MASK)
                        : (~SCS_MASK) & (~SDO_MASK));
            } else { // SCK = 0, SCS = 0
                radio_ctrl->set_gpio_attr("FP0", "OUT",
                    (write_buffer[i] & (1 << (7 - j)))
                        ? (~SCS_MASK) & (~SCK_MASK)
                        : ((~SCS_MASK) & (~SCK_MASK) & (~SDO_MASK)));
            }

            // Readback section: enabling this slows the clock to ~100 Hz.
            //radio_ctrl->set_command_time(time, 0);
            //if (radio_ctrl->get_gpio_attr("FP0", "READBACK") & SDI_MASK)
            //    read_buffer[i] |= (1 << (7 - j));
            //if (j < 7) read_buffer[i] <<= 1;
        }

        // nop: return to idle between bytes.
        time = time + uhd::time_spec_t(0.5 * spi_period);
        radio_ctrl->set_command_time(time, 0);
        radio_ctrl->set_gpio_attr("FP0", "OUT",
            (~SDO_MASK) & (~SCS_MASK) & (cpol ? 0xFFFFFFFF : (~SCK_MASK)));
    }

    // Deassert SCS (drive it high) with SCK at its idle level.
    time = time + uhd::time_spec_t(spi_period);
    radio_ctrl->set_command_time(time, 0);
    radio_ctrl->set_gpio_attr("FP0", "OUT",
        SCS_MASK | (cpol ? SCK_MASK : 0x0));

    return 0;
}
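One plausible explanation for the slowdown: the timed `set_gpio_attr` writes are fire-and-forget commands queued against future timestamps, but each `get_gpio_attr("FP0", "READBACK")` is a blocking register read that must complete (a full host-device round trip, or a wait until the scheduled command time) before the loop can continue. A rough throughput model, where the per-call costs are assumed figures for illustration rather than measurements:

```python
# Rough model of the SPI loop above: each bit needs two timed GPIO
# writes, plus (optionally) one blocking readback per bit. The cost
# figures below are assumptions chosen to illustrate the effect.

def max_sck_hz(write_cost_s, readback_rtt_s, reads_per_bit):
    """Upper bound on SCK rate: two timed writes plus
    `reads_per_bit` blocking readbacks per clock cycle."""
    per_bit = 2 * write_cost_s + reads_per_bit * readback_rtt_s
    return 1.0 / per_bit

# Writes only: if queuing a timed command costs ~250 ns of host time,
# two writes per bit still allows about 2 MHz -- as observed.
print(round(max_sck_hz(250e-9, 0.0, 0)))     # 2000000

# One blocking readback per bit at an assumed 10 ms stall collapses
# the achievable clock to ~100 Hz -- matching the reported slowdown.
print(round(max_sck_hz(250e-9, 10e-3, 1)))   # 100
```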