On 2/9/23 13:23, Jason Roehm wrote:

On 2/9/23 10:58, Marcus D. Leech wrote:
On 09/02/2023 07:43, Jason Roehm wrote:
I have an X410 device that I'm trying to update for use with UHD v4.4.0. It hasn't been used in a while, so the software on it is at least several releases old.

I followed the procedure in the X410 manual (https://files.ettus.com/manual/page_usrp_x4xx.html#x4xx_updating_filesystems_mender) to update the filesystem. After doing so, the unit boots as expected and it looks like it's using the correct filesystem; /etc/mender/artifact_info contains:

    artifact_name=v4.4.0.0_x4xx
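
For reference, the Mender steps from that page boil down to roughly the following (the artifact filename here is a stand-in for the one I actually downloaded, and the exact mender syntax can vary with the client version):

    root@ni-x4xx-XXXXXX:~# mender install /tmp/usrp_x4xx_fs.mender
    root@ni-x4xx-XXXXXX:~# reboot
    # after confirming the new filesystem boots, make it permanent:
    root@ni-x4xx-XXXXXX:~# mender commit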

However, UHD-based applications aren't able to communicate with the device. uhd_find_devices does partially work; when I run it on the X410 itself, it returns:

    root@ni-x4xx-XXXXXX:~# uhd_find_devices
    [INFO] [UHD] linux; GNU C++ version 9.2.0; Boost_107100; UHD_4.4.0.0-0-g5fac246b
    --------------------------------------------------
    -- UHD Device 0
    --------------------------------------------------
    Device Address:
        serial:
        claimed: False
        fpga:
        mgmt_addr: 127.0.0.1
        name:
        product:
        reachable: No
        type:

However, uhd_usrp_probe fails:

    root@ni-x4xx-XXXXXX:~# uhd_usrp_probe --args addr=127.0.0.1,type=x4xx
    [INFO] [UHD] linux; GNU C++ version 9.2.0; Boost_107100; UHD_4.4.0.0-0-g5fac246b
    Error: LookupError: KeyError: No devices found for ----->
    Device Address:
        addr: 127.0.0.1
        type: x4xx

I thought I might need to reload the FPGA image, so I tried to load the CG_400 image and got the following:

    root@ni-x4xx-XXXXXX:~# uhd_image_loader --args mgmt_addr=127.0.0.1,type=x4xx,fpga=CG_400
    [INFO] [UHD] linux; GNU C++ version 9.2.0; Boost_107100; UHD_4.4.0.0-0-g5fac246b
    No applicable UHD devices found
    [ERROR] [MPMD IMAGE LOADER] mpmd_image_loader only supports a single device.

I see the same results if I run uhd_usrp_probe from an external host using UHD v4.4.0. Since the uhd_image_loader output suggested an issue with MPM, I poked around and found that the usrp-hwd systemd unit (the MPM hardware daemon) is failing to finish startup. Its log output contains:

    root@ni-x4xx-XXXXXX:~# journalctl -u usrp-hwd -f
    Feb 09 12:37:33 ni-x4xx-XXXXXX systemd[1]: Starting USRP Hardware Daemon (MPM)...
    Feb 09 12:37:35 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.main] [INFO] Launching USRP/MPM, version: 4.4.0.0-g5fac246b
    Feb 09 12:37:35 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.main] [INFO] Spawning RPC process...
    Feb 09 12:37:35 ni-x4xx-XXXXXX systemd[1]: usrp-hwd.service: Supervising process 872 which is not our child. We'll most likely not notice when it exits.
    Feb 09 12:37:35 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.main] [INFO] Spawning discovery process...
    Feb 09 12:37:35 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.main] [INFO] Processes launched. Registering signal handlers.
    Feb 09 12:37:35 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.PeriphManager] [INFO] Device serial number: YYYYYYY
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      metal_uio_dev_open: No IRQ for device 1000100000.usp_rf_data_converter.
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      metal_uio_dev_open: No IRQ for device 1000100000.usp_rf_data_converter.
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      metal_uio_dev_open: No IRQ for device 1000100000.usp_rf_data_converter.
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      metal_uio_dev_open: No IRQ for device 1000100000.usp_rf_data_converter.
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Requested block not available in XRFdc_SetThresholdClrMode
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: DTC Scan T1
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC0: 000000000000000000000000000111122222000000000000000000000000000*#000000000000000000000000001111322222000000000000000000000000000
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC2: 000000000000000000000000000000000000111122222000000000000000000#00000000*0000000000000000000000000001111222220000000000000000000
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC0: Marker: - 76, 0
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC2: Marker: - 76, 4
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      SysRef period in terms of ADC T1s = 1152
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC target latency = 1228
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: DTC Scan T1
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC0: 0000000000000000000000000000000000011112222200000000000000000000#000000*00000000000000000000000000001111222220000000000000000000
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC1: 00000000000000000000000000000000000000011112222000000000000000000000000#000*0000000000000000000000000000111132222000000000000000
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC0: Marker: - 51, 0
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC1: Marker: - 51, 0
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      SysRef period in terms of DAC T1s = 2304
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC target latency = 800
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:39 ni-x4xx-XXXXXX usrp_hwd.py[868]: Error : DAC alignment target latency of 816 < minimum possible 816
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: DTC Scan T1
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC0: 0000000000000000000000000000111122222000000000000000000000000000*000000000000000000000000000111132222000000000000000000000000000
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC2: 0000000000000000000000000000000000001111222220000000000000000000#0000000*0000000000000000000000000000111122222000000000000000000
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC0: Marker: - 76, 0
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC2: Marker: - 76, 4
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      SysRef period in terms of ADC T1s = 1152
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      ADC target latency = 1228
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: DTC Scan T1
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC0: 0000000000000000000000000000000000011113222220000000000000000000#0000000*0000000000000000000000000000111122222000000000000000000
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC1: 000000000000000000000000000000000000000111132222000000000000000000000000#000*000000000000000000000000000011112222200000000000000
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC0: Marker: - 51, 0
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC1: Marker: - 51, 0
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      SysRef period in terms of DAC T1s = 2304
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: info:      DAC target latency = 800
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: metal: error:
    Feb 09 12:37:40 ni-x4xx-XXXXXX usrp_hwd.py[868]: Error : DAC alignment target latency of 816 < minimum possible 816
    Feb 09 12:39:03 ni-x4xx-XXXXXX systemd[1]: usrp-hwd.service: start operation timed out. Terminating.
    Feb 09 12:39:03 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.kill] [INFO] Terminating pid: 872
    Feb 09 12:39:03 ni-x4xx-XXXXXX usrp_hwd.py[868]: [MPM.kill] [INFO] Terminating pid: 873
    Feb 09 12:39:03 ni-x4xx-XXXXXX systemd[1]: usrp-hwd.service: Killing process 872 (n/a) with signal SIGKILL.
    Feb 09 12:39:03 ni-x4xx-XXXXXX systemd[1]: usrp-hwd.service: Failed with result 'timeout'.
    Feb 09 12:39:03 ni-x4xx-XXXXXX systemd[1]: Failed to start USRP Hardware Daemon (MPM).
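
For anyone retracing this, I was poking at the service with the standard systemd tooling, along these lines (output omitted):

    root@ni-x4xx-XXXXXX:~# systemctl status usrp-hwd
    root@ni-x4xx-XXXXXX:~# journalctl -u usrp-hwd -b --no-pager
    root@ni-x4xx-XXXXXX:~# systemctl restart usrp-hwd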

Anyone have any guidance on how I can resolve the MPM issue? I am fine with losing any state that is on the device in order to get it to work.

Thanks.

Jason
You could try making a fresh copy of the official SD card image, instead of using Mender, particularly given that you don't have any "state" on the device that you care about.
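
If I remember the manual's procedure correctly, that amounts to exposing the eMMC to a host from U-Boot and writing the official sdimg to it; a rough sketch (the host-side /dev/sdX node is an assumption, double-check with lsblk before writing):

    # On the X410 serial console, stop at the U-Boot prompt, then:
    => ums 0 mmc 0
    # On the host, write the official image to whatever device node appears
    # (this overwrites that device; verify the node first):
    $ sudo dd if=usrp_x4xx_fs.sdimg of=/dev/sdX bs=4M conv=fsync status=progress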

I reflashed the eMMC with the v4.4.0 sdimg file and am seeing the same results, with MPM not starting. I also noticed the following series of errors in dmesg, with the same sequence repeating every 90 seconds:

    [ 1648.867887] nixge 10000a4000.ethernet int0: Link is Down
    [ 1648.901030] fpga_manager fpga0: writing x410.bin to Xilinx ZynqMP FPGA Manager
    [ 1649.141827] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /fpga-full/firmware-name
    [ 1649.152371] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_clk_1
    [ 1649.162405] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_clk_2
    [ 1649.172427] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/rf_data_converter
    [ 1649.183059] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/rfdc_regs
    [ 1649.192999] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_clk_3
    [ 1649.203024] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/nixge_internal
    [ 1649.213396] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/nixge0
    [ 1649.223081] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/nixge0_1
    [ 1649.232934] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/nixge0_2
    [ 1649.242785] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/nixge0_3
    [ 1649.252637] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_enet_regs_0
    [ 1649.263188] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_enet_regs_0_1
    [ 1649.273912] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_enet_regs_0_2
    [ 1649.284638] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /__symbols__/misc_enet_regs_0_3
    [ 1649.313566] nixge 10000a4000.ethernet (unnamed net_device) (uninitialized): fixed link full duplex 1000Mbps not recognised
    [ 1649.328459] nixge 10000a4000.ethernet int0: renamed from eth1
    [ 1649.349974] nixge 1200000000.ethernet sfp0: renamed from eth2
    [ 1649.366658] nixge 1200030000.ethernet sfp0_3: renamed from eth4
    [ 1649.377408] nixge 1200010000.ethernet sfp0_1: renamed from eth1
    [ 1649.395996] nixge 1200020000.ethernet sfp0_2: renamed from eth3
    [ 1649.407697] nixge 10000a4000.ethernet int0: configuring for fixed/internal link mode
    [ 1649.415533] nixge 10000a4000.ethernet int0: Link is Up - 1Gbps/Full - flow control off
    [ 1649.418966] nixge 1200000000.ethernet sfp0: configuring for fixed/xgmii link mode
    [ 1649.462114] nixge 1200030000.ethernet sfp0_3: configuring for fixed/xgmii link mode
    [ 1649.471452] nixge 1200010000.ethernet sfp0_1: configuring for fixed/xgmii link mode
    [ 1649.480753] nixge 1200020000.ethernet sfp0_2: configuring for fixed/xgmii link mode
    [ 1655.036525] audit: type=1701 audit(1675966876.240:44): auid=4294967295 uid=0 gid=0 ses=4294967295 subj=kernel pid=1382 comm="python3" exe="/usr/bin/python3.7" sig=7 res=1
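
(The 90-second repetition is easiest to confirm by following the kernel log live; assuming the on-device dmesg supports follow mode, something like:)

    root@ni-x4xx-XXXXXX:~# dmesg -wT
    # or, equivalently, via the journal:
    root@ni-x4xx-XXXXXX:~# journalctl -k -f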

Jason

Disregard, I neglected to do a full power cycle after updating the eMMC. After doing so, MPM did start up successfully. Sorry for the noise.
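
For anyone who hits the same thing: a warm reboot after writing the eMMC was not enough in my case; the unit needed a clean shutdown followed by physically removing and reapplying power, i.e. roughly:

    root@ni-x4xx-XXXXXX:~# poweroff
    # ...then disconnect the power supply, reconnect it, and boot normally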

Jason