[USRP-users] UBX-160 LO filtering
Hi, On page 7 of the UBX-160 schematics https://files.ettus.com/schematics/ubx/UBX-160_revE.pdf there is a LO filter selection with three paths: - 800 MHz-2.2 GHz - passthrough (no filter) - lowpass 400 MHz..800 MHz Next to that, there is also a way to measure the LO on J3. Question: - Is this LO filter automatically selected, and where should I look for the rules ? - Is there some way to control this LO filter selection via UHD, similar to a tune request? - The IQ mixer ADL5371 documentation warns that the LO's 3rd harmonic needs to be well suppressed. Is there benefit to be gained by using an external, more filtered LO instead of the on-board UBX-160 LO plus its filters ? Mark-Jan ___ USRP-users mailing list -- usrp-users@lists.ettus.com To unsubscribe send an email to usrp-users-le...@lists.ettus.com
[USRP-users] Re: UBX-160 LO filtering
On Thu, Jan 26, 2023 at 05:14:34PM -0500, Marcus D. Leech wrote:
> On 26/01/2023 16:33, Mark-Jan Bastian wrote:
> > [original question quoted in full, trimmed]
> Is this a "just curious" type of question, or are you seeing issues that you
> believe may be due to the UBX LO and Mixer architecture?

When generating a complex +74 MHz and -73 MHz sine wave signal and adding them,
I see a spur of about -30 dB at -74 MHz (1 MHz away from the -73 MHz tone),
irrespective of LO frequency. Thus the spur might be caused by (remaining)
nonlinearity in the mixer or the subsequent stages.

I would like to bring the -30 dB spur down to a lower value, or at least find
out what its source is. The next thing to look at could be the DAC, which
supports rates up to 1600 MSPS but is only used at 200 MSPS. Perhaps there is
a way to modify the waveform at this step, similar to the way digital
predistortion can help improve spectral output purity?

Mark-Jan
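Whether odd-order nonlinearity can explain a spur at exactly -74 MHz is easy to check numerically. Below is a numpy sketch of my own (not from the thread; the cubic coefficient `a` is an arbitrary assumption) that places both tones on FFT bins and applies a memoryless cubic nonlinearity:

```python
import numpy as np

fs = 200e6            # DAC-facing sample rate mentioned in the thread
n = 2000              # 100 kHz bin spacing; both tones land on exact bins
f1, f2 = 74e6, -73e6
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t)

# Hypothetical memoryless cubic nonlinearity: y = x + a*x*|x|^2
a = 0.01
y = x + a * x * np.abs(x) ** 2

X = np.fft.fft(y) / n

def bin_mag(f_hz):
    """Magnitude of the FFT bin nearest f_hz."""
    return abs(X[int(round(f_hz / (fs / n))) % n])

# Third-order products land at 2*f1-f2 = 221 MHz (aliases to +21 MHz)
# and 2*f2-f1 = -220 MHz (aliases to -20 MHz) -- not at -74 MHz.
for f in (74e6, -73e6, 21e6, -20e6, -74e6):
    print(f"{f / 1e6:+7.1f} MHz: {bin_mag(f):.4f}")
```

Since the cubic model puts energy at the intermod frequencies rather than at -74 MHz, an image-frequency spur points away from simple odd-order baseband nonlinearity, consistent with the IQ-imbalance conclusion reached later in the thread.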
[USRP-users] Re: UBX-160 LO filtering
On Fri, Jan 27, 2023 at 02:37:08PM -0500, Marcus D. Leech wrote:
> On 27/01/2023 00:50, Mark-Jan Bastian wrote:
> > [previous message quoted in full, trimmed]
> Also, have you run these utilities:
> https://files.ettus.com/manual_archive/release_003_010_001_000/html/page_calibration.html

I'm quite sure the problem I see is caused by IQ imbalance. I can simulate it
with the respective GNU Radio block, with near-identical results: an
asymmetric spur, not symmetric around a generated signal, for a whopping 0.5
factor. I did run the calibration tools on my new system, for the RX and TX
chains, but will run them again (iterate). If that doesn't help I might modify
the source signal, and follow the signal path again.

Mark-Jan
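For reference, a gain-only IQ imbalance model reproduces the numbers: an image about -30 dB below the carrier corresponds to a gain ratio g with (1-g)/(1+g) ≈ 0.031, i.e. g ≈ 0.94 (about 0.5 dB of imbalance). A sketch with assumptions of my own (this is the textbook imbalance model, not the exact GNU Radio block):

```python
import numpy as np

fs, f0, n = 200e6, 74e6, 2000   # tone on an exact FFT bin (100 kHz spacing)
t = np.arange(n) / fs
theta = 2 * np.pi * f0 * t

g = 0.94        # Q-path gain relative to I-path (about 0.5 dB imbalance)
phi = 0.0       # no phase skew in this run
y = np.cos(theta) + 1j * g * np.sin(theta + phi)

X = np.fft.fft(y) / n
sig = abs(X[int(f0 / (fs / n))])           # wanted tone at +74 MHz
img = abs(X[-int(f0 / (fs / n)) % n])      # image at -74 MHz

# analytically the image/signal amplitude ratio is (1 - g) / (1 + g)
print(f"image rejection: {20 * np.log10(img / sig):.1f} dB")
```

The image appears only at the mirror of each tone, i.e. asymmetrically around DC rather than symmetrically around the signal, which matches the observed behaviour.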
[USRP-users] Re: Thanks for a decade of professional fun! (Or: so long, and see you around!)
Thanks Marcus!

Mark-Jan
Re: [USRP-users] B210 EMI in GPS band
Hi Dave,

You could try a shorter USB 3.0 cable; they can be as short as 0.2 meters. If
running from a laptop, run the laptop from battery - this removes a radiating
power cord and the DC/DC charger/switch-mode power supplies from your setup.

Use a second laptop with a dedicated spectrum analyser and RF probes to get an
idea of the sources and power levels, and the (sub)harmonics of the clocks
involved in your cabling setup, and determine how they might enter one of the
GNSS bands and how you are going to attenuate them.

Mark-Jan

On Sat, Jan 19, 2019 at 06:36:03PM +, d.des via USRP-users wrote:
> We've experienced some GPS degradation with the B210 running USB3 that
> was bad enough to kick us off a small test platform. The only way we've
> found to reduce the EMI to acceptable levels was to use a USB2 cable to
> force the controller down to the slower speed, and the bandwidth
> reduction makes for a less-useful test. We did a little chamber testing
> and putting the radio in a metal box didn't even help unless we put the
> connected antenna in the box with it. Can anyone share experience or
> ideas? I'm sorry if this has been covered on the list before, I follow
> it sporadically and don't know of a good way to search the archives.
>
> Thanks,
>
> Dave
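A quick way to triage candidate clocks is to list which of their harmonics fall inside a GNSS band. A small stdlib-only sketch (the band edges below are my own hypothetical +/-2 MHz windows around the GPS carrier frequencies, not an official emissions mask):

```python
# GNSS carrier centers with a hypothetical +/-2 MHz window around each
GNSS_BANDS = {
    "GPS L1": (1575.42e6 - 2e6, 1575.42e6 + 2e6),
    "GPS L2": (1227.60e6 - 2e6, 1227.60e6 + 2e6),
    "GPS L5": (1176.45e6 - 2e6, 1176.45e6 + 2e6),
}

def harmonics_in_bands(clock_hz, f_max=1.7e9):
    """List (band, harmonic order, frequency) hits for one clock."""
    hits = []
    for k in range(1, int(f_max / clock_hz) + 1):
        f = k * clock_hz
        for band, (lo, hi) in GNSS_BANDS.items():
            if lo <= f <= hi:
                hits.append((band, k, f))
    return hits

# e.g. a 25 MHz reference: its 63rd harmonic sits at 1575 MHz,
# within half a MHz of the GPS L1 carrier
for clock in (25e6, 19.2e6, 40e6):
    print(clock / 1e6, "MHz ->", harmonics_in_bands(clock))
```

Only clocks whose harmonics land in a band of interest then need to be hunted down with the probe.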
Re: [USRP-users] Recording the full X3x0 bandwidth
Hi Paul,

I can record from the X310 to an NVMe x4 PCIe disk at 800 MB/sec for a few
minutes. There is still a risk of 'O's (overflows) appearing.

The first thing to check is the number of PCIe lanes available to the disk
controller and disks, and how many and which PCIe bridges are in between on
your motherboard configuration. Try to avoid other traffic over these PCIe
bridges. Use lspci -vt for a tree view.

Then one can do benchmarking from DRAM to disk. Perhaps you will not need a
filesystem for your very simple storage purpose. You can ultimately just boot
from some other media (USB stick, or a CD-ROM loaded into a ramdisk) just to
make sure there is absolutely no need to read-access any other data on said
disks, via cached pages or otherwise.

Hiccups from system management mode or other unexpected driver interrupt
sources should be minimized. Other networking code and chatter might need to
be reduced, as should SMM-related thermal management events in the BIOS.
First tune everything for maximum performance, then optimize for very
constant write performance.

Mark-Jan

On Sat, Mar 09, 2019 at 12:32:05PM +0100, Paul Boven via USRP-users wrote:
> Hi,
>
> I'm trying to record the full X310 bandwidth, for a few hours, without any
> missed samples. Which of course is a bit of a challenge - does anyone here
> already achieve this?
>
> We're using a TwinRX, so initially I wanted to record 2x 100MS/s (from both
> channels), which amounts to 800MB/s, 6.4Gb/s. At first I tried uhd_rx_cfile,
> but have been unable to get it to a good state without showing an 'O' every
> few seconds at these speeds.
>
> As a recorder I have a SuperMicro 847 chassis with 36 disks (Seagate
> Ironwolf 8TB T8000VN0022, 7200rpm). In this particular server, the disks are
> connected through an 'expander' backplane, from a single HBA (LSI 3008). CPU
> is dual Xeon 4110, 2.1 GHz, 64 GB of RAM.
>
> At first I tried a 6 disk pool (raidz1), and eventually ended up creating a
> huge 36 disk ZFS stripe, which in theory should have no trouble with the
> throughput, but certainly kept dropping packets.
>
> Note that recording to /dev/shm/file works perfectly without dropping
> packets, until the point that the memory is full.
>
> Given that ZFS has quite a bit of (good) overhead to safeguard your data, I
> then switched to creating a mdadm raid-0 with 18 of the disks (Why not 36? I
> was really running out of time!)
>
> At that point I also found 'specrec' from gr-analyze, which seems more
> suitable. But, even after enlarging its circular buffer to the largest
> supported values, it would only average a write speed of about 300MB/s.
>
> In the end I had to settle for recording at only 50MS/s (200MB/s) from only
> a single channel, a far cry from the 2x 6.4Gb/s I'm ultimately looking to
> record. Although I did get more than an hour of perfect data out of it, over
> time the circular buffer did get fuller in bursts, and within 2 hours it
> exited due to exhausting the buffers. Restarting the application made it
> work like fresh again, with the same gradual decline in performance.
>
> Specrec, even when tweaking its settings, doesn't really take advantage of
> the large amount of memory in the server. As a next step, I'm thinking of
> adapting specrec to use much larger buffers, so that writes are at least in
> the range of MB to tens of MB. From earlier experiences, it is also
> important to flush your data to disk often, so the interruptions due to this
> are more frequent, but short enough to not cause receive buffers to
> overflow.
>
> In terms of network tuning, all recording was done with MTU 9000, with wmem
> and rmem at the recommended values. All recordings were done as interleaved
> shorts.
>
> Does anyone have hints or experiences to share?
>
> Regards, Paul Boven.
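The raw numbers in Paul's setup can be sanity-checked quickly. Interleaved shorts are 4 bytes per complex sample, so the sustained rate, the buffer needed to ride out a write stall, and the total capture size follow directly (a sketch of mine; the 100 ms stall duration is an assumed figure):

```python
def recording_budget(channels, samp_rate, bytes_per_sample=4,
                     stall_s=0.1, duration_s=2 * 3600):
    """Sustained rate, stall buffer and total capacity for a capture.

    bytes_per_sample=4 assumes interleaved shorts (sc16): 2-byte I plus
    2-byte Q, as used in the thread.
    """
    rate = channels * samp_rate * bytes_per_sample      # bytes/s
    return {
        "rate_MB_s": rate / 1e6,
        "stall_buffer_MB": rate * stall_s / 1e6,        # ride out one stall
        "total_TB": rate * duration_s / 1e12,
    }

# TwinRX case from the thread: 2 channels at 100 MS/s, for 2 hours
print(recording_budget(2, 100e6))
```

At 800 MB/s, even a 100 ms hiccup anywhere in the I/O path eats 80 MB of buffering, which is why single-digit-MB application buffers overflow so easily.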
Re: [USRP-users] Recording the full X3x0 bandwidth
Hi Joe,

With sudo lspci -vvv you will see the capabilities, including the low-level
PCIe bus speed and link-count negotiation of the devices. The sudo is needed
to get the low-level LnkCap and LnkCtl bits.

For a 16-lane video card on a laptop here, likely soldered right onto the
motherboard:

The PCIe capabilities: 8 GT/sec, max 16 lanes:

  LnkCap: Port #0, Speed 8GT/s, Width x16, ASPM L0s L1, Exit Latency L0s <1us, L1 <4us
          ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+

What the speed ended up being: 8 GT/sec, 16 lanes:

  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
          ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
  LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  DevCap2: Completion Timeout: Range AB, TimeoutDis+, LTR+, OBFF Via message
  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR+, OBFF Disabled

Below is a variant of the LnkCtl record, providing even more information,
down to the equalisation of the SERDES link used by this PCIe device
(equalisation is the analog RF signal processing that overcomes the losses of
routing the signal over the motherboard and the connectors):

  LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
           Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
           Compliance De-emphasis: -6dB
  LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
           EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-

The above could be regarded as a 'depth first' approach for those who would
like to stay purely software-oriented. I like to treat a PC in such a
scenario as an embedded system: first get the power supplies, clocks and
other hardware right, design the clock domains and data rates, then gradually
move up the software/control/kernel/driver stack to verify for anomalies that
could trigger such intermittent problems.

Mark-Jan

On Sat, Mar 09, 2019 at 10:40:39AM -0700, Joe Martin wrote:
> Hi Mark,
>
> I am intrigued by your response and have obtained a tree view for my system
> as you suggested to Paul. I'm unfamiliar with the tree view and don't
> understand how to check the number of PCIe lanes that are available to the
> disk controller and disks, and how to check how many PCIe bridges are in
> between on my motherboard configuration.
>
> I have a screenshot of the tree view showing my 10G ethernet connection (but
> it is 220KB in size so I didn't attach it here) but I am not familiar with
> how to determine what you asked about from the tree and what to do about the
> configuration in any case. Is the configuration fixed and not changeable, in
> any case?
>
> If so, then perhaps your alternative suggestion regarding booting from a USB
> stick into a ramdisk is a viable route? I'm unfortunately not familiar with
> the details of how to do that so perhaps a couple of brief comments about
> implementing that process would help me understand better if that's the only
> viable alternative to pursue given the present hardware configuration?
>
> Joe
>
> > [earlier message quoted in full, trimmed]
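Those LnkSta lines can be turned into numbers directly: per-lane throughput is the raw gigatransfer rate times the line-coding efficiency (8b/10b below 8 GT/s, 128b/130b from 8 GT/s up). A parsing sketch of my own (the regex matches the lspci output format shown above):

```python
import re

# line-coding efficiency per PCIe signalling rate
ENCODING = {2.5: 8 / 10, 5.0: 8 / 10, 8.0: 128 / 130, 16.0: 128 / 130}

def link_bandwidth(lnksta_line):
    """Return (speed GT/s, width, payload bytes/s) from an lspci LnkSta line."""
    m = re.search(r"Speed\s+([\d.]+)GT/s,\s+Width\s+x(\d+)", lnksta_line)
    speed = float(m.group(1))
    width = int(m.group(2))
    bytes_per_s = speed * 1e9 * ENCODING[speed] / 8 * width
    return speed, width, bytes_per_s

line = "LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+"
speed, width, bw = link_bandwidth(line)
print(f"x{width} link at {speed} GT/s: {bw / 1e9:.2f} GB/s raw payload")
```

Note this is raw payload capacity before TLP/DLLP protocol overhead; achievable DMA throughput is lower, but a negotiated width or speed below LnkCap is the first thing this arithmetic exposes.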
Re: [USRP-users] [EXTERNAL] Re: Recording the full X3x0 bandwidth
On Mon, Mar 11, 2019 at 08:40:49PM +, Minutolo, Lorenzo (389I) wrote:
> Hi All,
>
> We're using the USRP x3x0 for cosmology and many other applications: we
> use them to read out our cryogenic detectors.
>
> Do you need the full spectral bandwidth of the device? Things get much
> easier if you decimate the signal before storing it to disk. The system
> we realized uses a GPU to decimate USRP data streams before saving them
> to disk.

While this looks like a workaround, decimating a signal to reduce its data
rate loses information - at least half of it - so you would need more
X300/X310s and have them phase-coherently record their inputs to disk.

If some throttling over time is introduced when exercising continuous disk
I/O bandwidth (via thermal events on PCIe controllers/bridges or otherwise),
but the networking-to-GPU bandwidth path is fine, then perhaps try to store
data from DRAM away via the GPU's PCIe controller. A separate GPU is likely
connected to the PCIe interconnect directly attached to the CPU, not
indirectly via DMI to the chipset, which can also present PCIe interconnects
originating from the same PCIe root complex.

In the past there have been video cards that host a 16x16x16 PCIe bridge
(e.g. IDT, or Broadcom) to accommodate two 16-lane PCIe GPUs, each with their
own GDDR5 memory. There have also been video cards that embedded direct-path
storage functionality to maximize GPU-to-NAND-flash I/O rates, although I
haven't seen ones which use a lot of SATA or other spinning-media-compatible
storage.

Using a storage controller directly on the CPU-attached PCIe slot, and
introducing your own PCIe bridge that does nothing but monitor the available
bandwidth over time, might be another thing to try, using a riser card that
features an IDT/Broadcom PCIe bridge.

National Instruments, the current parent company of Ettus, also has PXI
modules dedicated to RF recording. Perhaps this is also something to explore.

Mark-Jan

> Check it out (https://arxiv.org/abs/1812.02200), it should be open source
> soon.
>
> [1812.02200] A flexible GPU-accelerated radio-frequency readout for
> superconducting detectors <https://arxiv.org/abs/1812.02200> arxiv.org
> We have developed a flexible radio-frequency readout system suitable for a
> variety of superconducting detectors commonly used in millimeter and
> submillimeter astrophysics, including Kinetic Inductance Detectors (KIDs),
> Thermal KID bolometers (TKIDs), and Quantum Capacitance Detectors (QCDs).
> Our system avoids custom FPGA-based readouts and instead uses commercially
> available software radio hardware for ADC/DAC and a GPU to handle real-time
> signal processing. Because this system is written in common C++/CUDA, the
> range of different algorithms that can be quickly implemented makes it
> suitable for the readout of many other cryogenic detectors and for the
> testing of different and possibly more effective data acquisition schemes.
>
> Lorenzo
>
> [forwarded thread quoted in full, trimmed]
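The information-loss point is easy to demonstrate: decimating without (or with an imperfect) anti-alias filter folds out-of-band energy back in-band. A numpy sketch of my own, keeping the tone on an exact FFT bin so the alias lands exactly where predicted:

```python
import numpy as np

fs, n = 200e6, 2000                # 100 kHz bin spacing
t = np.arange(n) / fs
x = np.exp(2j * np.pi * 60e6 * t)  # a tone at +60 MHz

d = 4                              # naive decimation: keep every 4th sample
xd = x[::d]                        # new rate 50 MS/s, Nyquist span +/-25 MHz

Xd = np.fft.fft(xd) / len(xd)
f_alias = np.fft.fftfreq(len(xd), d / fs)[np.argmax(np.abs(Xd))]
print(f"60 MHz tone appears at {f_alias / 1e6:.1f} MHz after /{d} decimation")
```

Only a proper anti-alias filter ahead of the decimator prevents this folding, and then you genuinely keep just fs/d of the original bandwidth, which is the trade-off being discussed.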
Re: [USRP-users] [EXTERNAL] Re: Recording the full X3x0 bandwidth
On Tue, Mar 12, 2019 at 09:10:02AM +0100, Mark-Jan Bastian wrote:
> In the past there have been video cards that host a 16x16x16 PCIe bridge
> (e.g. IDT, or Broadcom) to accommodate two 16-lane PCIe GPUs, each with
> their own GDDR5 memory.

Note that there is also Microsemi (a Microchip company) that markets PCIe
bridges dedicated to storage solutions. This seems a bit odd, since PCIe
transports just a lot of small packets with memory transactions, plus atomic
memory transactions and interrupts.

> There have also been video cards that embedded direct-path storage
> functionality to maximize GPU-to-NAND-flash I/O rates, although I haven't
> seen ones which use a lot of SATA or other spinning-media-compatible
> storage.

An example is the AMD Radeon Pro SSG 2TB (from 2016?), which uses 4 onboard
NVMe slots, each with a 512 GB Samsung SM961 using presumably 4 PCIe lanes
each. It supports OpenCL and has a special API to access its storage.
https://www.amd.com/Documents/ssg-api-user-manual.pdf

Today 2TB M.2 SSDs are also available (Samsung 860 EVO 2TB); times 4 would
mean up to 8 TB of GPU-attached storage (if compatible). That should
translate to about 20 minutes of recording at an aggregate 6400 MB/sec
(4 x 1600 MB/sec), or over an hour at a single drive's 1600 MB/sec.

Mark-Jan
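The record-time arithmetic is worth writing down once (a sketch of mine; the drive count and the ~1600 MB/s per-drive sustained rate are the assumptions made above):

```python
def record_minutes(capacity_bytes, rate_bytes_per_s):
    """How long a store of given capacity lasts at a sustained write rate."""
    return capacity_bytes / rate_bytes_per_s / 60

# 4 x 2 TB M.2 drives, each assumed to sustain ~1600 MB/s over x4 NVMe
total = 4 * 2e12
print(round(record_minutes(total, 4 * 1.6e9)))   # all four writing in parallel
print(round(record_minutes(total, 1.6e9)))       # throttled to one drive's rate
```

So the "20 minutes" figure only holds if all four drives stripe at full rate simultaneously; at a single-drive 1600 MB/s the same 8 TB lasts roughly 83 minutes.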
Re: [USRP-users] Plagued by sporadic TX underruns on some systems, B200
Hi Matthias,

On Thu, May 02, 2019 at 02:18:38PM +0200, Matthias Brändli via USRP-users wrote:
> On some systems, we observe sporadic underruns. They occur in irregular
> intervals, something like once a minute, sometimes rarer. Seen with both
> USB2 and USB3 hosts, over several UHD versions. Until now we've mostly
> been able to avoid the issue by selecting machines that don't show the
> issue, but we hope we can find a mitigation in software.

The USB bus is managed by a USB host controller that uses bus mastering /
scatter-gather over PCI/PCIe to retrieve and store its data. The USB 2.0
EHCI host controller(s) are more straightforward than the xHCI host
controller used by USB 3, which has to deal with more complexity. If
480 Mbps is more than enough for your stream, USB 2.0 could satisfy your
bandwidth demands: 2.048 MSPS is about 8 MB/sec over a USB 2.0 bulk
endpoint, using transactions of two times 512 bytes every 125 usec (the
8000 Hz USB microframe rate). You could even see these frames on a
moderately fast scope (500+ MHz), using a low-capacitance
FET/active/differential probe, and trigger on too many of them or a lack of
them.

Another option is using a powered USB 2.0 or USB 3.0 hub. Since USB hubs do
not receive firmware updates, they should be very stable. If there is a
problem with USB enumeration timing/reset signalling or on the 5 Gbps lanes,
a hub might resolve an incompatibility in one of the USB peers.

For more in-depth USB debugging, there are external USB 2.0 and 3.x hardware
bus analysers available, for example from the Swiss company ellisys.com.
These can log all USB traffic to a PC and filter the data, without
interfering with the timing of the signals - ideal tools for USB device
development and for debugging compatibility issues. For more in-depth PCIe
transaction debugging of the USB host controllers, serialtek.com also sells
PCIe 'AIC' hardware connector interposer cards that you can put between your
motherboard and an external PCIe xHCI-based USB host controller.
> Both the data source the modulator connects to and the USRP have a
> disciplined time reference, implying there is no rate drift (plus, that
> would trigger regular underruns, not sporadic ones).

Some things can be very subtle, for example this (quite amazing) 2013
ethernet PHY bug:
https://www.theregister.co.uk/2013/02/06/packet_of_death_intel_ethernet/
Very tough engineering by the affected customer, who made it reproducible;
it was then fixed by a simple EEPROM update from the vendor.

Pure theory: what if this intermittent issue were a problem with a certain
sequence of packet lengths, some of them on the boundary of the maximum size
for that endpoint and endpoint type, that are not handled properly at one or
both sides, causing a glitch/retransmission/failure, resulting in the
application-level visible underruns? How would you measure that? How could
you optimize the packet lengths so that the issue is quicker to reproduce?
Or avoid the issue by anticipating and avoiding such packet sequences?

> Could CPU frequency scaling lead to interruptions?

I would leave that off - less interference from thermal peak management or
other system management mode interrupts from your motherboard, and the most
constant performance in outputting your samples.

> [4] https://files.ettus.com/manual/page_transport.html#transport_usb
>
> Default send_frame_size seems to be on line 85 of
> https://files.ettus.com/manual/b200__impl_8hpp_source.html

Best regards,

Mark-Jan
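The USB 2.0 budget mentioned above works out neatly per microframe. A sketch of mine, with interleaved shorts at 4 bytes per complex sample (the 2.048 MSPS figure is taken from the message):

```python
SAMP_RATE = 2.048e6        # complex samples/s, as in the message
BYTES_PER_SAMPLE = 4       # sc16: 2-byte I + 2-byte Q
MICROFRAME_S = 125e-6      # USB 2.0 high-speed microframe (8000 Hz)
BULK_PACKET = 512          # max high-speed bulk packet payload, bytes

rate = SAMP_RATE * BYTES_PER_SAMPLE            # bytes/s on the wire
per_uframe = rate * MICROFRAME_S               # bytes needed per 125 us
packets = per_uframe / BULK_PACKET

print(f"{rate / 1e6:.3f} MB/s -> {per_uframe:.0f} B/microframe "
      f"= {packets:.0f} x {BULK_PACKET}-byte bulk packets")
```

Two full-size bulk packets per microframe is a tiny fraction of the ~7500 bytes a high-speed microframe can carry, which is why the stream fits comfortably on USB 2.0 as long as the packets are scheduled on time.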
Re: [USRP-users] Plagued by sporadic TX underruns on some systems, B200
On Mon, May 06, 2019 at 09:36:34AM +0200, Matthias Brändli wrote:
> > Pure theory: What if this intermittent issue would be an issue with a
> > certain sequence of packet lengths [...] resulting in the
> > application-level visible underruns?
>
> It's a possible hypothesis, but I don't see how to test it easily.

The Ellisys protocol analyser supports an external trigger, can record the
USB enumeration, and can then pause and record into a circular buffer for an
indefinite time. The trick is then to wait for and detect your underrun in
your UHD application, and send a character to a USB serial port, connected
via another USB host controller/root port, that is wired to this coaxial
trigger input. Configure the trigger to stop capturing, and analyse the USB
traffic around the underrun. This timestamping is not perfect, but you
should be within a few tens of milliseconds of the underrun.

If a USB bulk endpoint underrun is visible in the Ellisys log, other ways of
filling that endpoint in time could be tried: in the host controller driver,
libusb, UHD, or your application. Scheduling and prioritisation of all
software components can be double-checked.

Mark-Jan