Marcus,
I really do not understand what you are trying to demonstrate. I started trying to use the simple GR blocks with the B200mini long ago, and found out at once that the overhead introduced by the GR blocks was limiting efficiency, so you have discovered "hot water," as we say in Italy. I was about to waste some more of my time following your indications, but to be frank, I am tired of reporting problems and receiving back questions instead of answers, and considerations more philosophical than technical. The point is that I bought from Ettus a device which promised, and published, a certain level of performance. I have to admit that all of it was true up to about a year ago, but not anymore. Why? What should I do to see my expectations satisfied? If a USB, Linux-based system is not able to sustain your products, what kind of conclusion do you think we are forced to draw?
nando

On 8/29/2020 20:51, Marcus D. Leech wrote:
On 08/29/2020 03:35 AM, Nando Pellegrini wrote:
Marcus,
Attached you can find the results of the benchmark test.
I have also compared the behavior with two different CPUs and different USB ports: USB 3.0 on the older tower PC, USB 3.1 on the laptop. Very strange is the case of the older CPU generating an overflow every minute. The conditions were exactly the same in all tests, with no other visible activity on the machines. Release 14.0 seems a bit better in the benchmark but, sadly, the two UHD releases are not comparable, because 14.0, as soon as it generates an overflow indication, drops into the timeout with no recovery. My final conclusion is that fast sample rates have become unusable for long signal recordings, regardless of software release and PC.
I really hope for a solution.
nando
I played a bit with a B210 on a Fedora 31 system today, and was unable to achieve greater than 37 Msps without overruns.

I constructed a "degenerate-case" GNU Radio flow graph that was just:

uhd-source-->null-sink

That's roughly equivalent to what benchmark_rate does, and I was forced to do that since F31 doesn't appear to package tools like benchmark_rate and some of the other UHD examples.
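For concreteness, here's a minimal sketch of that flow graph in GNU Radio Python (the class name is mine, and the sample rate is just an example; adjust both for your setup):

#!/usr/bin/env python3
# Minimal sketch of the degenerate-case flow graph above: a UHD source
# feeding a null sink, roughly what benchmark_rate exercises.
from gnuradio import gr, blocks, uhd

class UhdToNull(gr.top_block):
    def __init__(self, samp_rate=38e6):            # example rate; adjust
        gr.top_block.__init__(self, "uhd-source to null-sink")
        self.src = uhd.usrp_source(
            "",                                    # default device args
            uhd.stream_args(cpu_format="fc32", channels=[0]),
        )
        self.src.set_samp_rate(samp_rate)
        self.sink = blocks.null_sink(gr.sizeof_gr_complex)
        self.connect(self.src, self.sink)

if __name__ == "__main__":
    tb = UhdToNull()
    tb.run()    # watch the console for "O" (overrun) indications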

This was with UHD 3.14.0.1

The system was an AMD Phenom II X6 1090T.

What I noticed was that above 38 Msps you'd get continuous overruns, and at 38 Msps you'd get a burst of overruns whenever you switched to a new window. This is CLEARLY a system effect, unrelated to UHD at all: likely contention for memory access, interrupt latency, or PCIe transaction contention. The CPU consumption of the gr-uhd thread that was servicing the USB interface never rose above 38%. Now, the UHD transport code is single-threaded. It's tempting to suggest "why not make it multi-threaded?" That was tried, several times, a few years back, and performance was *worse* with the UHD transport spread over multiple threads, probably due to resource contention at the kernel interface.

I'll note that no matter whether I specified sc8, sc12, or sc16 sample sizes, I saw the same behavior. This indicates to me that it isn't USB *bandwidth* so much as the offered USB (and, by implication, PCIe) *TRANSACTION* load. It is likely that different USB3 controllers make this better or worse, depending on their interrupt behavior, how they do DMA, etc. I did have to use num_recv_frames > 200 to achieve even this.
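For anyone following along, num_recv_frames (and the sc8/sc12/sc16 wire format) goes in through the gr-uhd arguments; a hedged sketch, where 256 is purely illustrative and the best value is controller-dependent:

from gnuradio import uhd

# num_recv_frames is a UHD USB transport argument, passed via device args;
# tune the value for your controller -- 256 here is only an example.
src = uhd.usrp_source(
    "num_recv_frames=256",
    uhd.stream_args(cpu_format="fc32",
                    otw_format="sc16",    # wire format: sc8/sc12/sc16
                    channels=[0]),
)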

I'll make a general comment that achieving loss-free, "real-time", high-bandwidth streaming using a general-purpose operating system is always going to require a lot of tuning, and not a small amount of good luck. Other applications of high-speed streaming, like disk drives and network interfaces, are somewhat tolerant of one end saying "hey, stop sending for a bit". But when you're trying to sample the "real world", you cannot reasonably put it "on hold" while you "catch up", which is why throwing more buffering at this problem generally doesn't work that well. If the offered load exceeds long-term capacity, even by a tiny bit, you will end up "losing". It is clear that "capacity" is only loosely coupled to CPU performance, and is much better represented by overall *system* performance.
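To put rough (illustrative, not measured) numbers on that last point:

# Back-of-the-envelope: once offered load exceeds sustainable capacity,
# any fixed buffer only delays the first overrun -- it cannot prevent it.
offered   = 38.00e6    # samples/s arriving from the radio (illustrative)
sustained = 37.90e6    # samples/s the system can actually drain
buffer_sz = 64e6       # a very generous 64-Msample buffer
print(buffer_sz / (offered - sustained), "seconds until the buffer fills")
# -> 640 seconds of clean capture, then overruns forever after.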

Over the years, folks have pointed at UHD, hoping that some kind of performance-tuning exercise within UHD will get them the performance their application requires. UHD has been optimized quite a bit over the years (roughly 10 at this point). But UHD lives within an overall *system*, and it can only do as well as that *system* can provide.






