Is there anything I can check to resolve this issue?
Vipin
Sent from Mail for Windows 10
From: Vipin Sharma
Sent: Saturday, July 29, 2017 10:13 PM
To: GNURadio Discussion List
Subject: ValueError: itemsize mismatch crash
Hi,
I have a few custom blocks in C in the top level GRC system. Everyt
Is that a new symptom, or have the "UHD Warning: ..." messages been there
from the beginning?
I'd like to ask you to copy&paste (not screenshot) the full console
output of running your flowgraph on your machine.
Thanks!
Marcus
On 08/02/2017 08:18 PM, Rui ZOU wrote:
> Most of the times, the transmission is
Most of the time the transmission is stable, but bad things have popped up
three times: an 'L' was printed once after running for a while, and the
following warning was shown twice.
UHD Warning:
x300_dac_ctrl: front-end sync failed. unexpected FIFO depth [0x7]
thread[thread-per-block[5]: ]: Run
I'd consider that good news, because that definitely means that your PC
is up to the task of supplying samples fast enough :)
Still, we're getting "L"s. So let's reduce the test case: Same USRP Sink
as you use here, but with a Null Source directly feeding it. Is that stable?
While we're at it
I'm really confused at this point. At no point in your testing should
Throttle be involved. So, please, can you do a test with:
Null Source
Probe Rate -> Message Debug
No UHD USRP Sink
No Throttle
and tell me a) how fast you were and b) how much CPU you used?
Thanks!
Marcus
On 08/02/2017 05:1
Throttle block is NEVER in use when USRP Sink is used.
USRP sink
My previous email shows the rate WITHOUT
WAIT! Throttle? I didn't see that in either of the flow graphs you sent
me first (twoparatx, onefile2tx)
Seriously?! Your GRC will even print a warning that you mustn't use
Throttle together with hardware if you have both Throttle and a USRP sink.
Remove the Throttle, and try again.
On 08/02/2
With the source changed to a Null Source, the rate is still around twice the
sample rate (390.625k) configured on the Throttle block.
*** MESSAGE DEBUG PRINT
(((rate_now . 781360) (rate_avg . 786529)))
When the Throttle block is bypassed, the rate jumps up to around 11.3 MS/s.
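For reference, the reported numbers can be sanity-checked with a few lines (a quick calculation on the figures quoted above, not part of the original thread):

```python
# Observed rates from the Message Debug output above.
throttle_target = 390.625e3   # sample rate configured on the Throttle block
rate_now = 781360.0           # rate reported with Throttle in the path
bypassed = 11.3e6             # rate reported with Throttle bypassed

# With Throttle in place, the reported rate is roughly 2x the target...
print(rate_now / throttle_target)   # ~2.0

# ...and bypassing Throttle lets the flowgraph run roughly 29x faster still.
print(bypassed / throttle_target)   # ~28.9
```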
Ok, there's something fishy here. That rate (without the USRP Sink) is
ridiculously low. Can you replace the file_source with a null_source?
That way, we can rule out storage as the bottleneck.
The probe_rate block does nothing but count how many items fly by and then
send a message at its output
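As a rough illustration of what Probe Rate does internally, here is a self-contained toy sketch (not the actual GNU Radio implementation — the real block lives in gr-blocks and emits its measurements as messages on a 'rate' port):

```python
import time

class RateProbe:
    """Toy item-rate estimator: count items, then periodically report an
    instantaneous rate and a smoothed average, like the
    (rate_now . ...) (rate_avg . ...) pairs printed above."""

    def __init__(self, alpha=0.15):
        self.alpha = alpha              # smoothing factor for the average
        self.count = 0
        self.rate_avg = 0.0
        self.last_t = time.monotonic()

    def work(self, nitems):
        # Called whenever items "fly by": just count them.
        self.count += nitems

    def report(self):
        # Called periodically: turn the count into items/second.
        now = time.monotonic()
        dt = now - self.last_t
        rate_now = self.count / dt if dt > 0 else 0.0
        # Single-pole (exponential moving) average of the instantaneous rate.
        self.rate_avg += self.alpha * (rate_now - self.rate_avg)
        self.count, self.last_t = 0, now
        return rate_now, self.rate_avg
```

In a real flowgraph you would of course just drop in the Probe Rate block and wire its message output to a Message Debug block, as described above.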
Not sure if the debug setup is as expected, since this is the first time I've
used the 'Probe Rate' and 'Message Debug' blocks, and their functions are not
very clear to me just from reading the contents under the documentation tab.
If there are other ways to learn about new blocks, please advise.
The rates I
Hi,
I've been having some difficulty getting reliable data flow from my USRP
X310 with a GRC flowgraph, so I'm trying out writing my system in C++ with
the UHD driver API. My first step has been to retrieve samples from the
X310, forward them to a UDP port and then pick them up with a GRC Socket
P
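The UDP leg of that pipeline can be sketched without any UHD code at all — here with a synthetic burst standing in for the X310's receive stream (the function name, port number, and sample format are illustrative assumptions; GRC's socket-based sources generally expect raw interleaved float32 I/Q on the wire for complex samples):

```python
import socket
import struct

def send_iq(samples, host="127.0.0.1", port=52001):
    """Forward complex samples to a UDP port as interleaved float32 I/Q
    (little-endian), one datagram per burst."""
    payload = b"".join(struct.pack("<ff", s.real, s.imag) for s in samples)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (host, port))
    sock.close()

# Stand-in for a recv() from a UHD rx_streamer: a short burst of samples.
burst = [complex(i, -i) for i in range(8)]
send_iq(burst)
```

In the real program the burst would come from the UHD receive call, and datagram sizes would need to stay under the network MTU to avoid fragmentation.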
Huh, I really don't know what's happening there :/ I sadly don't have the
USRP here to test this live right now, but there are absolutely no timed
commands involved¹
So, trying to weed out bugs:
* I've replaced the USRP sink with a "Probe Rate" block, connected to a
"Message Debug"'s print p