Hi all,

We are using an X310 to generate a continuous (lasting non-stop for days
or even months) and controllable output stream. I have some questions
about synchronizing the output with an external/physical time, and in
order to avoid the XY problem let me describe what we want to achieve,
as well as the constraints, before talking about the specific problem I
want to solve in the current implementation. Suggestions on a different
approach are welcome.

The software can receive external (timed) commands and will need to
change the output at a particular time with sub-ms precision. We first
tried to use the back pressure from `tx_stream::send`, but it seems
that the output is buffered either in software or in hardware, and new
data we push only shows up in the output hundreds of ms later, which is
definitely not good enough for us. Understandably, this backpressure
isn't very repeatable (not to the ms), so we can't simply compensate
for it with a fixed delay either (plus the hundreds of ms of latency is
too much for our rep-rate).

To solve this problem, we tried to assign a (virtual) timestamp to each
block of data we push: by initializing the time on the board with
`set_time_unknown_pps`, starting a timed burst, and continuously
correcting for timing drift by calling `get_time_now`, we can assign a
very accurate (~us) time to each output sample and use that to
throttle the data ourselves. This way, we should be able to achieve a
latency of a few ms and a timing accuracy of a few us. (This claim is
not yet tested, so feel free to point out if it doesn't make sense.)
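For concreteness, here is a minimal sketch of the host-side bookkeeping
described above (the arithmetic only, no hardware calls). The UHD names
mentioned in the comments (`set_time_unknown_pps`, `get_time_now`, the
timed start-of-burst via `tx_metadata_t`) are the standard multi_usrp
API; the function names, target lead, and sample rate below are my own
illustrative placeholders, not anything from UHD:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Device time (s) at which sample index `n` of the burst leaves the DAC,
// assuming the burst was started with a timed start_of_burst at
// `burst_start` seconds of device time (tx_metadata_t::time_spec with
// has_time_spec = start_of_burst = true) and the output has been truly
// continuous since then.
double expected_output_time(double burst_start, double sample_rate,
                            std::uint64_t n)
{
    return burst_start + static_cast<double>(n) / sample_rate;
}

// How long (s) the producer thread should sleep before pushing more
// data, so that only ~`target_lead` seconds of samples sit buffered
// ahead of the device clock.  `device_now` would come from
// get_time_now(); keeping the lead small is what bounds the latency.
double throttle_delay(double burst_start, double sample_rate,
                      std::uint64_t samples_sent, double device_now,
                      double target_lead)
{
    const double lead =
        expected_output_time(burst_start, sample_rate, samples_sent)
        - device_now;
    return lead > target_lead ? lead - target_lead : 0.0;
}
```

With, say, a 5 ms target lead, the achievable latency is roughly that
lead plus the transport buffering, which is where the "few ms" estimate
comes from.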

However, the approach above only works if we can predict the USRP time
(from `get_time_now`) at which a particular sample will be output. This
works if the output is truly continuous, but it is very hard to avoid
infrequent buffer underruns, which inevitably delay the time at which a
sample is output. This should happen very infrequently (maybe a few
times every minute), but since we'd like to run for a very long time,
it could become an issue as the error accumulates.

A few ways I can think of to solve this, each with its own problems:

1. Periodically stop the burst and start a new timed burst with a
small time gap in between. If we do this frequently enough, we should
be able to absorb the missing time. There are some points in the
continuous run where it is kind of OK to do this, but we'd still like
to avoid it as much as possible since it makes the whole system less
predictable.
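The re-anchoring in option 1 can be sketched as follows. The UHD
mechanics referenced in the comments (ending a burst with
`end_of_burst`, starting the next one with a timed `start_of_burst`)
are standard; the struct, function name, and guard value are
illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Per-burst state used to map sample indices to device time.
struct BurstState {
    double start_time;          // device time (s) of sample 0 of the burst
    std::uint64_t samples_sent; // samples pushed in this burst so far
};

// Re-anchor the stream: in UHD terms, send a packet with
// tx_metadata_t::end_of_burst = true, then begin the next send with
// has_time_spec = start_of_burst = true and time_spec = device_now +
// guard.  After this, index-to-time predictions are exact again
// regardless of any underruns in the previous burst.
BurstState restart_burst(double device_now /* from get_time_now() */,
                         double guard /* e.g. 5e-3 s, illustrative */)
{
    return BurstState{device_now + guard, 0};
}
```

The guard must be large enough that the first packet of the new burst
reaches the device before its scheduled start time, or the new burst
will itself be late.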

2. Obtain the number of output/missing samples at the current time.
This would essentially remove the need to assume the output is always
continuous. However, I can't find a function to do this. Modifying the
FPGA image and adding a user register for this would be an acceptable
solution, and a pointer to roughly how that can be done would be very
helpful.
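To show why such a readback would suffice, here is the correction we
would compute on the host. Note that `samples_consumed` is entirely
hypothetical (it would come from the proposed custom FPGA register;
UHD does not expose it), and the function name is mine:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// If the output had been continuous, the device would have consumed
// (device_now - burst_start) * sample_rate samples by now; any
// shortfall is time lost to underruns, and it shifts every future
// sample by the same amount.  `device_now` is from get_time_now();
// `samples_consumed` is the hypothetical register readback taken at
// the same moment.
double corrected_output_time(double burst_start, double sample_rate,
                             double device_now,
                             std::uint64_t samples_consumed,
                             std::uint64_t n /* future sample index */)
{
    const double expected = (device_now - burst_start) * sample_rate;
    const double missing = expected - static_cast<double>(samples_consumed);
    return burst_start + (static_cast<double>(n) + missing) / sample_rate;
}
```

This assumes no further underruns between the readback and sample `n`,
so it would still be re-read periodically, but it removes the unbounded
error accumulation.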

Thanks,

Yichao

_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com