Hello,

I'm developing a UHD C++ application to test the limits of the scheduling
granularity I can achieve with an X310 on my host system. The host is a Linux
box with dual Xeon Gold processors and a 10 Gigabit Ethernet connection to the
X310. The X310 is clocked at 100 MSPS.


The premise is to simulate packets arriving that tell the USRP to transmit or
receive in the near future. Each packet has a reception/transmission time and
some switching time; during the switching time the USRP does not need to
receive or transmit (it can, but the data doesn't matter). I assume packets
arrive in batches, and all packets in a batch are scheduled as fast as possible
once the batch arrives. After each batch there is a long delay during which I
spin-wait to simulate waiting for the next batch to arrive.
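
For concreteness, the slot timing I'm after works out roughly like this
(sketch only; SLOT_SECS, SWITCH_SECS, and slot_times are placeholder names,
not my actual code):

#include <uhd/types/time_spec.hpp>
#include <vector>

// Hypothetical slot parameters for illustration
const double SLOT_SECS   = 10e-6;  // rx/tx portion of a slot
const double SWITCH_SECS = 2.5e-6; // switching portion of a slot

// Absolute start times for each packet in a batch that arrived at
// batch_arrival, offset by a fixed latency to stay ahead of the hardware
std::vector<uhd::time_spec_t> slot_times(
    uhd::time_spec_t batch_arrival, size_t batch_size, double latency_secs)
{
    std::vector<uhd::time_spec_t> times(batch_size);
    for (size_t i = 0; i < batch_size; i++) {
        times[i] = batch_arrival
                 + uhd::time_spec_t(latency_secs + i * (SLOT_SECS + SWITCH_SECS));
    }
    return times;
}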


On the transmit side, I've shown that, given an 80% TX/switching duty cycle
and a batch of ten packets, I can schedule transmissions continuously without
problems at a granularity on the order of tens of microseconds (given on the
order of milliseconds of initial latency between when the first batch arrives
and when it is transmitted). I want to replicate this level of granularity on
the receive side as well, but so far I have been unsuccessful. I have, however,
set the USRP to continuous streaming mode and verified that my host can receive
continuously in packets the same size as my fine-grained schedule and keep up
without error.
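
For context, the transmit-side scheduling follows this pattern (simplified
sketch, not my exact code; the 0.1 s timeout is arbitrary):

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <vector>

// Send one packet as a timed burst: has_time_spec on the first (and here
// only) send of the burst makes the X310 hold the samples until time_spec
void send_timed_packet(uhd::tx_streamer::sptr tx_stream,
    const std::vector<std::complex<int16_t>>& buff, uhd::time_spec_t when)
{
    uhd::tx_metadata_t md;
    md.start_of_burst = true;
    md.end_of_burst   = true; // one packet per burst in this sketch
    md.has_time_spec  = true;
    md.time_spec      = when;
    tx_stream->send(&buff.front(), buff.size(), md, 0.1);
}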


I've tried two main approaches: scheduling a bunch of packets in advance and
calling recv() and issue_stream_cmd() alternately in a single-threaded loop,
and running separate loops for issue_stream_cmd() and recv() in two threads.
With the multithreaded approach, I've gotten to the point where the first few
thousand packets are scheduled properly, but after that it falls behind. The
recv() thread appears to keep up without issue, but I get late-command errors
back, implying the issue_stream_cmd() thread is the problem. Example code is
listed below, along with a sketch of the command thread. Any advice on how to
improve my issue_stream_cmd() latency would be greatly appreciated!


Thanks,

Richard


// Two threads: one sends a single command the size of a full batch, then
// pauses in a loop; the other receives continuously (NUM_SAMPS_AND_DONE mode;
// an atomic variable in each thread accounts for the max number of in-flight
// commands). See also: rxtest_cmds
double rxtest_datas(USRPTypes &usrpType)
{
    uhd::set_thread_priority_safe(THREAD_PRIORITY);

    // Atomic variable to synchronize start of recv() calls with start of
    // issue_stream_cmd() calls
    global_start = false;
    // Ensure we don't exceed the cmd FIFO depth (defined in USRP_constants)
    global_cmd_num = 0;

    // Struct with all the variables used in various rx tests
    rxtest_vars rx(usrpType);

    // Each recv() collects a full batch, so loop over batches, not packets
    rx.packets_to_recv = rx.packets_to_recv / rx.batch_size;

    // Launch thread to schedule USRP receives
    std::thread cmd_thread(rxtest_cmds, rx);

    // Buffer to receive into (include full batch in one buffer)
    std::vector<std::complex<int16_t>> buff(RX_BATCH_SAMPLES);
    const size_t expected_samps = buff.size();

    // timeout used by recv function to collect samples
    double timeout = rx.initial_timeout;

    // Don't start until the other thread has posted its first scheduled receive
    while (global_start == false)
    {
        // Spin while we wait to start.
    }

    for (int i = 0; i < rx.packets_to_recv; i++)
    {
        // Use shorter timeout if this isn't the first packet
        if (i > 0)
        {
            timeout = rx.batch_latency;
        }
        // Collect received samples from the USRP
        rx.sample_count =
            rx.streamer->recv(&buff.front(), buff.size(), rx.md, timeout);

        // Post that we took a command off the USRP
        if (global_cmd_num >= 1)
        {
            global_cmd_num--;
        }

        // Did we drop any samples on this slot?
        if (rx.sample_count != expected_samps)
        {
            rx.slot_drops++;
        }

        // Account for the errors of this slot
        if (rx.md.error_code != uhd::rx_metadata_t::ERROR_CODE_NONE)
        {
            rx.errors++;
            rx.estruct.incrementRxErrorCount(rx.md.error_code);
        }

#if COLLECT_MD
        rx.mdvec[i] = rx.md;
#endif
    } // end data collection

    printMD(rx, true);
    cmd_thread.join();
    return rx.batch_latency;
}
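
For reference, here is a simplified sketch of the companion command thread
(not my exact rxtest_cmds; MAX_CMD_FIFO_DEPTH, rx.usrp, and rx.initial_latency
stand in for my actual constants and struct members):

// One timed NUM_SAMPS_AND_DONE command per batch, throttled by
// global_cmd_num so the command FIFO depth is never exceeded
void rxtest_cmds(rxtest_vars rx)
{
    uhd::set_thread_priority_safe(THREAD_PRIORITY);

    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_NUM_SAMPS_AND_DONE);
    cmd.num_samps  = RX_BATCH_SAMPLES;
    cmd.stream_now = false;

    // Schedule the first batch a fixed latency into the future
    uhd::time_spec_t next =
        rx.usrp->get_time_now() + uhd::time_spec_t(rx.initial_latency);

    for (int i = 0; i < rx.packets_to_recv; i++)
    {
        // Spin while the command FIFO is full
        while (global_cmd_num >= MAX_CMD_FIFO_DEPTH)
        {
            // Wait for the recv thread to drain a command.
        }
        cmd.time_spec = next;
        rx.streamer->issue_stream_cmd(cmd);
        global_cmd_num++;
        global_start = true; // release the recv thread after the first command
        next += uhd::time_spec_t(rx.batch_latency);
    }
}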
