[USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-06 Thread Steven Knudsen via USRP-users
Hi All,

I am building a TDMA system based on the B200mini. TX is out the TRX port and 
RX is on RX2.

I schedule both TX and RX using stream commands. Tested separately, they work 
great. However, when I have them running together (separate threads for send 
and recv calls), the transmit waveform is attenuated about 70%. Since I’ve not 
yet got the demodulation set up, I am not sure if it’s just attenuation or 
maybe something else. The transmitted packet length looks to be okay.

I thought the transmit and receive windows might be too close in time, so I 
separated them by a millisecond; no change.

I suspect I am seeing some kind of interference between the TX and RX 
streamers. Maybe I need to issue some command at the end of RX? 

Any suggestions on where I’m going wrong?

Thanks,

steven


Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

From a certain point onward there is no turning back. That is the point that 
must be reached. - Franz Kafka

___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-06 Thread Steven Knudsen via USRP-users
Thanks, Marcus.

I was running a slightly older version, so I updated to release_003_010_002_000. 

Now it does not get as far at all; it seg faults :-(

The backtrace looks like:

Thread 18 "TxRxFlood" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe8ff9700 (LWP 7602)]
0x7731fb96 in 
__convert_sc16_item32_le_1_fc32_1_PRIORITY_SIMD::operator()(uhd::ref_vector const&, uhd::ref_vector const&, unsigned long) () from 
/usr/local/lib/libuhd.so.003
(gdb) bt
#0  0x7731fb96 in 
__convert_sc16_item32_le_1_fc32_1_PRIORITY_SIMD::operator()(uhd::ref_vector const&, uhd::ref_vector const&, unsigned long) () from 
/usr/local/lib/libuhd.so.003
#1  0x7768a361 in 
uhd::transport::sph::recv_packet_handler::convert_to_out_buff(unsigned long) ()
   from /usr/local/lib/libuhd.so.003
#2  0x77695d1f in 
uhd::transport::sph::recv_packet_streamer::recv(uhd::ref_vector const&, 
unsigned long, uhd::rx_metadata_t&, double, bool) () from 
/usr/local/lib/libuhd.so.003
#3  0x004841e0 in uhd::jitc::liquidDspPhy::receiveLoop (this=0x754650)
at /home/knud/Development/uhd-jitc/lib/phy_layer/liquidDspPhy.cpp:273
#4  0x0048a543 in boost::_mfi::mf0::operator() (this=0x7790c8, p=0x754650)
at /usr/include/boost/bind/mem_fn_template.hpp:49
#5  0x0048a4c4 in 
boost::_bi::list1 
>::operator(), 
boost::_bi::list0> (this=0x7790d8, f=..., a=...) at 
/usr/include/boost/bind/bind.hpp:253
#6  0x0048a45c in boost::_bi::bind_t, 
boost::_bi::list1 > >::operator() 
(this=0x7790c8) at /usr/include/boost/bind/bind_template.hpp:20
#7  0x0048a40e in boost::detail::thread_data, 
boost::_bi::list1 > > >::run 
(this=0x778f10)
at /usr/include/boost/thread/detail/thread.hpp:116
#8  0x767305d5 in ?? () from 
/usr/lib/x86_64-linux-gnu/libboost_thread.so.1.58.0
#9  0x765096ba in start_thread (arg=0x7fffe8ff9700) at 
pthread_create.c:333
#10 0x753b93dd in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:109


I looked at the source for __convert_sc16_item32_le_… and can’t easily see the 
issue. It obviously originates at my call to recv (frame #2), which is invoked 
after the first scheduled receive. I assume that recv starts getting samples at 
the scheduled time and a wire-format sample conversion goes awry.

I tried a different program that does only scheduled receives, and it still 
works properly, so it must be something in the new program rather than in UHD.

I’ll keep looking…


thanks,

steven


Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

To believe in progress is not to believe that progress has already happened. 
That would not be belief. - Franz Kafka
Post hoc ergo propter hoc.

> On Sep 6, 2017, at 18:37, Marcus D. Leech via USRP-users 
>  wrote:
> 
> On 09/06/2017 08:15 PM, Steven Knudsen via USRP-users wrote:
>> [...]
> Are you running the latest UHD?
> 
> I vaguely recall some switching issues on this platform with an early code 
> release.  But I could be mis-remembering.
> 
> 



Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-08 Thread Steven Knudsen via USRP-users
>> [...]



Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-08 Thread Steven Knudsen via USRP-users
Let me see if I understand you correctly. Are you saying that turning on the 
first reception takes significant time?

In my case, the receive loop is running before the scheduling starts; it just 
times out and spins around again. So I am not sure that the receive startup is 
the problem.

But if it were, could I not simply do a “dummy” receive and then start 
scheduling?

I can live with the 350 us pre-receive dead time.

However, without digging into the FPGA code, I would have thought that a 
command queue is checked periodically to see whether the next command's time 
has arrived, and that the waiting command then executes right away. But if a 
set number of events needed to execute for each reception, that might explain 
things. E.g., if things were disabled after NUM_SAMPS_AND_DONE and needed to 
be re-enabled the next time a receive is invoked, that could take time.

At this point I am just speculating…

steven

PS You work long hours!


Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

The decisive moment in human development is perpetual. That is why the 
revolutionary movements of the spirit, which declare everything before them 
null and void, are in the right, for nothing has yet happened. - Franz Kafka

> On Sep 8, 2017, at 21:30, Marcus D. Leech  wrote:
> 
> On 09/08/2017 10:54 PM, Steven Knudsen wrote:
>> Hi Marcus,
>> 
>> I did rebuild LiquidDSP, though it has no dependencies on the UHD. No change 
>> in behaviour.
>> 
>> Rather than speculate further with you (and waste your time), I pressed 
>> ahead with various avenues of investigation and appear to have had some 
>> success.
>> 
>> 
>> What seems to be the problem is that when you schedule multiple consecutive 
>> receptions using a stream command, you must leave some minimum time between 
>> the end of one reception and the start of the next.
>> 
>> Mine is a TDMA radio design with (for this experiment) 4 consecutive slots 
>> followed by an extended guard time. Each radio in the system is assigned one 
>> slot for transmission, and can use 3 slots for reception.
>> 
>> What I do now is the following
>> 
>> Every 10 ms:
>> 1) In one thread, schedule 3 receptions 10 ms into the future, each separated 
>> by 1 ms and acquiring samples over N microseconds (us), where N <= 1000 us. 
>> That is, batch-schedule the reception of 3 slots.
>> 2) A second thread runs forever and invokes recv with a timeout of 100 ms.
>> 
>> As you know, the recv will block until it times out or a scheduled reception 
>> starts.
>> 
>> If N < 650 us, it all works as expected. However, if N > 650 us, I start to 
>> miss slots, and if N is too large a seg fault occurs (as per below, in 
>> __convert_sc16_item32_le_1_fc32_1_PRIORITY_SIMD).
>> 
>> My tentative conclusion is that there is some minimum time needed between 
>> scheduled receive commands even if they have been scheduled as a batch (all 
>> at once) in the “distant” past.
>> 
>> What do you think? Is there a minimum time in the FW required to set up for 
>> a scheduled reception?
>> 
>> thanks
>> 
>> steven
>> 
>> 
> I know that tuning on the B2xx series is quite time-consuming, and there will 
> necessarily be *some* setup time for scheduled receives.  But I can't 
> definitively comment on the magnitude of such setup times.
> 
> 
> 



Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-08 Thread Steven Knudsen via USRP-users
Okay, I can accept that.

What it means is that, since my customer has not settled on a USRP platform, I 
should put in some checks that enforce timing constraints. For example, 
checking that the number of samples for the max expected packet length, 
divided by the sample rate, does not exceed a max allowable packet time. Of 
course, I have some such checks already, but I guess not enough… Too bad…

thanks,

steven


Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

The decisive moment in human development is perpetual. That is why the 
revolutionary movements of the spirit, which declare everything before them 
null and void, are in the right, for nothing has yet happened. - Franz Kafka

> On Sep 8, 2017, at 21:47, Marcus D. Leech  wrote:
> 
> On 09/08/2017 11:38 PM, Steven Knudsen wrote:
>> [...]
>> 
> In general, starting the receive process takes a finite amount of time, 
> because various bits of hardware need to be turned (back) on.  Tuning is the 
> most notoriously slow part in the B2xx series--the chip wasn't designed for 
> frequency-hopping, for example.  But other bits and pieces need to be 
> initialized.  I'm just not sure how many such bits and pieces there are, or 
> what the latency is.
> 
> 



Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-08 Thread Steven Knudsen via USRP-users
I am not sure that is an option. The TDMA scheme is such that the ratio of 
TX/RX time to inactive time is low, and during the “inactive” time other 
things are going on. 

I suppose I could just turn the RX gain to zero (and it will be anyway).

But what you are really suggesting is polling vs. “interrupt”-driven 
programming. Polling is generally not a way I like to go, especially when a) I 
can avoid it and b) I may be asked to host this on lesser hardware…

Put another way, if polling were okay, why have the ability to schedule 
receives at all?

Thanks for all the discussion and help!

steven



Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

From a certain point onward there is no turning back. That is the point that 
must be reached. - Franz Kafka

> On Sep 8, 2017, at 22:01, Marcus D. Leech  wrote:
> 
> On 09/08/2017 11:53 PM, Steven Knudsen wrote:
>> [...]
> I'll point out that one could simply just receive all the time, and only pay 
> attention to the RX samples that are within the RX timeslots
> 
> 
>> [...]
> 



Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-15 Thread Steven Knudsen via USRP-users
Hi again, Marcus (and all).

I have continued trying to sort out this problem and may be able to provide 
some more insight, or at least report some failed approaches.

Another reader suggested that using STREAM_MODE_NUM_SAMPS_AND_MORE may help if 
there is a setup or latency issue. So, I changed the reception scheduling to 
issue two stream commands, the first with the number of samples to receive, 
STREAM_MODE_NUM_SAMPS_AND_MORE, and a time spec, and the second stream command 
with STREAM_MODE_NUM_SAMPS_AND_DONE and the number of samples to receive = 1. 
In a receive-only test that approach worked fine.

However, when I put it back into the TDMA radio where transmissions are 
scheduled in slot 0 followed by scheduled receptions in slots 1, 2, and 3, I 
see the same problem. The transmitted signal is distorted or attenuated. 
Comment out the recv function, and all is well again.

I wondered about some kind of buffer overlap, so I tried really short 
receptions. For example, a full slot represents about 15000 samples. If I set 
the stream command's number of samples to receive to 500, everything works 
properly. Samples are received as and when expected, and the transmitted 
waveform (as seen on a scope) looks fine. Up the number of samples to receive 
to 5000, and it's back to the ugliness.

So, either there is some kind of “collision” between the tx and rx streamers 
because of the rx number of samples to receive, or there is a shortage of time 
between scheduled receptions that somehow affects the transmit waveform.

I also looked at the UHD example txrx_loopback_to_file.cpp for any clues, but 
nothing popped out at me. It is, of course, a little different in that the 
transmit worker operates in its own thread while the recv operates in a 
separate thread and neither is scheduled (well, the receive is delayed a bit to 
allow “settling”). But what I take away from it is that transmit and receive 
can run simultaneously without interfering with each other. Indeed, when I look 
at the transmit waveform on a scope it stays the same before and after the 
receive starts.

I don’t know whether any of the above will twig anything for you, but I do 
appreciate all your help so far.

thanks,

steven

Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

From a certain point onward there is no turning back. That is the point that 
must be reached. - Franz Kafka

> On Sep 8, 2017, at 22:22, Marcus D. Leech  wrote:
> 
> On 09/09/2017 12:16 AM, Steven Knudsen wrote:
>> [...]
> Oh, I agree that it's a useful feature.  I'm just not certain of the 
> hardware's ability to maintain sub-ms-scale scheduling abilities on the B210.
> 
> 

Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-16 Thread Steven Knudsen via USRP-users
Hi yet again…

I thought to see what would happen if I modified txrx_loopback_to_file.cpp to 
output scheduled transmit bursts, so I modified the transmit worker function 
to schedule bursts. Up to the settling time (i.e., when the recv starts 
grabbing samples), the output on the scope looks as expected: a series of 
10 ms bursts from the wavetable with 10 ms of nothing in between. Once recv 
starts, the bursts all go to zero.

The command-line invocation is:

./tests/txrx_loopback_to_file --tx-rate=1e6 --rx-rate=1e6 --tx-freq=70e6 
--rx-freq=70e6 --settling=20 --tx-gain=70 --wave-type=SINE --wave-freq=1 
--rx-gain=40 --nsamps=0 --type=double --spb=1

The source is below and attached. No changes were made to 
txrx_loopback_to_file.cpp other than those attached. Running on a B200mini. I 
suppose the next thing to try is an older version of the UHD.

/***
 * transmit_worker function
 * A function to be used as a boost::thread_group thread for transmitting
 **/
void transmit_worker(
    std::vector<std::complex<float>> buff,
    wave_table_class wave_table,
    uhd::tx_streamer::sptr tx_streamer,
    uhd::tx_metadata_t metadata,
    size_t step,
    size_t index,
    int num_channels
){
    std::vector<std::complex<float> *> buffs(num_channels, &buff.front());

    struct timespec deadline, time_remaining;
    deadline.tv_sec = 0;
    deadline.tv_nsec = 1000; // 1 us (a 10 ms interval would be 10000000 ns)

    double packetTime = 2.0; // seconds into the future

    int sleep_return = clock_nanosleep(CLOCK_MONOTONIC, 0, &deadline,
        &time_remaining);

    printf("buff size = %zu\n", buff.size());

    // send data until the signal handler gets called
    while (not stop_signal_called) {
        printf("sending packet at %10.8g\n", packetTime);
        metadata.start_of_burst = true;
        metadata.end_of_burst = false;
        metadata.has_time_spec = true;
        metadata.time_spec = uhd::time_spec_t(packetTime);

        // fill the buffer with the waveform
        for (size_t n = 0; n < buff.size(); n++) {
            buff[n] = wave_table(index += step);
        }

        // send the entire contents of the buffer
        tx_streamer->send(buffs, buff.size(), metadata);
        sleep_return = clock_nanosleep(CLOCK_MONOTONIC, 0, &deadline,
            &time_remaining);

        // send a length-zero packet to end the burst
        metadata.start_of_burst = false;
        metadata.end_of_burst = true;
        metadata.has_time_spec = false;
        tx_streamer->send("", 0, metadata);

        packetTime += 0.02;
    }

    // send a mini EOB packet
    metadata.end_of_burst = true;
    tx_streamer->send("", 0, metadata);
}

./tests/txrx_loopback_to_file --tx-rate=1e6 --rx-rate=1e6 --tx-freq=70e6 
--rx-freq=70e6 --settling=5 --tx-gain=70 --wave-type=SINE --wave-freq=1 
--rx-gain=40 --nsamps=0 --type=double --spb=1

Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

As firmly as the hand holds the stone. But it holds it firmly only to fling it 
that much farther. Yet even into that distance the way leads. - Franz Kafka

On Sep 15, 2017, at 16:16, Steven Knudsen 

Re: [USRP-users] B200mini scheduled RX & TX; Tx attenuated

2017-09-16 Thread Steven Knudsen via USRP-users
No change using UHD 3.9.7 :-(


On September 16, 2017 at 07:19:05, Steven Knudsen (ee.k...@gmail.com) wrote:


Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

As firmly as the hand holds the stone. But it holds it firmly only to fling it 
that much farther. Yet even into that distance the way leads. - Franz Kafka

On Sep 15, 2017, at 16:16, Steven Knudsen  wrote:

[...]

Re: [USRP-users] polyphase clock sync eats 100% cpu once it gets samples

2017-10-04 Thread Steven Knudsen via USRP-users
Hey Michael,

throwing in my 2 cents, I found the same last year and implemented separate 
logic to monitor RSSI and set thresholds. In my case I was receiving in 
consecutive slots from different radios, so the RSSI varied a lot. 

You also have to be careful with any loop tracking algorithms for the same 
reason… not always receiving from the same transmitter can mess things up.

steven


On October 4, 2017 at 08:21:29, Michael Wentz via USRP-users 
(usrp-users@lists.ettus.com) wrote:

Hi,

I ran into a similar problem several months ago - what I found was that the 
correlation estimator produced a *huge* number of false positives (and a tag 
for each of them) which caused the clock recovery block to be super 
overwhelmed. If I recall correctly, there were also some cases that these tags 
would send the clock recovery block into an infinite loop. We rolled back the 
correlation estimator to the previous version, which uses absolute thresholds, 
as a quick fix.

-Michael

On Tue, Oct 3, 2017 at 9:23 AM, Marcus Müller via USRP-users 
 wrote:
That's very interesting, indeed! If I had to infer (sorry, not right now on a 
computer where I can test) from the thread name "header_payload2", I'd say that 
for some reason that I don't know, the header/payload demuxer in packet_rx 
"spins" on something.

If you want to, throw some more runtime analysis at this: 

sudo apt-get install linux-tools
sudo sysctl -w kernel.perf_event_paranoid=-1
perf record -ag python /path/to/uhd_packet_rx.py

[let it run for a while, e.g. 30s, end it]

perf report

That should give you a quick insight into which function/code line your 
processors were stuck in the most. Maybe that brings us one step forward.

Best regards,
Marcus


On 03.10.2017 14:41, Vladimir Rytikov wrote:
Marcus,

cpu: Intel Core i7-7820HQ (Quad Core 2.90GHz, 3.90GHz Turbo, 8MB)
chipset: Intel Mobile CM238
RAM: 32GB (2x16GB) 2400MHz DDR4
OS: Ubuntu 16.04 LTS
almost no other software running - only GNU Radio.

htop shows digital/packet/uhd_packet_rx.py takes ~192 % CPU.
header_payload2 - ~100%
pfb_clock_sync2 - ~90%

and the flow graph is visually frozen.
I ran volk_profile - it saved some files in the .volk directory.
I restarted the test after that: same result, htop shows the same numbers.

the RF signal has the correct shape in the frequency domain - I am splitting 
the signal to a spectrum analyzer, so I am not overdriving the transmitter; the 
signal stays inside the coax cable. 433 MHz, as in the example. Visually, 
looking at the Time tab, the input signal is within -1 to 1. The receiver dies 
at the moment I press the 'On' check box - it ungates the receiver chain and 
the polyphase clock sync block. The transmitter sends bursts every 2 seconds.
The only modification compared to the vanilla example: I changed the Clock/Time 
sources to Default instead of O/B GPSDO.

-- Vladimir

On Tue, Oct 3, 2017 at 4:57 AM, Marcus Müller via USRP-users 
 wrote:
Hi Vladimir,

synchronization is usually among the most CPU-intense things a receiver does 
(only, if at all, contested by channel decoding for complex codes). So, the 
100% CPU utilization doesn't sound totally unreasonable, depending on your system.

That being said, I don't want to rule out bugs, but for the time being, I'd 
declare this issue as "unclear, probably insufficient compute power".

Can you tell us a bit about your computer, in terms of CPU model, motherboard 
chipset, RAM configuration, OS? If you install and run "htop"¹, you'll see 
which block does how much without much complication, and maybe also significant 
non-GNU Radio CPU usage (for example, my mail client and my browser idling use 
*serious* amounts of CPU).

Another thing worth trying is to close all software that might be using CPU 
(check with htop!) and then run "volk_profile"; this should test a lot of 
hand-written implementations for certain math operations, which might 
significantly speed up the polyphase clock sync.

Best regards,

Marcus



¹: I'm assuming you're using Linux; after starting htop, press F2 for setup, 
go into the "Display Options", enable "Show custom thread names", press Esc


On 03.10.2017 12:54, Vladimir Rytikov via USRP-users wrote:
Hi,

I am trying to run an example from GNU Radio - 
examples/digital/packet/uhd_packet_rx and uhd_packet_tx with real USRP radios 
connected via an attenuator and a coax cable.

When I enable the receiver by clicking the 'On' check box, the whole RX flow 
graph freezes.
I think I manually adjusted transmit power and receive gain to be within 
reasonable ranges.

By disabling different blocks one at a time, I found that the polyphase clock 
sync inside the packet_rx block kills the flow graph. It seems the signal is 
gated by the Correlation Estimator, and once it gets the correct sync word, the 
signal goes to the Polyphase Clock Sync and the CPU dies - it is 100% loaded 
all the time.

If I run only a loopback simulation, the whole flow graph seems to work fine. I 
wonder if noise or a poor signal destroys the Polyphase Clock

Re: [USRP-users] polyphase clock sync eats 100% cpu once it gets samples

2017-10-08 Thread Steven Knudsen via USRP-users
FWIW, I found the dynamic threshold set in corr_est to be far too low. I ran 
some experiments using my custom sync sequence (a variant of m-sequences that 
gave me a couple of dB extra) and determined that, except in degenerate cases, 
the correlation peak would be 8 times the average signal for SNRs down to 
about 3 dB. So, I believe I set the threshold to 4 times the average signal. 
This worked for me because in the TDMA system I had a good idea where the sync 
sequence might be. It may also be okay for a non-TDMA system where you just look 
at a running history of samples, but I did not consider that scenario.
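To illustrate that fixed-multiplier rule (a sketch, not the actual corr_est code): fire when the correlation magnitude exceeds 4x a running average of recent correlator output. The window length below is an assumption.

```python
# Declare a correlation peak when |corr| exceeds `multiplier` times the
# running average of the last `window` correlator outputs. The window
# length and the test data are illustrative.
from collections import deque

def detect_peaks(corr_mag, multiplier=4.0, window=64):
    """Return indices where the correlation magnitude crosses the threshold."""
    history = deque(maxlen=window)
    hits = []
    for i, m in enumerate(corr_mag):
        # Only start detecting once the average has a full window behind it.
        if len(history) == window and m > multiplier * (sum(history) / window):
            hits.append(i)
        history.append(m)
    return hits
```

In a TDMA system the detector would additionally be armed only around the expected sync position, which is what made the fixed multiplier safe here.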

good luck,

steven


On October 8, 2017 at 15:19:07, Vladimir Rytikov (kk6...@gmail.com) wrote:

thanks everyone for the help. I looked closer at the correlation estimator 
output and that is indeed the case: it generates a lot of (false positive) tags 
and overwhelms the rest of the flow graph. I looked at the loopback flow graph 
- it has a very different sample pattern than the radio flow graph. It doesn't 
have any noise in between packets.
In any case I will need to look closer at the issue and debug it. I will move 
the discussion to the GNU Radio mailing list once I have more details.

-- Vladimir


[USRP-users] X310 underflow in transmit-only configuration

2018-02-08 Thread Steven Knudsen via USRP-users
Hi,

I have been scratching my head for a while on this one…

I have made a TDMA radio with a simple 4-slot cycle and a relatively low duty 
cycle (the slots occupy 40% of the cycle and the USRP is idle for the 
remaining 60%).

A radio transmits in its “owned” slot and receives in all the others (3 of 
them). The transmit is timed, as are the receptions in each slot. Transmit is 
scheduled 10 ms in advance (at the start of a cycle) and the receives are 
scheduled at least 6 ms in advance (at the end of the last receive slot).
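The per-cycle command times are simple arithmetic. A sketch under assumed numbers (100 ms cycle, four 10 ms slots, slot 0 owned); only the 10 ms / 6 ms lead margins come from the description above, nothing here is the actual program:

```python
# Compute the scheduled event times for one TDMA cycle. The slot layout
# is an assumption for illustration; only the 10 ms / 6 ms lead margins
# come from the radio described in this thread.

CYCLE_S = 0.100      # cycle length, seconds
SLOT_S = 0.010       # slot length, seconds
OWNED_SLOT = 0       # the slot this radio transmits in
TX_LEAD_S = 0.010    # issue the TX command at least this early
RX_LEAD_S = 0.006    # issue each RX stream command at least this early

def cycle_schedule(cycle_start):
    """Return (tx_time, [rx_times]) for the cycle beginning at cycle_start."""
    tx_time = cycle_start + OWNED_SLOT * SLOT_S
    rx_times = [cycle_start + s * SLOT_S for s in range(4) if s != OWNED_SLOT]
    return tx_time, rx_times

def issue_deadlines(cycle_start):
    """Latest host times at which the commands must be handed to the USRP."""
    tx_time, rx_times = cycle_schedule(cycle_start)
    return tx_time - TX_LEAD_S, [t - RX_LEAD_S for t in rx_times]
```

In the real radio these times would become the time_spec of the TX metadata and of the timed RX stream commands.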

When I test with a B200mini connected to an Octoclock G for 1 PPS reference, it 
runs flawlessly for hours (5 is the longest). When, on the exact same Linux 
host I run with an X310 connected to the same Octoclock G for 1 PPS and 10 MHz, 
it stops working after not too long with a slew of ’U’s and ’L’s.

Trying to narrow things down, I created a version of the radio that only 
transmits. Reception is completely disabled and I confirm that no receive 
commands are ever scheduled and rxStreamer->recv() is never called.

So, imagine my surprise when after a fairly long time of transmitting 
successfully (evidenced by using an oscilloscope to view packets), the 
transmit-only version fails!?! Below is a copy of the log showing the first 
evidence of failure, namely ’L’s indicating transmits were too late. But, what 
is the ‘U’ doing there? As I mentioned, no reception functionality is in the 
program, so what is going on?

Anyone else see this kind of thing? I never see it with the B200mini, but see 
it consistently with the X310.

thanks very much for your time and consideration,

steven

ULLsendFrame() MPDU #719935  mpdu size = 488 bytes at 1518123481s 89 us
ULLLsendFrame() MPDU #719936  mpdu size = 488 bytes at 1518123481s 90 us
UsendFrame() MPDU #719937  mpdu size = 488 bytes at 1518123481s 91 us
ULsendFrame() MPDU #719938  mpdu size = 488 bytes at 1518123481s 92 us
ULLsendFrame() MPDU #719939  mpdu size = 488 bytes at 1518123481s 93 us
ULLLsendFrame() MPDU #719940  mpdu size = 488 bytes at 1518123481s 94 us
UsendFrame() MPDU #719941  mpdu size = 488 bytes at 1518123481s 95 us
ULsendFrame() MPDU #719942  mpdu size = 488 bytes at 1518123481s 96 us
ULLsendFrame() MPDU #719943  mpdu size = 488 bytes at 1518123481s 97 us
ULLLsendFrame() MPDU #719944  mpdu size = 488 bytes at 1518123481s 98 us
UsendFrame() MPDU #719945  mpdu size = 488 bytes at 1518123481s 99 us
ULsendFrame() MPDU #719946  mpdu size = 488 bytes at 1518123482s 0 us
ULsendFrame() MPDU #719947  mpdu size = 488 bytes at 1518123482s 1 us
ULLsendFrame() MPDU #719948  mpdu size = 488 bytes at 1518123482s 2 us
ULLLsendFrame() MPDU #719949  mpdu size = 488 bytes at 1518123482s 3 us
UsendFrame() MPDU #719950  mpdu size = 488 bytes at 1518123482s 4 us
ULsendFrame() MPDU #719951  mpdu size = 488 bytes at 1518123482s 5 us
ULLsendFrame() MPDU #719952  mpdu size = 488 bytes at 1518123482s 6 us
ULLLsendFrame() MPDU #719953  mpdu size = 488 bytes at 1518123482s 7 us


Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

All the wires are cut, my friends
Live beyond the severed ends.  Louis MacNeice
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] X310 underflow in transmit-only configuration

2018-02-08 Thread Steven Knudsen via USRP-users
Thanks, Ian.

I appreciate the explanation.

Certainly I’ve understood the idea of late as it was inevitable that while 
developing and testing my TDMA MAC I made mistakes and scheduled packets for 
transmission in the past :0)

The underflow is a little more mysterious. At the request of Nate, a sensible 
person, I’ve created a program to illustrate the problem; see attached.

Running it with the X310 connected to an Octoclock G for 1 PPS and 10 MHz, I 
have it transmit a 2 sample packet every 10 ms at 20 MSps. It manages to do 
that for about 1727 seconds before underflow happens and then it basically 
blocks.

Running it with a B200mini and all the same conditions/parameters, it has been 
going now for more than 8800 seconds.

It is purely speculative on my part, but I have to wonder if somehow there is a 
time problem, say where the clock discipline algorithm of the X310 induces a 
jump in time. However, I made another test program where I get host and USRP 
time once every second and compare. Over the course of an hour for the X310 the 
time difference was pretty much constant, about the difference in time it takes 
to execute the get_time_now() command for the X310. So, that kind of suggests 
the clock is pretty stable, or maybe I didn’t wait long enough...
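The once-per-second comparison can be reduced to a small helper that flags any step change in the host-vs-USRP offset; the function name and the 1 ms tolerance below are invented for illustration.

```python
# Hypothetical helper for the host-vs-USRP time comparison described
# above: sample the offset once per second, then flag any step change
# that could explain a sudden burst of late packets.

def offset_jumps(offsets, tolerance=0.001):
    """Return indices where consecutive clock offsets differ by more
    than `tolerance` seconds."""
    return [i for i in range(1, len(offsets))
            if abs(offsets[i] - offsets[i - 1]) > tolerance]
```

A run of an hour at one sample per second gives 3600 offsets; an empty result is consistent with the "pretty stable" observation above.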

I am kind of confident that the B200mini will keep going because, as mentioned 
below, it ran with the full Tx-Rx version of the program for more than 5 hours 
before I killed the program.


Maybe you will be able to suggest something to help me get to the bottom of 
this :0)

thanks!

steven

On February 8, 2018 at 23:57:57, Ian Buckley (i...@ionconcepts.com) wrote:

Steven
Underflow is a TX error phenomenon, not an RX one. It is signaled when the local 
buffering of TX data in the USRP becomes empty while the USRP tries to 
continue to transmit.
Late, on the other hand, is signaled when a TX command contains a time that is 
earlier than the USRP’s local clock when it is executed, i.e., the requested 
time has already passed.
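The two conditions can be paraphrased as a check (the names below are invented for illustration and are not the UHD API):

```python
# 'late': the command's scheduled time had already passed on the device
# clock when the command was executed.
# 'underflow': the device drained its TX buffer mid-burst before the
# host supplied more samples.

def classify_tx_error(scheduled_time, device_time, buffered_samples, burst_active):
    if scheduled_time < device_time:
        return "late"
    if burst_active and buffered_samples == 0:
        return "underflow"
    return "ok"
```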

Hope this is helpful,
-Ian


Re: [USRP-users] Bandpass filters

2018-02-09 Thread Steven Knudsen via USRP-users
Hi Mathieu,

You can certainly use UHD to get the signal of interest, but what you are 
asking is really a question of signal processing, independent of UHD and 
whichever USRP you may be using.

I suggest you focus on the signal-processing problem of separating two 
closely-spaced signals: maybe do some pencil-and-paper work, then simulate 
using MATLAB, Octave, SciLab, or even plain old C/C++. Whatever you come up 
with you can then plug into a UHD-based program that receives samples. 
Personally, I’d just copy one of the receive examples and start modifying it to 
accept your algorithms.
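As a minimal, UHD-independent sketch of that simulation step (all rates, frequencies, and the filter choice are made up): shift each frequency of interest to baseband with a complex mixer, then low-pass filter, here with a crude moving average standing in for a real bandpass design.

```python
# Separate two closely-spaced tones by mixing each to DC and averaging.
# FS, the tone spacing, and the 50-tap filter are illustrative only.
import cmath, math

FS = 1000.0  # sample rate, Hz

def channelize(samples, f_center, taps=50):
    """Mix f_center down to DC and moving-average filter the result."""
    mixed = [s * cmath.exp(-2j * math.pi * f_center * n / FS)
             for n, s in enumerate(samples)]
    out, acc = [], 0j
    for n, m in enumerate(mixed):
        acc += m
        if n >= taps:
            acc -= mixed[n - taps]   # slide the averaging window
        out.append(acc / taps)
    return out

# Example: with tones at 50 Hz and 120 Hz, channelize(x, 50.0) passes
# the 50 Hz tone at full magnitude while the 50-tap average suppresses
# the 70 Hz offset by roughly 20 dB.
```

Two instances of channelize, one per frequency of interest, give the two "bandpass filter" outputs asked about; a proper FIR design would replace the moving average in practice.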

If you have not already found it, a good place to start is this book

As a person new to this list, please consider that its focus is on things 
directly related to USRPs and UHD. Your question is probably better posed at 
Signal Processing Stack Exchange

good luck,

steven



Steven Knudsen, Ph.D., P.Eng.
www.techconficio.ca
www.linkedin.com/in/knudstevenknudsen

All the wires are cut, my friends
Live beyond the severed ends.  Louis MacNeice

On February 9, 2018 at 08:57:41, Mathieu Petitjean via USRP-users 
(usrp-users@lists.ettus.com) wrote:

Hi everyone,  

As a complete newbie with SDRs, I want to use a USRP to listen to two 
different but close frequencies and distinguish them (the frequencies 
are not fixed yet). I would like to implement two bandpass filters and 
look at the output of both filters, but I am not sure how to proceed. Is 
it feasible using the UHD C++ libraries?

Thanks,  

Mathieu  





Re: [USRP-users] X310 underflow in transmit-only configuration

2018-02-13 Thread Steven Knudsen via USRP-users



tx_timed_loop.cpp
Description: Binary data

Re: [USRP-users] Spikes at beginning and end of transmission

2018-02-25 Thread Steven Knudsen via USRP-users
Hi,

This is a known problem with the B20xmini. Have a look at this thread and
then follow it up.

You are right, you are seeing caps charge and discharge, basically.

Since I made the changes myself on my units, I am not sure where things
were left with Ettus and their policy for fixing the problem. But I can say
that once you make the changes, the transients are gone and the minis work
really well.

regards,

steven


On Sat, Feb 24, 2018 at 3:25 PM, Thomas Teisberg via USRP-users <
usrp-users@lists.ettus.com> wrote:

> I'm working on a radar project that requires sending a series of short
> pulses. Looking at the pulses on a scope, I'm seeing large voltage spikes
> at the beginning and the end of each transmission.
>
> As shown in the attached screenshot, before the transmission starts, there
> is a 90 mV spike lasting about 10 us. After the transmission, there's a -25
> mV spike lasting about 20 us.
>
> The setup for this is simply a USRP B205mini-i connected to a 50 ohm input
> on a scope through a 30 dB attenuator. The signal is at 435 MHz with the
> scope sampling at 20 Gsps.
>
> Our initial thought was that we must be seeing some effects of some of the
> RF frontend components turning on and off. I found this previous mailing
> list post and tried the suggested fix in b200_impl.cpp but nothing changed:
>
> http://lists.ettus.com/pipermail/usrp-users_lists.
> ettus.com/2015-January/012269.html
>
> Any ideas why we might be seeing these spikes or what we could do about
> it? Does the fix suggested in the above mailing list post still apply to
> the current codebase?
>
> Thanks,
> Thomas
>


-- 
K. Steven Knudsen, Ph.D., P.Eng.