> I don’t think that recv_async_msg() blocks. So an infinite loop will chew up
> CPU.
I had a separate thread that only called recv_async_msg(). I have unused cores on my machine, and my reasoning was that I’d rather have a CPU busy-looping than risk Ls on my transmit because the hot-loop Tx thread had to handle a bunch of async messages in a row and cause cascading issues.

I dug into the recv_async_msg() implementation a little. There are a few implementations and I’m not that familiar with the UHD software architecture, but the majority of them essentially wrap a call to bounded_buffer::pop_with_timed_wait:

    UHD_INLINE bool pop_with_timed_wait(elem_type &elem, double timeout)
    {
        boost::mutex::scoped_lock lock(_mutex);
        if (_buffer.empty())
        {
            if (not _empty_cond.timed_wait(lock, to_time_dur(timeout),
                _not_empty_fcn))
            {
                return false;
            }
        }
        this->pop_back(elem);
        _full_cond.notify_one();
        return true;
    }

It appears I was inadvertently adding contention on a mutex by putting this call in a separate thread. It also looks like this function is relatively lightweight as long as the timeout is 0, so I’m going to try experimenting with putting recv_async_msg() in my main Tx thread (a rough sketch is at the end of this message).

> I assume that redirection is not appropriate because your application uses
> stdout?

Correct. It doesn’t strictly need it, but I made the mistake of adding a debug feature that the users have become accustomed to, and now I have to maintain it…
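For reference, a minimal, untested sketch of what I mean by polling from the Tx thread. It assumes the newer uhd::tx_streamer::recv_async_msg() API (older code pulled async messages off the device object instead), and the names tx_stream, buffs, samps_per_buff, md, and keep_running are placeholders for whatever the real Tx loop already uses:

    #include <uhd/stream.hpp>
    #include <uhd/types/metadata.hpp>
    #include <atomic>
    #include <vector>

    void tx_loop(uhd::tx_streamer::sptr tx_stream,
                 const std::vector<const void*>& buffs,
                 size_t samps_per_buff,
                 uhd::tx_metadata_t& md,
                 std::atomic<bool>& keep_running)
    {
        uhd::async_metadata_t async_md;
        size_t num_underflows = 0;

        while (keep_running.load()) {
            tx_stream->send(buffs, samps_per_buff, md);

            // With timeout = 0.0, the underlying pop_with_timed_wait should
            // return almost immediately when the queue is empty, so draining
            // here only costs a mutex lock/unlock per call rather than a wait.
            while (tx_stream->recv_async_msg(async_md, 0.0)) {
                switch (async_md.event_code) {
                    case uhd::async_metadata_t::EVENT_CODE_UNDERFLOW:
                        ++num_underflows; // count here, report elsewhere
                        break;
                    default:
                        break; // burst ACKs, seq/time errors, etc.
                }
            }
        }
    }

The idea is just to drain whatever accumulated during the last send() call; any expensive handling (printing, logging) would still be deferred out of the hot loop.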