[lttng-dev] [PATCH urcu 2/4] Fix: don't wait after completion of job batch if work queue is empty
On completion of a batch of jobs from the work queue, a 10 ms wait (using poll()) is performed if there is no work present in the work queue before waiting on its futex.

The work queue thread's structure is inspired by the call-rcu thread. In the context of the call-rcu thread, my understanding is that the intention is to ensure that the thread does not continuously wake up to process a single queued item. This is fine, as an application should not wait for a call-rcu job to be executed (or at least I don't see a use-case for that).

In the context of the work queue, waiting for more work to become available artificially slows down the execution of work on which an application may wait. I have observed a case where LTTng's session daemon's shutdown takes around 4 seconds as a large number of cds_lfht objects are destroyed. Removing the wait reduces the duration of this phase of the shutdown to roughly 10 ms.

Signed-off-by: Jérémie Galarneau
---
 src/workqueue.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/workqueue.c b/src/workqueue.c
index 8561a7a..d609006 100644
--- a/src/workqueue.c
+++ b/src/workqueue.c
@@ -240,7 +240,6 @@ static void *workqueue_thread(void *arg)
 		if (cds_wfcq_empty(&workqueue->cbs_head,
 				&workqueue->cbs_tail)) {
 			futex_wait(&workqueue->futex);
-			(void) poll(NULL, 0, 10);
 			uatomic_dec(&workqueue->futex);
 			/*
 			 * Decrement futex before reading
--
2.19.2
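For context, the sketch below shows the kind of teardown pattern that runs into this wait: a program destroying many RCU lock-free hash tables back to back, as the session daemon does at shutdown. It is illustrative only; the table count is arbitrary, the timing code is not taken from the LTTng sources, and building it assumes linking against the userspace-RCU libraries (e.g. -lurcu -lurcu-cds).

/*
 * Illustrative sketch: each cds_lfht_destroy() waits on work handled by
 * the shared worker thread (the mechanism blamed for the slow shutdown
 * above), so an extra delay per batch multiplies across hundreds of
 * tables torn down in a row.
 */
#include <stdio.h>
#include <time.h>
#include <urcu.h>		/* default RCU flavour */
#include <urcu/rculfhash.h>

#define NR_TABLES	400	/* arbitrary count, for illustration */

int main(void)
{
	struct cds_lfht *tables[NR_TABLES];
	struct timespec start, end;
	double elapsed;
	int i;

	rcu_register_thread();

	for (i = 0; i < NR_TABLES; i++)
		tables[i] = cds_lfht_new(1, 1, 0,
				CDS_LFHT_AUTO_RESIZE | CDS_LFHT_ACCOUNTING,
				NULL);

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < NR_TABLES; i++)
		(void) cds_lfht_destroy(tables[i], NULL);
	clock_gettime(CLOCK_MONOTONIC, &end);

	elapsed = (end.tv_sec - start.tv_sec)
		+ (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("destroyed %d tables in %.3f s\n", NR_TABLES, elapsed);

	rcu_unregister_thread();
	return 0;
}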
[lttng-dev] [PATCH urcu 3/4] [RFC] Fix: don't wait after completion of a work queue job batch
As indicated in the previous patch of this series, waiting after the completion of a job batch from the work queue artificially increases the latency of the work queue.

The previous patch removed the wait performed when the work queue is found to be empty, which was observed to cause a performance problem. It is likely that waiting when the queue is observed to be non-empty is similarly unintended. Note that I have not observed such a problem myself.

Signed-off-by: Jérémie Galarneau
---
 src/workqueue.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/src/workqueue.c b/src/workqueue.c
index d609006..6822314 100644
--- a/src/workqueue.c
+++ b/src/workqueue.c
@@ -246,8 +246,6 @@ static void *workqueue_thread(void *arg)
 			 * call_rcu list.
 			 */
 			cmm_smp_mb();
-		} else {
-			(void) poll(NULL, 0, 10);
 		}
 	} else {
 		(void) poll(NULL, 0, 10);
--
2.19.2
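To make the combined effect of patch 2/4 and this patch easier to follow, here is a stubbed paraphrase of the non-real-time wait path before and after the series. It is not the liburcu source: the queue, futex and memory-barrier helpers are replaced by stand-ins so that only the shape of the decision remains.

/*
 * Paraphrase of the non-RT branch of workqueue_thread()'s wait logic.
 * All helpers are stubs; only the control flow is meaningful.
 */
#include <poll.h>
#include <stdbool.h>
#include <stdint.h>

struct fake_workqueue {
	int32_t futex;
	bool has_work;	/* stand-in for !cds_wfcq_empty(...) */
};

static void fake_futex_wait(int32_t *futex) { (void) futex; }	/* stub */
static void fake_futex_dec(int32_t *futex) { (*futex)--; }	/* stub */

/* Before the series: a 10 ms nap at every batch boundary. */
static void wait_for_work_before(struct fake_workqueue *wq)
{
	if (!wq->has_work) {
		fake_futex_wait(&wq->futex);
		(void) poll(NULL, 0, 10);	/* removed by patch 2/4 */
		fake_futex_dec(&wq->futex);
	} else {
		(void) poll(NULL, 0, 10);	/* removed by this patch */
	}
}

/* After the series: block only when idle, never delay pending work. */
static void wait_for_work_after(struct fake_workqueue *wq)
{
	if (!wq->has_work) {
		fake_futex_wait(&wq->futex);
		fake_futex_dec(&wq->futex);
	}
}

int main(void)
{
	struct fake_workqueue wq = { .futex = 0, .has_work = true };

	wait_for_work_before(&wq);
	wait_for_work_after(&wq);
	return 0;
}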
[lttng-dev] [PATCH urcu 4/4] [RFC] Fix: only wait if work queue is empty in real-time mode
Unconditionally waiting for 10 ms after the completion of every batch of jobs of the work queue in real-time mode appears to be a behaviour inherited from the call-rcu thread. While this is a fair trade-off in the context of call-rcu, it is less evident that it is desirable in the context of a general-purpose work queue. Waiting when work is available artificially degrades the latency characteristics of the work queue.

There may be other trade-offs that I am not aware of. Hence, this is submitted as "RFC".

Signed-off-by: Jérémie Galarneau
---
 src/workqueue.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/src/workqueue.c b/src/workqueue.c
index 6822314..d81e381 100644
--- a/src/workqueue.c
+++ b/src/workqueue.c
@@ -248,7 +248,10 @@ static void *workqueue_thread(void *arg)
 			cmm_smp_mb();
 		}
 	} else {
-		(void) poll(NULL, 0, 10);
+		if (cds_wfcq_empty(&workqueue->cbs_head,
+				&workqueue->cbs_tail)) {
+			(void) poll(NULL, 0, 10);
+		}
 	}
 	if (workqueue->worker_after_wake_up_fct)
 		workqueue->worker_after_wake_up_fct(workqueue, workqueue->priv);
--
2.19.2
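A similarly stubbed sketch of the real-time branch, before and after this patch, is shown below. It is not the liburcu source. The point it tries to capture: in real-time mode the worker never blocks on the futex, so once the nap is made conditional an idle worker still sleeps at most 10 ms per iteration, while a worker with pending jobs resumes dispatching immediately.

/*
 * Paraphrase of the RT branch only; the emptiness flag is a stand-in
 * for cds_wfcq_empty() on the work queue.
 */
#include <poll.h>
#include <stdbool.h>

static bool queue_empty;	/* stand-in for cds_wfcq_empty(...) */

/* Before this patch: nap even when jobs are already queued. */
static void rt_pause_before(void)
{
	(void) poll(NULL, 0, 10);
}

/* After this patch: nap only when idle, bounding idle wake-up latency. */
static void rt_pause_after(void)
{
	if (queue_empty)
		(void) poll(NULL, 0, 10);
}

int main(void)
{
	queue_empty = false;	/* jobs pending */
	rt_pause_before();	/* still costs ~10 ms */
	rt_pause_after();	/* returns immediately */
	return 0;
}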
[lttng-dev] [PATCH urcu 1/4] Fix: mixup between URCU_WORKQUEUE_RT and URCU_CALL_RCU_RT
The work queue implementation is derived from the call-rcu thread. A number of references seem to have been left in place when adapting the code for its new purpose.

The URCU_CALL_RCU_RT flag is used by wake_worker_thread() while the rest of the workqueue.c code uses URCU_WORKQUEUE_RT to determine if the work queue was configured in real-time mode. Both flags are defined to the same value (0x1) and the current internal user of the work queue (lfht) never specifies any flags. In practice, this does not cause any problem, so I doubt this patch needs to be backported. Feel free to rename "Fix" to "Clean-up".

There are a number of comments referencing call-rcu which may need to be adapted. Changing them is left as an exercise to the maintainer as I don't know if they apply to the workqueue thread.

Signed-off-by: Jérémie Galarneau
---
 src/workqueue.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/workqueue.c b/src/workqueue.c
index 17ea835..8561a7a 100644
--- a/src/workqueue.c
+++ b/src/workqueue.c
@@ -309,7 +309,7 @@ struct urcu_workqueue *urcu_workqueue_create(unsigned long flags,
 
 static void wake_worker_thread(struct urcu_workqueue *workqueue)
 {
-	if (!(_CMM_LOAD_SHARED(workqueue->flags) & URCU_CALL_RCU_RT))
+	if (!(_CMM_LOAD_SHARED(workqueue->flags) & URCU_WORKQUEUE_RT))
 		futex_wake_up(&workqueue->futex);
 }
 
--
2.19.2
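The coincidence described above can be shown with a small standalone program. The defines below are simplified stand-ins using the 0x1 value quoted in this commit message; they are not the real liburcu headers, and would_wake_futex() merely mirrors the test performed by wake_worker_thread().

/*
 * Both flags currently share the value 0x1, so testing the wrong one
 * happens to select the intended behaviour. The assertions below only
 * hold while that coincidence lasts.
 */
#include <assert.h>
#include <stdio.h>

#define URCU_CALL_RCU_RT	0x1	/* stand-in for the call-rcu flag */
#define URCU_WORKQUEUE_RT	0x1	/* stand-in for the workqueue flag */

/* Mirrors wake_worker_thread(): wake the futex only in non-RT mode. */
static int would_wake_futex(unsigned long flags, unsigned long rt_flag)
{
	return !(flags & rt_flag);
}

int main(void)
{
	/* The lfht user passes no flags, so both tests trivially agree... */
	assert(would_wake_futex(0x0, URCU_CALL_RCU_RT) ==
			would_wake_futex(0x0, URCU_WORKQUEUE_RT));

	/* ...and even for an RT work queue they agree, but only because
	 * the two constants happen to have the same value today. */
	assert(would_wake_futex(URCU_WORKQUEUE_RT, URCU_CALL_RCU_RT) ==
			would_wake_futex(URCU_WORKQUEUE_RT, URCU_WORKQUEUE_RT));

	printf("the two flag tests agree only while the values coincide\n");
	return 0;
}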