From: Shaibal Dutta <shaibal.du...@broadcom.com>

Garbage collector work does not have to be bound to the CPU that scheduled it. By moving the work to the power-efficient workqueue, the selection of the CPU executing the work is left to the scheduler. This extends idle residency times and conserves power.
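(Illustrative sketch, not part of this patch: the pattern used here is simply to queue delayed work on system_power_efficient_wq instead of calling schedule_delayed_work(), which queues on the per-CPU system_wq and so runs the handler on the submitting CPU. The work item and handler names below are hypothetical.)

```c
#include <linux/workqueue.h>

static void my_gc_handler(struct work_struct *work);
static DECLARE_DELAYED_WORK(my_gc_work, my_gc_handler);

static void my_gc_handler(struct work_struct *work)
{
	/* ... do the periodic reclaim work ... */

	/*
	 * Re-arm the work item. schedule_delayed_work() would keep it
	 * on the local CPU's worker pool; queueing it on
	 * system_power_efficient_wq leaves CPU selection to the
	 * scheduler when CONFIG_WQ_POWER_EFFICIENT is set (otherwise
	 * the workqueue behaves like system_wq).
	 */
	queue_delayed_work(system_power_efficient_wq, &my_gc_work, 10 * HZ);
}
```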
This functionality is enabled when CONFIG_WQ_POWER_EFFICIENT is selected.

Cc: "David S. Miller" <da...@davemloft.net>
Cc: Alexey Kuznetsov <kuz...@ms2.inr.ac.ru>
Cc: James Morris <jmor...@namei.org>
Cc: Hideaki YOSHIFUJI <yoshf...@linux-ipv6.org>
Cc: Patrick McHardy <ka...@trash.net>
Signed-off-by: Shaibal Dutta <shaibal.du...@broadcom.com>
[zoran.marko...@linaro.org: Rebased to latest kernel version. Added commit
message. Fixed code alignment.]
Signed-off-by: Zoran Markovic <zoran.marko...@linaro.org>
---
 net/ipv4/inetpeer.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index 48f4244..7e3da6c6 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -161,7 +161,8 @@ static void inetpeer_gc_worker(struct work_struct *work)
 	list_splice(&list, &gc_list);
 	spin_unlock_bh(&gc_lock);
 
-	schedule_delayed_work(&gc_work, gc_delay);
+	queue_delayed_work(system_power_efficient_wq,
+			   &gc_work, gc_delay);
 }
 
 /* Called from ip_output.c:ip_init */
@@ -576,7 +577,8 @@ static void inetpeer_inval_rcu(struct rcu_head *head)
 	list_add_tail(&p->gc_list, &gc_list);
 	spin_unlock_bh(&gc_lock);
 
-	schedule_delayed_work(&gc_work, gc_delay);
+	queue_delayed_work(system_power_efficient_wq,
+			   &gc_work, gc_delay);
 }
 
 void inetpeer_invalidate_tree(struct inet_peer_base *base)
-- 
1.7.9.5