This is a note to let you know that I've just added the patch titled

    rbd: rbd workqueues need a resque worker

to the 3.17-stable tree which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     rbd-rbd-workqueues-need-a-resque-worker.patch
and it can be found in the queue-3.17 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.


From 792c3a914910bd34302c5345578f85cfcb5e2c01 Mon Sep 17 00:00:00 2001
From: Ilya Dryomov <[email protected]>
Date: Fri, 10 Oct 2014 18:36:07 +0400
Subject: rbd: rbd workqueues need a resque worker

From: Ilya Dryomov <[email protected]>

commit 792c3a914910bd34302c5345578f85cfcb5e2c01 upstream.

Need to use WQ_MEM_RECLAIM for our workqueues to prevent I/O lockups
under memory pressure - we sit on the memory reclaim path.

Signed-off-by: Ilya Dryomov <[email protected]>
Tested-by: Micha Krause <[email protected]>
Reviewed-by: Sage Weil <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 drivers/block/rbd.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -5087,7 +5087,8 @@ static int rbd_dev_device_setup(struct r
        set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);
        set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only);
 
-       rbd_dev->rq_wq = alloc_workqueue("%s", 0, 0, rbd_dev->disk->disk_name);
+       rbd_dev->rq_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
+                                        rbd_dev->disk->disk_name);
        if (!rbd_dev->rq_wq) {
                ret = -ENOMEM;
                goto err_out_mapping;

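(Editorial note, not part of the patch: the one-line change above relies on a workqueue API guarantee. A workqueue created with the WQ_MEM_RECLAIM flag gets a dedicated "rescuer" kernel thread at allocation time. Under memory pressure the kernel may be unable to fork new worker threads, but the pre-allocated rescuer can still process queued work items, so a workqueue that sits on the memory-reclaim path - as rbd's request workqueue does - cannot deadlock waiting for a worker. A minimal kernel-module sketch of the pattern, with hypothetical names (`example_wq`, `example_init`), looks like this; it is illustrative only and not compilable outside a kernel build tree.)

```c
/*
 * Illustrative sketch only -- not part of the patch above.
 * Kernel-module context; names are hypothetical.
 */
#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int example_init(void)
{
	/*
	 * WQ_MEM_RECLAIM pre-allocates a rescuer thread, so work
	 * queued here makes forward progress even when worker
	 * creation fails under memory pressure.  The 0 for
	 * max_active requests the default concurrency limit.
	 */
	example_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, "example-wq");
	if (!example_wq)
		return -ENOMEM;
	return 0;
}
```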

Patches currently in stable-queue which might be from [email protected] are

queue-3.17/rbd-rbd-workqueues-need-a-resque-worker.patch
queue-3.17/libceph-ceph-msgr-workqueue-needs-a-resque-worker.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html