This patch applies the newly introduced memalloc_noio_save() and
memalloc_noio_restore() to force memory allocations to proceed
without I/O during the runtime_resume callback.

Cc: Alan Stern <st...@rowland.harvard.edu>
Cc: Oliver Neukum <oneu...@suse.de>
Cc: Rafael J. Wysocki <r...@sisk.pl>
Signed-off-by: Ming Lei <ming....@canonical.com>
---
 drivers/base/power/runtime.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 3148b10..c71a8f0 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -503,6 +503,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
        int (*callback)(struct device *);
        struct device *parent = NULL;
        int retval = 0;
+       unsigned int noio_flag;
 
        trace_rpm_resume(dev, rpmflags);
 
@@ -652,7 +653,20 @@ static int rpm_resume(struct device *dev, int rpmflags)
        if (!callback && dev->driver && dev->driver->pm)
                callback = dev->driver->pm->runtime_resume;
 
+       /*
+        * A deadlock might occur if memory is allocated with GFP_KERNEL
+        * inside the runtime_resume callback of a block device or of
+        * one of its ancestors. The easiest approach is to forbid I/O
+        * inside runtime_resume for all devices.
+        *
+        * In fact, this is only needed if the device is a block device
+        * or has a block device descendant, but checking for that would
+        * be complicated and inefficient because it would involve
+        * traversing the device tree.
+        */
+       memalloc_noio_save(noio_flag);
        retval = rpm_callback(callback, dev);
+       memalloc_noio_restore(noio_flag);
        if (retval) {
                __update_runtime_status(dev, RPM_SUSPENDED);
                pm_runtime_cancel_pending(dev);
-- 
1.7.9.5
