On Tue, Nov 27, 2012 at 8:08 AM, Torsten Kaiser
<just.for.l...@googlemail.com> wrote:
> On Tue, Nov 27, 2012 at 2:05 AM, NeilBrown <ne...@suse.de> wrote:
>> Can you test to see if this fixes it?
>
> Patch applied, I will try to get it stuck again.
> I don't have a reliable reproducer, but if the problem persists I
> will definitely report back here.

With this patch I was not able to recreate the hang. Lacking a 100%
reliable way of recreating it, I can't be completely sure of the fix,
but since you worked out from the code how this hang could happen, I'm
quite confident that the fix is working.

(As I do not use the raid10 personality, patching only raid1.c was
sufficient for me; I didn't test the version that also patches
raid10.c, as it's not even compiled in my kernel.)

Thanks for the fix!

Torsten

>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>> index 636bae0..a0f7309 100644
>> --- a/drivers/md/raid1.c
>> +++ b/drivers/md/raid1.c
>> @@ -963,7 +963,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
>>         struct r1conf *conf = mddev->private;
>>         struct bio *bio;
>>
>> -       if (from_schedule) {
>> +       if (from_schedule || current->bio_list) {
>>                 spin_lock_irq(&conf->device_lock);
>>                 bio_list_merge(&conf->pending_bio_list, &plug->pending);
>>                 conf->pending_count += plug->pending_cnt;
>>