Lukas Czerner <lczer...@redhat.com> writes:

> Currently there is no limit on the number of requests on the loop bio
> list. This can lead to some nasty situations when the caller spawns
> tons of bio requests taking a huge amount of memory. This is even more
> obvious with discard, where blkdev_issue_discard() will submit all bios
> for the range and wait for them to finish afterwards. On really big loop
> devices this can lead to an OOM situation, as reported by Dave Chinner.
>
> With this patch we will wait in loop_make_request() if the number of
> bios in the loop bio list would exceed 'nr_requests' number of requests.
> We wake up the process as we process the bios from the list.
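[For readers unfamiliar with the patch: the throttle it describes can be sketched as below. This is a simplified single-threaded model with made-up names — the real code would use a kernel wait queue (wait_event()/wake_up()) rather than a "may I queue?" predicate.]

```c
#include <assert.h>

/* Simplified model of the proposed loop throttle. 'nr_bios' stands in
 * for the length of the loop device's bio list, 'nr_requests' for the
 * queue's request limit. All names here are illustrative. */
struct loop_queue {
	unsigned int nr_bios;     /* bios currently on the loop bio list */
	unsigned int nr_requests; /* limit, analogous to q->nr_requests */
};

/* Returns 1 if a submitter may add a bio now, 0 if it would have to
 * sleep (the patch would block it in loop_make_request()). */
static int loop_may_queue(const struct loop_queue *lq)
{
	return lq->nr_bios < lq->nr_requests;
}

static void loop_add_bio(struct loop_queue *lq)
{
	lq->nr_bios++;
}

/* The loop thread processing a bio frees a slot; the real patch would
 * wake any sleeping submitters at this point. */
static void loop_complete_bio(struct loop_queue *lq)
{
	lq->nr_bios--;
}
```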
I think you might want to do something similar to what is done for
request_queues by implementing congestion on and off thresholds. As Jens
writes in this commit (predating the conversion to git):

    Author: Jens Axboe <ax...@suse.de>
    Date:   Wed Nov 3 15:47:37 2004 -0800

        [PATCH] queue congestion threshold hysteresis

        We need to open the gap between congestion on/off a little bit,
        or we risk burning many cycles continually putting processes on
        a wait queue only to wake them up again immediately. This was
        observed with CFQ at least, which showed way excessive sys time.

        Patch is from Arjan.

        Signed-off-by: Jens Axboe <ax...@suse.de>
        Signed-off-by: Linus Torvalds <torva...@osdl.org>

If you feel this isn't necessary, then I think you at least need to
justify that with testing. Perhaps Jens can shed some light on the exact
workload that triggered the pathological behaviour.

Cheers,
Jeff
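[Editorial illustration: the hysteresis Jens describes separates the threshold at which a queue becomes congested from the lower threshold at which it stops being congested, so a process is not put to sleep and woken again within a few requests. The threshold formulas below are made up for illustration and are not the block layer's actual values.]

```c
#include <assert.h>

/* Illustrative congestion hysteresis. Between off_threshold and
 * on_threshold the state is sticky: a congested queue stays congested
 * until it drains below off_threshold, and an uncongested queue stays
 * uncongested until it fills to on_threshold. */
struct congestion {
	unsigned int limit; /* e.g. nr_requests */
	int congested;      /* current state */
};

static unsigned int on_threshold(const struct congestion *c)
{
	return c->limit;                /* congest when the limit is hit */
}

static unsigned int off_threshold(const struct congestion *c)
{
	return c->limit - c->limit / 8; /* uncongest only after draining */
}

/* Update the state for the current list depth; returns 1 while the
 * queue is considered congested (i.e. submitters should sleep). */
static int congestion_update(struct congestion *c, unsigned int depth)
{
	if (depth >= on_threshold(c))
		c->congested = 1;
	else if (depth < off_threshold(c))
		c->congested = 0;
	/* between the thresholds: keep the previous state (the gap) */
	return c->congested;
}
```

With limit = 128 the gap is 112..127: a queue that just went congested at 128 stays congested at depth 120, while one that drained to 100 and came back to 120 does not re-congest, which is exactly the wakeup churn the hysteresis avoids.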