Since a 4M GFP_NOIO allocation is out of reach, we can just pre-allocate
the required memory.

In the current iteration all reads are processed from a single work
item, and process_compressed_read() seems to use the buffer only within
the scope of the function, without submitting any io with it. So there
is no concurrency, and allocating one buffer per target (kvmalloc would
probably do as well) in alloc_qcow2_target() should do the trick for
us; see the rough sketch below.
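
Something along these lines (untested sketch; cmpr_buf, the tgt pointer
and the free path are my assumptions, only process_compressed_read()
and alloc_qcow2_target() come from the driver):

	/*
	 * Untested sketch: allocate the decompression buffer once at
	 * target creation time, while GFP_KERNEL is still usable, and
	 * reuse it from the single work item.
	 */
	static int alloc_qcow2_target(struct dm_target *ti)
	{
		...
		/* not on the io path yet, so GFP_KERNEL is fine and a
		 * vmalloc fallback is acceptable for a ~2.5M buffer */
		tgt->cmpr_buf = kvmalloc(qcow2->clu_size + dctxlen, GFP_KERNEL);
		if (!tgt->cmpr_buf)
			goto err_free;
		...
	}

	static void process_compressed_read(struct qcow2 *qcow2,
					    struct list_head *read_list)
	{
		/* all compressed reads run from one work item, so the
		 * per-target buffer needs no locking */
		u8 *buf = qcow2->tgt->cmpr_buf;
		...
	}

with a matching kvfree(tgt->cmpr_buf) in the target destructor.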
On 11/11/24 08:37, Pavel Tikhomirov wrote:
Disable the warning:
kernel: order 10 >= 10, gfp 0x40c00
kernel: WARNING: CPU: 5 PID: 182 at mm/page_alloc.c:5630 __alloc_pages+0x1d7/0x3f0
kernel: process_compressed_read+0x6f/0x590 [dm_qcow2]
With 1M clusters and zstd compression the buffer size (clu_size +
sizeof(ZSTD_DCtx) + ZSTD_BLOCKSIZE_MAX + clu_size + ZSTD_BLOCKSIZE_MAX
+ 64 = 2520776) only fits into a 4M (10th-order) allocation.
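
Spelled out, assuming 4K pages and the in-kernel ZSTD_BLOCKSIZE_MAX of
128K (sizeof(ZSTD_DCtx) + 64 is then the remainder):

	2 * clu_size           = 2 * 1048576 = 2097152
	2 * ZSTD_BLOCKSIZE_MAX = 2 *  131072 =  262144
	sizeof(ZSTD_DCtx) + 64 =                161480
	                                       -------
	                                       2520776

2520776 bytes is more than order 9 (2M = 2097152) and at most order 10
(4M = 4194304), so get_order() returns 10 and the kmalloc() hits the
warning threshold.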
https://virtuozzo.atlassian.net/browse/VSTOR-94596
Signed-off-by: Pavel Tikhomirov <ptikhomi...@virtuozzo.com>
Feature: dm-qcow2: ZSTD decompression
---
drivers/md/dm-qcow2-map.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-qcow2-map.c b/drivers/md/dm-qcow2-map.c
index 6585f3fac6e7b..18f9442493831 100644
--- a/drivers/md/dm-qcow2-map.c
+++ b/drivers/md/dm-qcow2-map.c
@@ -3671,7 +3671,7 @@ static void process_compressed_read(struct qcow2 *qcow2, struct list_head *read_
 	dctxlen = zlib_inflate_workspacesize();
-	buf = kmalloc(qcow2->clu_size + dctxlen, GFP_NOIO);
+	buf = kmalloc(qcow2->clu_size + dctxlen, GFP_NOIO | __GFP_ORDER_NOWARN);
 	if (!buf) {
 		end_qios(read_list, BLK_STS_RESOURCE);
 		return;
_______________________________________________
Devel mailing list
Devel@openvz.org
https://lists.openvz.org/mailman/listinfo/devel