Hmm. What are the chances of getting a 4M allocation with GFP_NOIO? I do not even need to guess, I know: zero.
Damn, this is not legal code. Leave the warning in place, it must complain. If we cannot redo this using normal-sized buffers, make a pool, make a queue to wait for the pool, or use kvmalloc, or whatever. But not this.

On Mon, Nov 11, 2024 at 3:41 PM Pavel Tikhomirov <ptikhomi...@virtuozzo.com> wrote:
>
> Disable the warning:
>
> kernel: order 10 >= 10, gfp 0x40c00
> kernel: WARNING: CPU: 5 PID: 182 at mm/page_alloc.c:5630 __alloc_pages+0x1d7/0x3f0
> kernel: process_compressed_read+0x6f/0x590 [dm_qcow2]
>
> As with 1M clusters and zstd compression, the buffer size (clu_size +
> sizeof(ZSTD_DCtx) + ZSTD_BLOCKSIZE_MAX + clu_size + ZSTD_BLOCKSIZE_MAX +
> 64 = 2520776) can only fit into a 4M (10th-order) allocation.
>
> https://virtuozzo.atlassian.net/browse/VSTOR-94596
> Signed-off-by: Pavel Tikhomirov <ptikhomi...@virtuozzo.com>
>
> Feature: dm-qcow2: ZSTD decompression
> ---
>  drivers/md/dm-qcow2-map.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/md/dm-qcow2-map.c b/drivers/md/dm-qcow2-map.c
> index 6585f3fac6e7b..18f9442493831 100644
> --- a/drivers/md/dm-qcow2-map.c
> +++ b/drivers/md/dm-qcow2-map.c
> @@ -3671,7 +3671,7 @@ static void process_compressed_read(struct qcow2 *qcow2, struct list_head *read_
> 	dctxlen = zlib_inflate_workspacesize();
>
> -	buf = kmalloc(qcow2->clu_size + dctxlen, GFP_NOIO);
> +	buf = kmalloc(qcow2->clu_size + dctxlen, GFP_NOIO | __GFP_ORDER_NOWARN);
> 	if (!buf) {
> 		end_qios(read_list, BLK_STS_RESOURCE);
> 		return;
> --
> 2.47.0
>
> _______________________________________________
> Devel mailing list
> Devel@openvz.org
> https://lists.openvz.org/mailman/listinfo/devel
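For the record, a rough sketch of the kvmalloc route I mean (hypothetical and untested, against the same process_compressed_read() context as the patch). Note kvmalloc() does not accept GFP_NOIO in the gfp mask, so the NOIO scope API would have to wrap a GFP_KERNEL allocation:

	unsigned int noio_flags;
	void *buf;

	/* kvmalloc()'s vmalloc fallback cannot honor GFP_NOIO passed
	 * in the gfp mask; enter an NOIO scope instead and allocate
	 * with GFP_KERNEL. */
	noio_flags = memalloc_noio_save();
	buf = kvmalloc(qcow2->clu_size + dctxlen, GFP_KERNEL);
	memalloc_noio_restore(noio_flags);
	if (!buf) {
		end_qios(read_list, BLK_STS_RESOURCE);
		return;
	}
	...
	kvfree(buf);	/* not kfree(): buffer may be vmalloc-backed */

The decompression workspace is only accessed through virtual addresses, so it should not need physical contiguity; that is what makes the vmalloc fallback acceptable here.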