Since 4M GFP_NOIO is out of reach, we can just pre-allocate the required
memory.
In the current iteration all reads are processed from one work item. It
seems that process_compressed_read() uses the buffer only within the
scope of its own function, without submitting any io with it. Therefore
there is no concurrency.
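The pre-allocation idea above can be sketched roughly as follows. This is a hedged illustration, not the actual dm_qcow2 code: the struct and function names (qcow2_tgt, qcow2_tgt_init, and so on) are hypothetical, and plain malloc() stands in for the vmalloc()/kvmalloc() call the driver would make at target-construction time, where GFP_NOIO restrictions do not apply:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-target state; in the driver this would live in the
 * dm target's private data and be allocated in the .ctr hook, outside
 * the I/O path. Plain malloc() stands in for vmalloc() here. */
struct qcow2_tgt {
	size_t buf_size;   /* clu_size + decompressor workspace */
	void  *decomp_buf; /* pre-allocated once, reused per read */
};

static int qcow2_tgt_init(struct qcow2_tgt *tgt, size_t buf_size)
{
	tgt->buf_size = buf_size;
	tgt->decomp_buf = malloc(buf_size);
	return tgt->decomp_buf ? 0 : -1;
}

/* All compressed reads are processed from one work item, so the single
 * buffer can be reused without locking: no allocation on the I/O path. */
static void *process_compressed_read(struct qcow2_tgt *tgt)
{
	memset(tgt->decomp_buf, 0, tgt->buf_size); /* decompress into it */
	return tgt->decomp_buf;
}

static void qcow2_tgt_fini(struct qcow2_tgt *tgt)
{
	free(tgt->decomp_buf);
	tgt->decomp_buf = NULL;
}
```

The point of the sketch is only that the buffer's lifetime moves from per-read to per-target; correctness rests on the single-work-item serialization noted above.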
On Mon, Nov 11, 2024 at 6:00 PM Pavel Tikhomirov
wrote:
> kvmalloc will not likely work.
In any case, use of kvmalloc in the data path would be a bug as well;
it is too slow to be taken seriously.
> I agree that 4M will likely cause problems after memory overcommit /
> long uptime with high memory
On 11/11/24 17:44, Alexey Kuznetsov wrote:
Hmm.
What are chances to get 4M with GFP_NOIO? I do not even need
to guess, I know - 0.
Damn, this is not legal code. Leave the warn in place, it must complain.
If we cannot redo this using normal sized buffers, make a pool, make
a queue to wait for the pool, or use kvmalloc, or whatever. But only not
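The "make a pool, make a queue to wait for the pool" alternative could look roughly like the user-space sketch below. All names here (buf_pool, pool_get, POOL_BUFS) are hypothetical; in the kernel one would rather reuse the existing mempool API (mempool_create()/mempool_alloc()), where an allocation sleeps until an element is returned instead of failing:

```c
#include <stdlib.h>

#define POOL_BUFS 4  /* assumption: a small fixed number of buffers */

struct buf_pool {
	void  *bufs[POOL_BUFS];
	int    free_top;   /* number of free buffers on the stack */
	size_t buf_size;
};

static int pool_init(struct buf_pool *p, size_t buf_size)
{
	p->buf_size = buf_size;
	p->free_top = 0;
	for (int i = 0; i < POOL_BUFS; i++) {
		p->bufs[i] = malloc(buf_size);
		if (!p->bufs[i])
			return -1;
		p->free_top++;
	}
	return 0;
}

/* Take a buffer; NULL here means "wait on the queue" in the real
 * driver (mempool_alloc() would sleep rather than fail). */
static void *pool_get(struct buf_pool *p)
{
	return p->free_top > 0 ? p->bufs[--p->free_top] : NULL;
}

static void pool_put(struct buf_pool *p, void *buf)
{
	p->bufs[p->free_top++] = buf;
}
```

The design point is bounding memory to POOL_BUFS large buffers allocated up front, so the I/O path never asks the page allocator for a high-order block.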
Hello,
On 11.11.24 9:37, Pavel Tikhomirov wrote:
Disable the warning:
kernel: order 10 >= 10, gfp 0x40c00
kernel: WARNING: CPU: 5 PID: 182 at mm/page_alloc.c:5630
__alloc_pages+0x1d7/0x3f0
kernel: process_compressed_read+0x6f/0x590 [dm_qcow2]
With 1M clusters, in case of zstd compression, the buffer size
(clu_size + sizeof(ZSTD_DCtx) + ZS