:0.295955434 sec time_interval:295955434)
> - (invoke count:1000 tsc_interval:1065447105)
>
> Before:
> - Per elem: 110 cycles(tsc) 30.633 ns (step:64)
>
> Signed-off-by: Jesper Dangaard Brouer
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm
> Signed-off-by: Jesper Dangaard Brouer
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
> ---
> mm/page_alloc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index be1e33a4df39..1ec18121268b 100644
all users of the bulk API to allocate and manage enough
> storage to store the pages.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
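To make the contract concrete, a caller of such a bulk API would look roughly
like this (untested sketch; the alloc_pages_bulk_array() name and its return
convention are assumptions based on the description above, not taken from this
patch):

#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Untested caller sketch: the caller owns the page-pointer storage. */
static int demo_bulk_alloc(void)
{
	struct page *pages[16] = { NULL };
	unsigned long nr;

	/* May return fewer pages than requested; that is not an error. */
	nr = alloc_pages_bulk_array(GFP_KERNEL, ARRAY_SIZE(pages), pages);
	if (!nr)
		return -ENOMEM;

	/* ... use pages[0..nr-1] ... */

	while (nr--)
		__free_page(pages[nr]);
	return 0;
}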
> The implementation is not very efficient and could be improved
> but it would require refactoring. The intent is to make it available early
> to determine what semantics are required by different callers. Once the
> full semantics are nailed down, it can be refactored.
>
> Signed-off-by: Mel Gorman
> tends to use the fake word "malloced" instead of the fake word mallocated.
> To be consistent, this preparation patch renames alloced to allocated
> in rmqueue_bulk so the bulk allocator and per-cpu allocator use similar
> names when the bulk allocator is introduced.
>
On 3/31/21 2:11 PM, Vlastimil Babka wrote:
> On 3/31/21 7:44 AM, Andrew Morton wrote:
>> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.ker...@gmail.com wrote:
>>
>>> From: jun qian
>>>
>>> In our project, many business delays come from fork, so
>>
On 3/31/21 7:44 AM, Andrew Morton wrote:
> On Mon, 29 Mar 2021 20:36:35 +0800 qianjun.ker...@gmail.com wrote:
>
>> From: jun qian
>>
>> In our project, many business delays come from fork, so
>> we started looking for the reason why fork is time-consuming.
>> I used ftrace with the function_graph
can be refactored.
>
> Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
Although maybe that's premature; if it changes significantly due to the users'
performance feedback, let's see :)
Some nits below:
...
> @@ -4963,6 +4978,107 @@ static inline bool prep
On 3/12/21 4:43 PM, Mel Gorman wrote:
> __alloc_pages updates GFP flags to enforce what flags are allowed
> during a global context such as booting or suspend. This patch moves the
> enforcement from __alloc_pages to prepare_alloc_pages so the code can be
> shared between the single page allocator and an upcoming bulk page allocator.
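The enforcement itself is just a mask applied once in the shared preparation
path; conceptually it is something like this (sketch, not the exact kernel
code):

/* gfp_allowed_mask is narrowed during early boot and suspend so that e.g.
 * __GFP_FS/__GFP_IO cannot be used; applying it once in the shared
 * preparation path covers both the single-page and bulk entry points. */
static gfp_t apply_allowed_mask(gfp_t gfp)
{
	return gfp & gfp_allowed_mask;
}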
On 2/2/21 12:36 PM, Ioana Ciornei wrote:
> On Sun, Jan 31, 2021 at 03:44:23PM +0800, Kevin Hao wrote:
>> In the current implementation of page_frag_alloc(), it doesn't have
>> any alignment guarantee for the returned buffer address. But some
>> hardware does require the DMA buffer to be aligned
Reported-by: syzbot+d0bd96b4696c1ef67...@syzkaller.appspotmail.com
> Fixes: dde3c6b72a16 ("mm/slub: fix a memory leak in sysfs_slab_add()")
> Signed-off-by: Wang Hai
Cc:
Acked-by: Vlastimil Babka
Double-free is worse than a rare small memory leak, which would still be nice to fix.
> are used in a network driver for the TX/RX. So introduce
> page_frag_alloc_align() to make sure that an aligned buffer address is
> returned.
>
> Signed-off-by: Kevin Hao
Acked-by: Vlastimil Babka
Agree with Jakub about static inline.
> --- a/mm/page_alloc.c
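For the record, the guarantee being asked for can be pictured with a trivial
wrapper (hypothetical, untested illustration only; the actual patch instead
threads an align mask into page_frag_alloc() itself to avoid the waste):

#include <linux/kernel.h>
#include <linux/gfp.h>
#include <linux/mm_types.h>

/* Hypothetical illustration: over-allocate, then round the pointer up.
 * align must be a power of two. The real helper aligns the cache offset
 * internally instead of over-allocating. */
static inline void *frag_alloc_aligned(struct page_frag_cache *nc,
				       unsigned int fragsz, gfp_t gfp_mask,
				       unsigned int align)
{
	void *p = page_frag_alloc(nc, fragsz + align - 1, gfp_mask);

	return p ? PTR_ALIGN(p, align) : NULL;
}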
On 11/5/20 1:05 PM, Matthew Wilcox wrote:
On Thu, Nov 05, 2020 at 12:56:43PM +0100, Vlastimil Babka wrote:
> +++ b/mm/page_alloc.c
> @@ -5139,6 +5139,10 @@ void *page_frag_alloc(struct page_frag_cache *nc,
> if (!page_ref_sub_and_test(page, nc->
On 11/5/20 5:21 AM, Matthew Wilcox (Oracle) wrote:
When the machine is under extreme memory pressure, the page_frag allocator
signals this to the networking stack by marking allocations with the
'pfmemalloc' flag, which causes non-essential packets to be dropped.
Unfortunately, even after the machine
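The direction of the fix can be sketched as follows (made-up helper name, for
illustration; the real change frees the tainted page and jumps back to the
refill path inside page_frag_alloc()):

#include <linux/mm_types.h>
#include <linux/compiler.h>

/* A page handed out from reserves leaves nc->pfmemalloc set; keeping it
 * cached would drop packets forever, so prefer a fresh page once reclaim
 * has caught up. */
static inline bool frag_cache_tainted(const struct page_frag_cache *nc)
{
	return unlikely(nc->pfmemalloc);
}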
users who use
regular krealloc() to reallocate arrays. Let's provide an actual
krealloc_array() implementation.
Signed-off-by: Bartosz Golaszewski
Makes sense.
Acked-by: Vlastimil Babka
---
include/linux/slab.h | 11 +++
1 file changed, 11 insertions(+)
diff --git a/include/li
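The helper amounts to an overflow-checked multiply in front of krealloc();
roughly:

#include <linux/overflow.h>
#include <linux/slab.h>

static inline void *krealloc_array(void *p, size_t new_n, size_t new_size,
				   gfp_t flags)
{
	size_t bytes;

	/* Refuse a silently wrapping n * size instead of under-allocating. */
	if (unlikely(check_mul_overflow(new_n, new_size, &bytes)))
		return NULL;

	return krealloc(p, bytes, flags);
}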
On 10/13/20 10:09 AM, Mike Rapoport wrote:
We are not complaining about TCP using too much memory, but how do
we know that TCP uses a lot of memory? When I first faced this problem,
I did not know what was using the 25GB of memory, and it was not shown in /proc/meminfo.
If we could know the amount of memory
On 04/19/2018 06:12 PM, Mikulas Patocka wrote:
> From: Mikulas Patocka
> Subject: [PATCH] kvmalloc: always use vmalloc if CONFIG_DEBUG_VM
>
> The kvmalloc function tries to use kmalloc and falls back to vmalloc if
> kmalloc fails.
>
> Unfortunately, some kernel code has bugs - it uses kvmalloc
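Condensed, the proposal is something like this (sketch; kvmalloc_debug is a
made-up name for illustration):

#include <linux/slab.h>
#include <linux/vmalloc.h>

/* With CONFIG_DEBUG_VM, always take the vmalloc path so that callers
 * which wrongly assume physically contiguous memory break early and
 * loudly instead of only when kmalloc happens to fail. */
static void *kvmalloc_debug(size_t size, gfp_t flags)
{
#ifndef CONFIG_DEBUG_VM
	void *p = kmalloc(size, flags | __GFP_NOWARN);

	if (p)
		return p;
#endif
	return vmalloc(size);
}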
On 12/12/2017 06:03 PM, syzbot wrote:
> Hello,
>
> syzkaller hit the following crash on
> 82bcf1def3b5f1251177ad47c44f7e17af039b4b
> git://git.cmpxchg.org/linux-mmots.git/master
> compiler: gcc (GCC) 7.1.1 20170620
> .config is attached
> Raw console output is attached.
> C reproducer is attached
On 04/05/2017 01:40 PM, Andrey Ryabinin wrote:
> On 04/05/2017 10:46 AM, Vlastimil Babka wrote:
>> The function __alloc_pages_direct_compact() sets PF_MEMALLOC to prevent
>> deadlock during page migration by lock_page() (see the comment in
>> __unmap_and_move()). Then it unconditionally
On 04/05/2017 01:36 PM, Richard Weinberger wrote:
> Michal,
>
> On 05.04.2017 at 13:31, Michal Hocko wrote:
>> On Wed 05-04-17 09:47:00, Vlastimil Babka wrote:
>>> Nandsim has its own functions set_memalloc() and clear_memalloc() for robust
>>> setting and clearin
xes
(without the new helpers, to make backporting easier). Patch 2 introduces the
new helpers, modelled after existing memalloc_noio_* and memalloc_nofs_*
helpers, and converts mm core to use them. Patches 3 and 4 convert non-mm code.
Based on next-20170404.
Vlastimil Babka (4):
mm: prevent potential
Nandsim has its own functions set_memalloc() and clear_memalloc() for robust
setting and clearing of PF_MEMALLOC. Replace them by the new generic helpers.
No functional change.
Signed-off-by: Vlastimil Babka
Cc: Boris Brezillon
Cc: Richard Weinberger
---
drivers/mtd/nand/nandsim.c | 29
where
PF_MEMALLOC is already set.
Fixes: a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling in slowpath")
Reported-by: Andrey Ryabinin
Signed-off-by: Vlastimil Babka
Cc:
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/
similar to the existing
memalloc_noio_* and memalloc_nofs_* helpers. Convert existing setting/clearing
of PF_MEMALLOC within mm to the new helpers.
There are no known issues with the converted code, but the change makes it more
robust.
Suggested-by: Michal Hocko
Signed-off-by: Vlastimil Babka
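Modelled on the existing memalloc_noio_save()/restore(), the new pair has this
shape (per the description above):

#include <linux/sched.h>

static inline unsigned int memalloc_noreclaim_save(void)
{
	unsigned int flags = current->flags & PF_MEMALLOC;

	current->flags |= PF_MEMALLOC;
	return flags;
}

static inline void memalloc_noreclaim_restore(unsigned int flags)
{
	/* Restore only the saved bit; don't clobber a pre-existing
	 * PF_MEMALLOC set by an outer scope. */
	current->flags = (current->flags & ~PF_MEMALLOC) | flags;
}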
We now have memalloc_noreclaim_{save,restore} helpers for robust setting and
clearing of PF_MEMALLOC. Let's convert the code which was using the generic
tsk_restore_flags(). No functional change.
Signed-off-by: Vlastimil Babka
Cc: Josef Bacik
Cc: Lee Duncan
Cc: Chris Leech
Cc:
allback to vmalloc esier now.
easier
Although it's somewhat of a euphemism when compared to "basically never" :)
Cc: Eric Dumazet
Cc: netdev@vger.kernel.org
Signed-off-by: Michal Hocko
Acked-by: Vlastimil Babka
---
net/core/dev.c | 24 +---
Acked-by: Dan Williams # nvdimm
Acked-by: David Sterba # btrfs
Acked-by: Ilya Dryomov # Ceph
Acked-by: Tariq Toukan # mlx4
Signed-off-by: Michal Hocko
Acked-by: Vlastimil Babka
On 01/24/2017 04:00 PM, Michal Hocko wrote:
> Well, I am not opposed to kvmalloc_array but I would argue that this
> conversion cannot introduce new overflow issues. The code would have
> to be broken already because even though kmalloc_array checks for the
> overflow, the vmalloc fallback doesn't
On 01/12/2017 06:37 PM, Michal Hocko wrote:
> On Thu 12-01-17 09:26:09, Kees Cook wrote:
>> On Thu, Jan 12, 2017 at 7:37 AM, Michal Hocko wrote:
> [...]
>>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>>> index 4f74511015b8..e6bbb33d2956 100644
>>> --- a/arch/s390/kvm/kvm-s390
On 01/04/2017 12:00 PM, Jesper Dangaard Brouer wrote:
>
> On Tue, 3 Jan 2017 17:07:49 +0100 Vlastimil Babka wrote:
>
>> On 12/20/2016 02:28 PM, Jesper Dangaard Brouer wrote:
>>> The focus in this patch is getting the API around page_pool figured out.
>>>
>
On 12/20/2016 02:28 PM, Jesper Dangaard Brouer wrote:
The focus in this patch is getting the API around page_pool figured out.
The internal data structures for returning page_pool pages are not optimal.
This implementation uses ptr_ring for recycling, which is known not to scale
in case of multiple
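For reference, the recycling scheme boils down to something like this (sketch
with made-up function names, using the generic ptr_ring API):

#include <linux/ptr_ring.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *pool_get_page(struct ptr_ring *ring, gfp_t gfp)
{
	/* Try the recycle ring first, fall back to the page allocator. */
	struct page *page = ptr_ring_consume(ring);

	return page ? page : alloc_page(gfp);
}

static void pool_put_page(struct ptr_ring *ring, struct page *page)
{
	/* Ring full: release the page back to the page allocator. */
	if (ptr_ring_produce(ring, page))
		put_page(page);
}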
On 09/28/2016 06:30 PM, David Laight wrote:
> From: Vlastimil Babka
>> Sent: 27 September 2016 12:51
> ...
>> Process name suggests it's part of db2 database. It seems it has to implement
>> its own interface to select() syscall, because glibc itself seems to have
On 09/27/2016 01:42 PM, Nicholas Piggin wrote:
On Tue, 27 Sep 2016 11:37:24 +
David Laight wrote:
From: Nicholas Piggin
> Sent: 27 September 2016 12:25
> On Tue, 27 Sep 2016 10:44:04 +0200
> Vlastimil Babka wrote:
>
>
> What's your customer doing with those selec
f fallback.
[eric.duma...@gmail.com: fix failure path logic]
[a...@linux-foundation.org: use proper type for size]
Signed-off-by: Vlastimil Babka
---
fs/select.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/fs/select.c b/fs/select.c
index 8ed9da50896a..3d4f85defeab 100644
On 09/23/2016 06:47 PM, Jason Baron wrote:
Hi,
On 09/23/2016 03:24 AM, Nicholas Piggin wrote:
On Fri, 23 Sep 2016 14:42:53 +0800
"Hillf Danton" wrote:
The select(2) syscall performs a kmalloc(size, GFP_KERNEL) where size grows
with the number of fds passed. We had a customer report page allocation
On 09/27/2016 03:38 AM, Eric Dumazet wrote:
On Mon, 2016-09-26 at 17:01 -0700, Andrew Morton wrote:
I don't share Eric's concerns about performance here. If the vmalloc()
is called, we're about to write to that quite large amount of memory
which we just allocated, and the vmalloc() overhead wi
On 09/27/2016 02:01 AM, Andrew Morton wrote:
On Thu, 22 Sep 2016 18:43:59 +0200 Vlastimil Babka wrote:
The select(2) syscall performs a kmalloc(size, GFP_KERNEL) where size grows
with the number of fds passed. We had a customer report page allocation
failures of order-4 for this allocation
On 09/23/2016 03:35 PM, David Laight wrote:
From: Vlastimil Babka
Sent: 23 September 2016 10:59
...
> I suspect that fdt->max_fds is an upper bound for the highest fd the
> process has open - not the RLIMIT_NOFILE value.
I gathered that the highest fd effectively limits the number
On 09/23/2016 11:42 AM, David Laight wrote:
> From: Vlastimil Babka
>> Sent: 22 September 2016 18:55
> ...
>> So in the case of select() it seems like the memory we need is 6 bits per file
>> descriptor, multiplied by the highest possible file descriptor (nfds) as
>
On 09/22/2016 07:07 PM, Eric Dumazet wrote:
On Thu, 2016-09-22 at 18:56 +0200, Vlastimil Babka wrote:
On 09/22/2016 06:49 PM, Eric Dumazet wrote:
> On Thu, 2016-09-22 at 18:43 +0200, Vlastimil Babka wrote:
>> The select(2) syscall performs a kmalloc(size, GFP_KERNEL) where size grows
On 09/22/2016 06:49 PM, Eric Dumazet wrote:
On Thu, 2016-09-22 at 18:43 +0200, Vlastimil Babka wrote:
The select(2) syscall performs a kmalloc(size, GFP_KERNEL) where size grows
with the number of fds passed. We had a customer report page allocation
failures of order-4 for this allocation. This
it doesn't need this kind of fallback.
[eric.duma...@gmail.com: fix failure path logic]
Signed-off-by: Vlastimil Babka
---
fs/select.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/select.c b/fs/select.c
index 8ed9da50896a..b99e98524fde 100644
--- a/fs/se
On 09/22/2016 06:24 PM, Eric Dumazet wrote:
+	bits = kmalloc(alloc_size, GFP_KERNEL|__GFP_NOWARN);
+	if (!bits && alloc_size > PAGE_SIZE) {
+		bits = vmalloc(alloc_size);
+
+		if (!bits)
+			goto ou
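Worth noting about this pattern: the free side must then cope with either
allocator. The matching cleanup would be along these lines (sketch; stack_fds
is the on-stack buffer used by core_sys_select(), and kvfree() handles both
the kmalloc and vmalloc cases):

	if (bits != stack_fds)
		kvfree(bits);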
it doesn't need this kind of fallback.
Signed-off-by: Vlastimil Babka
---
fs/select.c | 15 +++
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/fs/select.c b/fs/select.c
index 8ed9da50896a..8fe5bddbe99b 100644
--- a/fs/select.c
+++ b/fs/select.c
@@ -29,6 +29
With merge window gone, ping?
On 07/27/2016 11:21 AM, Vlastimil Babka wrote:
> Hi,
>
> I was fuzzing with trinity in qemu via the virtme script, i.e. using
> these opts:
>
> virtme-run --kdir /home/vbabka/wrk/linus.git --rwdir
> /home/vbabka/tmp/trinity/ --dry-run --sho
Hi,
I was fuzzing with trinity in qemu via the virtme script, i.e. using
these opts:
virtme-run --kdir /home/vbabka/wrk/linus.git --rwdir
/home/vbabka/tmp/trinity/ --dry-run --show-command --qemu-opts --smp 8
-drive file=/home/vbabka/tmp/trinity.img,if=ide,format=raw,media=disk
/usr/bin/qemu
On 26.5.2016 21:21, Tejun Heo wrote:
> Hello,
>
> On Thu, May 26, 2016 at 11:19:06AM +0200, Vlastimil Babka wrote:
>>> if (is_atomic) {
>>> margin = 3;
>>>
>>> if (chunk->map_alloc <
>>>
patch fixes the bug by putting most of the non-atomic allocations
under pcpu_alloc_mutex to synchronize against pcpu_balance_work which
is responsible for async chunk management including destruction.
Signed-off-by: Tejun Heo
Reported-and-tested-by: Alexei Starovoitov
Reported-by: Vlastimil Babka
Rep
freeing the chunk while the work
item is still in flight.
This patch fixes the bug by rolling async map extension operations
into pcpu_balance_work.
Signed-off-by: Tejun Heo
Reported-and-tested-by: Alexei Starovoitov
Reported-by: Vlastimil Babka
Reported-by: Sasha Levin
Cc: sta...@vger.kernel.
[+CC Marco who reported the CVE, forgot that earlier]
On 05/23/2016 11:35 PM, Tejun Heo wrote:
Hello,
Can you please test whether this patch resolves the issue? While
adding support for atomic allocations, I reduced alloc_mutex covered
region too much.
Thanks.
Ugh, this makes the code even
On 05/23/2016 02:01 PM, Vlastimil Babka wrote:
>> if I read the report correctly it's not about bpf, but rather points to
>> the issue inside percpu logic.
>> First __alloc_percpu_gfp() is called, then the memory is freed with
>> free_percpu() which triggers asy
[+CC Christoph, linux-mm]
On 04/17/2016 07:29 PM, Alexei Starovoitov wrote:
> On Sun, Apr 17, 2016 at 12:58:21PM -0400, Sasha Levin wrote:
>> Hi all,
>>
>> I've hit the following while fuzzing with syzkaller inside a KVM tools guest
>> running the latest -next kernel:
>
> thanks for the report. A
5, 2015 at 1:25 PM, Alexander Duyck wrote:
On 10/05/2015 06:59 AM, Vlastimil Babka wrote:
On 10/02/2015 12:18 PM, Konstantin Khlebnikov wrote:
When openvswitch tries to allocate memory from offline numa node 0:
stats = kmem_cache_alloc_node(flow_stats_cache, GFP_KERNEL
On 10/02/2015 12:18 PM, Konstantin Khlebnikov wrote:
When openvswitch tries to allocate memory from offline numa node 0:
stats = kmem_cache_alloc_node(flow_stats_cache, GFP_KERNEL | __GFP_ZERO, 0)
It catches VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES || !node_online(nid))
[ replaced with VM_WARN_ON(!n
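One way out, as discussed here (sketch of the workaround; variable names
follow the quoted call site): don't insist on an offline node and let the
allocator pick one:

	int nid = 0;

	if (!node_online(nid))
		nid = NUMA_NO_NODE;	/* any node */

	stats = kmem_cache_alloc_node(flow_stats_cache,
				      GFP_KERNEL | __GFP_ZERO, nid);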
On 08/13/2015 04:40 PM, Eric Dumazet wrote:
On Thu, 2015-08-13 at 11:13 +0200, Vlastimil Babka wrote:
Given that this apparently isn't the first case of this localhost issue,
I wonder if network code should just clear skb->pfmemalloc during send
(or maybe just send over localhost). Th
On 08/13/2015 10:58 AM, mho...@kernel.org wrote:
From: Michal Hocko
The patch c48a11c7ad26 ("netvm: propagate page->pfmemalloc to skb")
added the checks for page->pfmemalloc to __skb_fill_page_desc():
if (page->pfmemalloc && !page->mapping)
	skb->pfmemalloc = true;
It
On 06/18/2015 04:43 PM, Michal Hocko wrote:
On Thu 18-06-15 07:35:53, Eric Dumazet wrote:
On Thu, Jun 18, 2015 at 7:30 AM, Michal Hocko wrote:
Abusing __GFP_NO_KSWAPD is a wrong way to go IMHO. It is true that the
_current_ implementation of the allocator has this nasty and very subtle
side e
: make the changelog clearer
Cc: Eric Dumazet
Cc: Chris Mason
Cc: Debabrata Banerjee
Signed-off-by: Shaohua Li
Acked-by: Vlastimil Babka
---
net/core/skbuff.c | 2 +-
net/core/sock.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/sk
On 06/11/2015 11:28 PM, Debabrata Banerjee wrote:
Resend in plaintext, thanks gmail:
It's somewhat of an intractable problem to know if compaction will succeed
without trying it,
There are heuristics, but those cannot be perfect by definition. I think
the worse problem here is the extra latency,
On 06/11/2015 11:35 PM, Debabrata Banerjee wrote:
There is no "background"; it doesn't matter if this activity happens
synchronously or asynchronously, unless you're sensitive to the
latency of that single operation. If you're driving all your CPUs and
memory hard then this is work that still tak