Over the last few days I've been playing a bit with RAM, forcing my
devices to use only 16 MiB of it, observing memory usage, CPU usage,
etc. All the fun.

At some point, while using a bcm47xx device with 32 MiB of RAM, I got an OOM:
perf invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0

I Googled a bit, and it appears that perf requested:
2^order = 2^0 = 1 page of memory (4096 B)
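
The gfp_mask can be decoded too. Here's a quick check I did (flag
values copied from a 3.10-era include/linux/gfp.h; I'm assuming they
match this kernel):

#include <stdio.h>

#define __GFP_HIGHMEM   0x02u
#define __GFP_MOVABLE   0x08u
#define __GFP_WAIT      0x10u
#define __GFP_IO        0x40u
#define __GFP_FS        0x80u
#define __GFP_COLD      0x100u
#define __GFP_HARDWALL  0x20000u

int main(void)
{
    /* GFP_HIGHUSER_MOVABLE = WAIT | IO | FS | HARDWALL | HIGHMEM | MOVABLE */
    unsigned int mask = __GFP_WAIT | __GFP_IO | __GFP_FS |
                        __GFP_HARDWALL | __GFP_HIGHMEM | __GFP_MOVABLE |
                        __GFP_COLD;
    int order = 0;

    printf("GFP_HIGHUSER_MOVABLE | __GFP_COLD = %#x\n", mask); /* 0x201da */
    printf("request: %d page(s), %d bytes\n",
           1 << order, (1 << order) * 4096);                   /* 1, 4096 */
    return 0;
}

So if I read it right, 0x201da is GFP_HIGHUSER_MOVABLE | __GFP_COLD,
i.e. an ordinary page cache allocation, which fits the filemap_fault
frame in the trace below.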

However, something went wrong, because the kernel couldn't provide this memory:
Normal free:4096kB min:4096kB low:5120kB high:6144kB

As you can see, my kernel was configured to reserve 4096 kB of memory
(min). As soon as the amount of free RAM dropped below low (5120 kB),
the kernel was supposed to reclaim memory until at least high
(6144 kB) was free again.
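
To make sure I understand the check itself, here is my simplified
model of it, loosely based on __zone_watermark_ok() in 3.10's
mm/page_alloc.c (my paraphrase, not the verbatim kernel code):

/* Returns 1 if an order-`order` allocation passes the given watermark. */
static int watermark_ok(long free_kb, long mark_kb, long reserve_kb,
                        int order)
{
    /* Pretend all but one page of the request is already gone;
     * for an order-0 request this subtracts nothing. */
    free_kb -= ((1 << order) - 1) * 4;

    /* Note the comparison is strict: free memory must be strictly
     * above the watermark plus the lowmem reserve. */
    return free_kb > mark_kb + reserve_kb;
}

If that model is right, then with the numbers from my log,
watermark_ok(4096, 4096, 0, 0) already returns 0: free == min does
not pass the min watermark.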
Yet it seems the kernel didn't reclaim anything. As far as I
understand the log, there was some memory that could have been
reclaimed:
slab_reclaimable:788kB
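
Although now that I do the math (counting only that reclaimable slab,
with the numbers taken straight from the log), reclaiming all of it
would still not have reached the low watermark:

    free 4096 kB + slab_reclaimable 788 kB = 4884 kB  (< low 5120 kB)

so maybe there simply wasn't enough to reclaim without any swap?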

Could someone give me a hint as to what went wrong, please? Why
didn't my kernel reclaim some memory after crossing the "warning"
level (below "low")?

I can't reproduce this issue right now, but I'd love to understand
what happened.

-- 
Rafał
[  214.790000] perf invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0
[  214.790000] CPU: 0 PID: 887 Comm: perf Tainted: P             3.10.28 #2
[  214.800000] Stack : 00000006 00000000 00000000 00000000 00000000 00000000 8033d1de 0000003c
          81aa06d8 80277d98 8031d150 802c6f1b 00000377 000201da 00000000 00000000
          802c6ff0 8001dfd8 04800800 8001b890 00000000 00000000 8027a240 81675ad4
          81675a00 00000000 00000000 00000000 00000000 00000000 00000000 00000000
          00000000 00000000 00000000 00000000 00000000 00000000 00000000 81675a60
          ...
[  214.840000] Call Trace:
[  214.840000] [<80010fb8>] show_stack+0x48/0x70
[  214.850000] [<80068208>] dump_header.isra.16+0x4c/0x138
[  214.850000] [<80068558>] oom_kill_process+0xd4/0x3b0
[  214.860000] [<80068c98>] out_of_memory+0x298/0x2f4
[  214.860000] [<8006bd68>] __alloc_pages_nodemask+0x56c/0x650
[  214.870000] [<8006786c>] filemap_fault+0x2ac/0x408
[  214.870000] [<8007e380>] __do_fault+0xcc/0x4ac
[  214.880000] [<800813cc>] handle_pte_fault+0x330/0x6f4
[  214.880000] [<80081840>] handle_mm_fault+0xb0/0xdc
[  214.890000] [<80016580>] do_page_fault+0x114/0x42c
[  214.890000] [<80001420>] ret_from_exception+0x0/0x24
[  214.900000] 
[  214.900000] Mem-Info:
[  214.900000] Normal per-cpu:
[  214.900000] CPU    0: hi:    0, btch:   1 usd:   0
[  214.910000] active_anon:717 inactive_anon:4 isolated_anon:0
[  214.910000]  active_file:3 inactive_file:47 isolated_file:0
[  214.910000]  unevictable:0 dirty:0 writeback:0 unstable:0
[  214.910000]  free:1024 slab_reclaimable:197 slab_unreclaimable:866
[  214.910000]  mapped:131 shmem:12 pagetables:45 bounce:0
[  214.910000]  free_cma:0
[  214.940000] Normal free:4096kB min:4096kB low:5120kB high:6144kB active_anon:2868kB inactive_anon:16kB active_file:12kB inactive_file:188kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32768kB managed:29048kB mlocked:0kB dirty:0kB writeback:0kB mapped:524kB shmem:48kB slab_reclaimable:788kB slab_unreclaimable:3464kB kernel_stack:224kB pagetables:180kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[  214.980000] lowmem_reserve[]: 0 0
[  214.990000] Normal: 102*4kB (UE) 119*8kB (UEMR) 73*16kB (UEM) 24*32kB (UM) 0*64kB 0*128kB 1*256kB (R) 1*512kB (R) 0*1024kB 0*2048kB 0*4096kB = 4064kB
[  215.000000] 72 total pagecache pages
[  215.010000] 0 pages in swap cache
[  215.010000] Swap cache stats: add 0, delete 0, find 0/0
[  215.010000] Free swap  = 0kB
[  215.020000] Total swap = 0kB
[  215.020000] 8192 pages RAM
[  215.030000] 878 pages reserved
[  215.030000] 263886 pages shared
[  215.030000] 5296 pages non-shared
[  215.040000] [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
[  215.040000] [  258]     0   258      222       16       3        0             0 ubusd
[  215.050000] [  259]     0   259      376       19       4        0             0 ash
[  215.060000] [  461]     0   461      328       25       4        0             0 logd
[  215.070000] [  476]     0   476      374       44       4        0             0 netifd
[  215.080000] [  543]     0   543      287       19       3        0             0 dropbear
[  215.090000] [  635]     0   635      375       17       5        0             0 httpd
[  215.090000] [  648] 65534   648      240       22       4        0             0 dnsmasq
[  215.100000] [  862]     0   862     4945      614      10        0             0 perf
[  215.110000] [  884]     0   884      376       18       4        0             0 ntpd
[  215.120000] Out of memory: Kill process 862 (perf) score 56 or sacrifice child
[  215.130000] Killed process 862 (perf) total-vm:19780kB, anon-rss:1932kB, file-rss:524kB