Dennis Clarke <dclarke_at_blastwave.org> wrote on
Date: Fri, 29 Nov 2024 03:15:54 UTC :

> On 11/28/24 21:25, Dennis Clarke wrote:
> > 
> > On a machine here I see top reports this with " top -CSITa -s 10"
> > 
> > 
> > last pid:  6680;  load averages:    0.29,    0.12,    0  up 0+11:40:46 02:23:01
> > 51 processes:  2 running, 47 sleeping, 2 waiting
> > CPU:  0.6% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.2% idle
> > Mem: 587M Active, 480G Inact, 1332K Laundry, 7410M Wired, 456M Buf, 11G Free

Notice the "480G Inact" (inactive): that tells me that prior
activity has loaded some mix of clean and dirty pages into RAM.
There has been no memory pressure in this context that would
force freeing any clean pages or paging any dirty pages out to
swap.

It matters not that the machine became idle after loading all
those pages.
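
If you want to see where top gets those numbers, the per-queue
page counts are stock FreeBSD sysctls. A minimal sh sketch (the
vm.stats.vm.* names are standard; the arithmetic just converts
page counts to MiB via the system page size):

  pgsz=$(sysctl -n vm.stats.vm.v_page_size)
  for q in v_active_count v_inactive_count v_laundry_count \
      v_wire_count v_free_count
  do
      # Each counter is a page count; convert to MiB.
      pages=$(sysctl -n vm.stats.vm.$q)
      printf '%s: %d MiB\n' "$q" $((pages * pgsz / 1048576))
  done

(Laundry is the queue of dirty pages waiting to be written out,
which is why it stays tiny here: nothing is forcing writeback.)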

> > ARC: 3624M Total, 85M MFU, 3359M MRU, 32M Anon, 118M Header, 27M Other
> >      2919M Compressed, 32G Uncompressed, 11.13:1 Ratio
> > Swap: 32G Total, 32G Free
> > 
> >    THR USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME     CPU COMMAND
> > 100003 root         40 187 ki31     0B   640K CPU0     0 464.8H 3967.69% [idle]
> > 101142 root          1  48    0  1530M   574M piperd  34   0:27  24.69% /usr/lo
> > 100000 root        731 -16    -     0B    11M parked  18 112:10   3.14% [kernel
> > 102993 root          1  21    0    30M    15M select  26   0:03   2.77% /usr/bi
> > 
> > Seems only 11G of memory is free ?
> > 
> > That seems impossible.
> > 
> > titan# sysctl hw.physmem
> > hw.physmem: 549599244288
> > titan#
> > 
> > titan#
> > titan# sysctl -a | grep 'free' | grep 'mem'
> > vm.uma.vmem.stats.frees: 0
> > vm.uma.vmem.keg.domain.1.free_slabs: 0
> > vm.uma.vmem.keg.domain.1.free_items: 0
> > vm.uma.vmem.keg.domain.0.free_slabs: 0
> > vm.uma.vmem.keg.domain.0.free_items: 0
> > vm.uma.vmem_btag.stats.frees: 523236
> > vm.uma.vmem_btag.keg.domain.1.free_slabs: 0
> > vm.uma.vmem_btag.keg.domain.1.free_items: 34398
> > vm.uma.vmem_btag.keg.domain.0.free_slabs: 0
> > vm.uma.vmem_btag.keg.domain.0.free_items: 34378
> > vm.kmem_map_free: 528152154112
> > kstat.zfs.misc.arcstats.memory_free_bytes: 11707904000
> > titan#
> > 
> > I have no idea what "top" is reporting but 11G free on a machine doing 
> > nothing seems ... unlikely.

Seems perfectly normal to me, presuming prior activity caused
the 480G of Inact. Clean Inact pages are cheap to reclaim on
demand, so top's "Free" figure greatly understates what is
actually available: Free only counts pages already on the free
list.
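
The sysctl output quoted above appears to corroborate top here,
assuming kstat.zfs.misc.arcstats.memory_free_bytes tracks the
same free-page count that top's "Free" reports:

  # 11707904000 bytes / 2^30 bytes-per-GiB = about 10.9 GiB,
  # which matches top's "11G Free".
  echo 'scale=2; 11707904000 / 1073741824' | bc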

> > 
> > 
> 
> even worse ... under load it seems to make no sense at all :
> 
> 
> last pid: 98884; load averages: 32.01, 30.51, 25 up 0+12:33:20 03:15:35
> 172 processes: 34 running, 136 sleeping, 2 waiting
> CPU: 78.4% user, 0.0% nice, 1.8% system, 0.0% interrupt, 19.8% idle
> Mem: 7531M Active, 450G Inact, 9588K Laundry, 27G Wired, 456M Buf, 14G Free

Notice the 450G of Inact: smaller by roughly 30G. (Still no
swap use.) Lining the two Mem lines up makes the comparison
easier . . .

Mem:  587M Active, 480G Inact, 1332K Laundry, 7410M Wired, 456M Buf, 11G Free
Mem: 7531M Active, 450G Inact, 9588K Laundry,  27G Wired,  456M Buf, 14G Free

Active increased by around  6.8G.
Wired  increased by around 19.8G.
Free   increased by around  3.0G.

Together that is about 29.6G, which accounts for nearly all of
the roughly 30G drop in Inact.

Some clean Inact pages may have been freed, which would explain
the increase in Free pages. But there seems not to have been
enough memory pressure to force a larger cleanout of Inact. The
system is biased toward keeping information in RAM that it might
be able to put to use, unless there is sufficient competing
demand for RAM (memory pressure).
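
The thresholds that drive that cleanout are visible as stock
FreeBSD VM tunables (values are in pages):

  # The page daemon only starts reclaiming from Inact when the
  # free-page count approaches these targets.
  sysctl vm.v_free_min vm.v_free_target vm.v_inactive_target

With 11G to 14G Free against hw.physmem of roughly 512G, those
thresholds do not appear to have been crossed.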

As for Wired, ARC is stored in Wired and . . .

ARC: 3624M Total
ARC:  17G Total

ARC increased by around 13.5G, making up much of the 19.8G
increase in Wired.
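
If the ARC's contribution to Wired is in question, the kstats
that top reads can be queried directly (kstat.zfs.misc.arcstats.size
is the standard OpenZFS counter; vfs.zfs.arc_max is the ceiling):

  # Current ARC size in bytes and its configured maximum.
  sysctl kstat.zfs.misc.arcstats.size
  sysctl vfs.zfs.arc_max    # 0 here means auto-sized from RAM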

> ARC: 17G Total, 7543M MFU, 4337M MRU, 37M Anon, 260M Header, 5005M Other
>      7207M Compressed, 24G Uncompressed, 3.39:1 Ratio
> Swap: 32G Total, 32G Free
> 
>    THR USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME     CPU COMMAND
> 100003 root         40 187 ki31     0B   640K RUN      0 486.9H  792.70% [idle]
> 103554 root          1 156   i0   786M   632M CPU37   37   0:44   99.82% /usr/bi
> 101148 root          1 156   i0  1317M   822M CPU2     2   2:20   99.82% /usr/bi

I do not understand what about the above indicates any
problem. Maybe more context about the prior activity that
led to the above needs to be reported?


===
Mark Millard
marklmi at yahoo.com

