On 2011-01-28 12:30, Damien Fleuriot wrote:

On 1/28/11 11:37 AM, Bartosz Stec wrote:
Guys,

could someone explain this to me?

       # sysctl hw.realmem
       hw.realmem: 2139029504

top line shows:

       Mem: 32M Active, 35M Inact, 899M Wired, 8392K Cache, 199M Buf, 58M Free

32+35+899+8+199+58 = 1231MB

Shouldn't that sum to all available RAM? Or maybe I'm reading it wrong?
This machine indeed has 2GB of RAM on board, and the BIOS shows it.
i386  FreeBSD 8.2-PRERELEASE #16: Mon Jan 17 22:28:53 CET 2011
Cheers.
First, don't include 'buf', as it isn't a separate set of RAM; it is only
a range of the virtual address space in the kernel.  It used to be relevant
when the buffer cache was separate from the VM page cache, but now it is
mostly irrelevant (arguably it should just be dropped from top output).
Thanks for the explanation. So 1231MB - 199MB Buf, and we get about 1GB
of memory instead of 2GB.

However, look at what hw.physmem says (and the realmem and availmem lines
in dmesg).  realmem is actually not that useful as it is not a count of
the amount of memory, but the address of the highest memory page available.
There can be less memory available than that due to "holes" in the address
space for PCI memory BARs, etc.
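
For reference, one way to compare these figures side by side (a rough sketch;
the exact dmesg wording can differ between releases):

    # boot-time figures reported by the kernel
    grep -E '^(real|avail) memory' /var/run/dmesg.boot
    # the corresponding sysctls (values in bytes)
    sysctl hw.physmem hw.realmem hw.usermem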

OK, here you go:
# sysctl hw | grep mem

      hw.physmem: 2125893632
      hw.usermem: 1212100608
      hw.realmem: 2139029504
      hw.pci.host_mem_start: 2147483648
Humm, you should still have 2GB of RAM then.  All the memory you set aside
for ARC should be counted in the 'wired' count, so I'm not sure why you see
1GB of RAM rather than 2GB.
For what it's worth (these seem to be the same values top shows), the sysctls
I use to make Cacti graphs of memory usage are the following (counts are in pages):

vm.stats.vm.v_page_size

vm.stats.vm.v_wire_count
vm.stats.vm.v_active_count
vm.stats.vm.v_inactive_count
vm.stats.vm.v_cache_count
vm.stats.vm.v_free_count

Using the output of those sysctls I always get a Cacti graph which at least
very much seems to account for all memory, and has a flat surface in a
stacked graph.
These sysctls are exactly what top uses.  There is also a 'v_page_count'
which is a total count of pages.
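
A minimal /bin/sh sketch (using only the sysctls listed above) to check how
much of v_page_count those five counters actually cover:

    pgsz=$(sysctl -n vm.stats.vm.v_page_size)
    total=0
    for c in wire active inactive cache free; do
        total=$((total + $(sysctl -n vm.stats.vm.v_${c}_count)))
    done
    echo "accounted:    $((total * pgsz / 1048576)) MB"
    echo "v_page_count: $(($(sysctl -n vm.stats.vm.v_page_count) * pgsz / 1048576)) MB"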

So here's additional sysctl output from now:

     fbsd# sysctl hw | grep mem
     hw.physmem: 2125893632
     hw.usermem: 1392594944
     hw.realmem: 2139029504
     hw.pci.host_mem_start: 2147483648

     fbsd# sysctl vm.stats.vm
     vm.stats.vm.v_kthreadpages: 0
     vm.stats.vm.v_rforkpages: 0
     vm.stats.vm.v_vforkpages: 1422927
     vm.stats.vm.v_forkpages: 4606557
     vm.stats.vm.v_kthreads: 40
     vm.stats.vm.v_rforks: 0
     vm.stats.vm.v_vforks: 9917
     vm.stats.vm.v_forks: 30429
     vm.stats.vm.v_interrupt_free_min: 2
     vm.stats.vm.v_pageout_free_min: 34
     vm.stats.vm.v_cache_max: 27506
     vm.stats.vm.v_cache_min: 13753
     vm.stats.vm.v_cache_count: 20312
     vm.stats.vm.v_inactive_count: 18591
     vm.stats.vm.v_inactive_target: 20629
     vm.stats.vm.v_active_count: 1096
     vm.stats.vm.v_wire_count: 179027
     vm.stats.vm.v_free_count: 6193
     vm.stats.vm.v_free_min: 3260
     vm.stats.vm.v_free_target: 13753
     vm.stats.vm.v_free_reserved: 713
     vm.stats.vm.v_page_count: 509752
     vm.stats.vm.v_page_size: 4096
     vm.stats.vm.v_tfree: 196418851
     vm.stats.vm.v_pfree: 2837177
     vm.stats.vm.v_dfree: 0
     vm.stats.vm.v_tcached: 1305893
     vm.stats.vm.v_pdpages: 3527455
     vm.stats.vm.v_pdwakeups: 187
     vm.stats.vm.v_reactivated: 83786
     vm.stats.vm.v_intrans: 3053
     vm.stats.vm.v_vnodepgsout: 134384
     vm.stats.vm.v_vnodepgsin: 29213
     vm.stats.vm.v_vnodeout: 96249
     vm.stats.vm.v_vnodein: 29213
     vm.stats.vm.v_swappgsout: 19730
     vm.stats.vm.v_swappgsin: 8573
     vm.stats.vm.v_swapout: 5287
     vm.stats.vm.v_swapin: 2975
     vm.stats.vm.v_ozfod: 83338
     vm.stats.vm.v_zfod: 2462557
     vm.stats.vm.v_cow_optim: 330
     vm.stats.vm.v_cow_faults: 1239253
     vm.stats.vm.v_vm_faults: 5898471

     fbsd# sysctl vm.vmtotal
     vm.vmtotal:
     System wide totals computed every five seconds: (values in kilobytes)
     ===============================================
     Processes:              (RUNQ: 1 Disk Wait: 0 Page Wait: 0 Sleep: 60)
     Virtual Memory:         (Total: 4971660K Active: 699312K)
     Real Memory:            (Total: 540776K Active: 29756K)
     Shared Virtual Memory:  (Total: 41148K Active: 19468K)
     Shared Real Memory:     (Total: 4964K Active: 3048K)
     Free Memory Pages:      105308K


     /usr/bin/top line: Mem: 4664K Active, 73M Inact, 700M Wired, 79M Cache, 199M Buf, 23M Free
     Sum (without Buf): 879.5 MB
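
     For comparison, the page counters from the sysctl output above (4096-byte
     pages) sum up the same way:

         wired 179027 + active 1096 + inactive 18591 + cache 20312 + free 6193
         = 225219 pages ≈ 880 MB

     while v_page_count is 509752 pages ≈ 1991 MB, so roughly 1.1 GB of pages
     is not covered by those five counters.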

     So what are we looking at? Wrong sysctl/top output, or does FreeBSD
     actually not use all the available RAM for some reason? Could it be a
     hardware problem? Should I provide some additional data?
Does the behaviour become more expected if you remove ZFS from the
picture?  Please try this (yes really).

About an hour ago I had to hard-reset this machine because it stopped
responding (but still answered pings) after a massive slowdown seen by
Samba users.
Now top shows the following:
Mem: 78M Active, 83M Inact, 639M Wired, 120K Cache, 199M Buf, 1139M Free.

What I'm afraid of is that this PC slowly eats its own memory and finally
starves itself to death, because this has happened for the second time in
2 weeks, and it seems that the world+kernel rebuild of Mon Jan 17 22:28:53
CET 2011 could be the cause. For some strange reason I believe that Jeremy
Chadwick could be right in pointing at ZFS. The way this machine stops
responding without any info in the logs makes me believe that it has simply
lost the ability to do I/O to the HDD (the system is ZFS-only).

Day 2 after reboot:
Mem: 100M Active, 415M Inact, 969M Wired, 83M Cache, 199M Buf, 21M Free
Sum: 1588MB
Roughly a quarter of the total RAM (2027MB - 1588MB ≈ 439MB) has already disappeared.
Does anyone know what could possibly be happening here, or should I hire a
voodoo shaman to expel the memory-eating ghost from the machine ;)?


Can you provide the following sysctls again (ignore my values, obviously),
now that some of your memory has magicked itself away?

hw.physmem: 4243976192
hw.usermem: 3417485312
hw.realmem: 5100273664
vfs.zfs.arc_min: 134217728
vfs.zfs.arc_max: 2147483648


And check out the ZFS ARC stats script here:
http://bitbucket.org/koie/arc_summary/changeset/dbe14d2cf52b/

Run it and see what results you get concerning your ZFS used memory.
What's of interest is the current size of your ZFS ARC cache.
It might account for the memory you're missing, with a bit of luck.
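
If the script gives you trouble, the raw counters it reads should also be
available directly via sysctl (a rough equivalent, assuming the usual kstat
names are present on your 8.2 kernel; values are in bytes):

    sysctl kstat.zfs.misc.arcstats.size \
           kstat.zfs.misc.arcstats.c \
           kstat.zfs.misc.arcstats.c_min \
           kstat.zfs.misc.arcstats.c_max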
Sure:

   hw.physmem: 2125893632
   hw.usermem: 898928640
   hw.realmem: 2139029504
   vfs.zfs.arc_min: 167772160
   vfs.zfs.arc_max: 1342177280


Current top stats:

   53M Active, 145M Inact, 1175M Wired, 68M Cache, 199M Buf, 7716K Free

Sum: 1448MB (without Buf)

About 150MB less than 2 hours ago ;)

   # ./arc_summary.sh
   System Memory:
             Physical RAM:  2027 MB
             Free Memory :  7 MB

   ARC Size:
             Current Size:             796 MB (arcsize)
             Target Size (Adaptive):   797 MB (c)
             Min Size (Hard Limit):    160 MB (zfs_arc_min)
             Max Size (Hard Limit):    1280 MB (zfs_arc_max)

   ARC Size Breakdown:
             Most Recently Used Cache Size:          52%    415 MB (p)
             Most Frequently Used Cache Size:        47%    382 MB (c-p)

   ARC Efficency:
             Cache Access Total:             5931999
             Cache Hit Ratio:      89%       5323807        [Defined
   State for buffer]
             Cache Miss Ratio:     10%       608192         [Undefined
   State for Buffer]
             REAL Hit Ratio:       89%       5317666        [MRU/MFU
   Hits Only]

             Data Demand   Efficiency:    95%
             Data Prefetch Efficiency:     1%

            CACHE HITS BY CACHE LIST:
              Anon:                       --%        Counter Rolled.
              Most Recently Used:         39%        2121911
   (mru)          [ Return Customer ]
              Most Frequently Used:       60%        3195755
   (mfu)          [ Frequent Customer ]
              Most Recently Used Ghost:    1%        56946
   (mru_ghost)      [ Return Customer Evicted, Now Back ]
              Most Frequently Used Ghost:  3%        175154
   (mfu_ghost)     [ Frequent Customer Evicted, Now Back ]
            CACHE HITS BY DATA TYPE:
              Demand Data:                21%        1164556
              Prefetch Data:               0%        188
              Demand Metadata:            77%        4151758
              Prefetch Metadata:           0%        7305
            CACHE MISSES BY DATA TYPE:
              Demand Data:                 9%        59296
              Prefetch Data:               2%        15143
              Demand Metadata:            85%        518463
              Prefetch Metadata:           2%        15290
   ---------------------------------------------

Tunables in loader.conf:

   vm.kmem_size="1536M"
   vm.kmem_size_max="1536M"
   vfs.zfs.arc_max="1280M"
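
Just to double-check that those loader.conf settings actually took effect,
the running values can be read back (a quick sketch; values are reported in bytes):

    sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.arc_max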


It seems that about 579MB is now "missing" (2027MB physical minus the 1448MB sum), while the 796MB ARC is already counted in Wired, so the ARC is rather not the culprit.

--
Bartosz Stec

