On 7/14/2015 02:49, Shane Ambler wrote:
> On 14/07/2015 03:18, Karl Denninger wrote:
>
>> The ARC is supposed to auto-size and use all available free memory. The
>> problem is that the VM system and ARC system both make assumptions that
>> under certain load patterns fight with one another, and when this
>> happens and ARC wins the system gets in trouble FAST. The pattern is
>> that the system will start to page RSS out rather than evict ARC, ARC
>> will fill the freed space, it pages more RSS out..... you see where this
>> winds up heading, yes?
>
> Something I noticed was that vfs.zfs.arc_free_target is smaller
> than vm.v_free_target
>
> on my desktop with 8GB I get
> vfs.zfs.arc_free_target: 14091
> vm.v_free_target: 43195
>
> Doesn't that cause arc allocation to trigger swapping, leaving space
> for arc allocation....

Yes and no.
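Shane's two sysctl values are counts of pages, so it helps to put them in byte terms to see how wide the window is in which the pager, rather than the ARC, does the reclaiming. A back-of-the-envelope sketch (assumes 4 KiB pages, the common x86 case; the `pages_to_mib` helper is just for illustration):

```python
PAGE_SIZE = 4096  # bytes; assumes 4 KiB pages (typical on x86/amd64)

# Values quoted above for an 8 GB desktop, in pages
vm_v_free_target = 43195      # pageout daemon's free-memory goal
zfs_arc_free_target = 14091   # free level below which the ARC reclaims

def pages_to_mib(pages: int) -> float:
    """Convert a page count to MiB."""
    return pages * PAGE_SIZE / 2**20

print(f"vm.v_free_target:        {pages_to_mib(vm_v_free_target):6.1f} MiB")
print(f"vfs.zfs.arc_free_target: {pages_to_mib(zfs_arc_free_target):6.1f} MiB")
# The pager starts reclaiming at ~169 MiB free, the ARC not until ~55 MiB,
# so in that ~114 MiB band the VM pages process memory out while the ARC
# is still free to grow.
print(f"gap: {pages_to_mib(vm_v_free_target - zfs_arc_free_target):.1f} MiB")
```

That gap is the mechanism behind the paging loop described in the quote: within it, freeing memory by paging RSS simply creates room the ARC can consume.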
On my system with the patch:

vm.v_free_target: 130312
vm.stats.vm.v_free_target: 130312
vfs.zfs.arc_free_target: 86375

and...

[karl@NewFS ~]$ pstat -s
Device               1K-blocks     Used    Avail Capacity
/dev/mirror/sw.eli    67108860        0 67108860     0%

No swapping :-)

It's not busy right now, but this is what the system looks like at the moment:

 1 users    Load  0.22  0.28  0.32                       Jul 14 06:57

Mem:KB    REAL            VIRTUAL                     VN PAGER   SWAP PAGER
        Tot   Share      Tot    Share    Free         in   out     in   out
Act 2009856   39980  7884504    92820  937732  count
All  17499k   52212  8727248   381980          pages
Proc:                                                        Interrupts
  r   p   d   s   w   Csw  Trp  Sys  Int  Sof  Flt     ioflt 2638 total
  2         251   1  9264 3332 3982 1134  181 2174 1134 cow    11 uart0 4
                                                    830 zfod      pcm0 17
 0.4%Sys   0.1%Intr  0.8%User  0.0%Nice 98.8%Idle       ozfod     ehci0 uhci
|    |    |    |    |    |    |    |    |    |          %ozfod 21 uhci1
                                                        daefr 508 uhci3 ehci
dtbuf                                              1612 prcfr 991 cpu0:timer
Namei     Name-cache   Dir-cache   485859 desvn    3105 totfr 139 mps0 256
   Calls    hits   %    hits   %   161014 numvn         react  43 em0:rx 0
    7109    7026  99               121460 frevn         pdwak  77 em0:tx 0
                                                    459 pdpgs     em0:link
Disks  da1   da2   da3   da4   da5  da6  da7           intrn  192 em1:rx 0
KB/t  0.00 11.41 10.84 11.68 11.60 0.00 0.00  21089128 wire   165 em1:tx 0
tps      0    21    24    22    21    0    0   1153712 act        em1:link
MB/s  0.00  0.23  0.25  0.25  0.24 0.00 0.00   1281556 inact   32 cpu1:timer
%busy    0     4     5     5     4    0    0     20372 cache   25 cpu9:timer
                                                916480 free    39 cpu4:timer
                                                       buf     32 cpu13:time
                                                               22 cpu2:timer
                                                               33 cpu11:time
                                                               28 cpu3:timer
                                                               30 cpu14:time
                                                               35 cpu5:timer
                                                               37 cpu12:time
                                                               71 cpu7:timer
                                                               26 cpu10:time
                                                               26 cpu6:timer
                                                               28 cpu8:timer
                                                               48 cpu15:time

Most of that wired memory is in ARC...
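That claim is easy to sanity-check: the systat screen above shows about 21 GB wired, and the ZFS subsystem report below puts the ARC at 16.88 GiB. A rough sketch of the arithmetic (unit conversion only; the figures are taken from the output in this message):

```python
wired_kib = 21089128  # "wire" line from systat -vmstat, in KiB
arc_gib = 16.88       # "ARC Size" from the ZFS subsystem report

wired_gib = wired_kib / 2**20  # KiB -> GiB
print(f"wired: {wired_gib:.1f} GiB")                     # prints: wired: 20.1 GiB
print(f"ARC share of wired: {arc_gib / wired_gib:.0%}")  # prints: ARC share of wired: 84%
# The remaining ~3 GiB of wired memory is kernel text, buffers, and
# other wired allocations.
```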
------------------------------------------------------------------------
ZFS Subsystem Report                            Tue Jul 14 07:00:29 2015
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                53.54m
        Recycle Misses:                         15.12m
        Mutex Misses:                           6.63k
        Evict Skips:                            275.51m

ARC Size:                               75.59%  16.88   GiB
        Target Size: (Adaptive)         75.73%  16.91   GiB
        Min Size (Hard Limit):          12.50%  2.79    GiB
        Max Size (High Water):          8:1     22.33   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       58.52%  9.89    GiB
        Frequently Used Cache Size:     41.48%  7.01    GiB

ARC Hash Breakdown:
        Elements Max:                           1.72m
        Elements Current:               58.40%  1.00m
        Collisions:                             50.07m
        Chain Max:                              8
        Chains:                                 119.31k

------------------------------------------------------------------------

ARC Efficiency:                                 2.01b
        Cache Hit Ratio:                81.50%  1.64b
        Cache Miss Ratio:               18.50%  371.70m
        Actual Hit Ratio:               79.46%  1.60b

        Data Demand Efficiency:         83.00%  1.60b
        Data Prefetch Efficiency:       15.11%  21.33m

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             1.79%   29.34m
          Most Recently Used:           6.36%   104.08m
          Most Frequently Used:         91.14%  1.49b
          Most Recently Used Ghost:     0.09%   1.40m
          Most Frequently Used Ghost:   0.62%   10.17m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  81.12%  1.33b
          Prefetch Data:                0.20%   3.22m
          Demand Metadata:              16.06%  262.92m
          Prefetch Metadata:            2.62%   42.89m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  73.17%  271.97m
          Prefetch Data:                4.87%   18.11m
          Demand Metadata:              17.75%  65.97m
          Prefetch Metadata:            4.21%   15.65m

------------------------------------------------------------------------

-- 
Karl Denninger
k...@denninger.net
The Market Ticker
[S/MIME encrypted email preferred]