Simon,
For a 16GB box, the page scanner kicks in when freemem drops below
1/64th of physical memory (lotsfree), or about 256MB. It doesn't matter
whether the system is idle or not.
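If you want to confirm the actual thresholds on that box, you can read
them out of the running kernel, for example (a rough sketch; the values
are in pages, typically 8KB on sun4u):

# kstat -p unix:0:system_pages:lotsfree
# kstat -p unix:0:system_pages:freemem
# echo "lotsfree/E" | mdb -k

The scan rate ramps up as freemem falls further below lotsfree.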
The 'w' column numbers mean that threads were swapped out at some point
in the past because of a severe memory shortage and have never been swapped
back in (because they've not been awoken yet). So it's normal for that
column to stay high even after much of the memory has been released.
It looks to me like you're just oversubscribing memory. Looking at the
prstat output, I see easily 13-14GB of physical memory in use, plus you
have kernel memory on top of that. As for virtual memory, at least
about 23GB shows up.
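To put rough numbers on that yourself, something like the following
works (RSS double-counts shared pages, so treat the totals as an upper
bound; substitute the real Sybase pid):

# prstat -s rss -n 15         (top resident-set consumers)
# swap -s                     (virtual memory reserved/allocated vs. available)
# pmap -x <sybase_pid>        (per-process breakdown, including shared memory)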
Did you check for additional virtual space usage in /tmp?
Are you using ZFS (the ARC needs memory for that)?
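Quick ways to check both, for example (the kstat one assumes ZFS is in
use):

# df -h /tmp                      (tmpfs-backed, so it competes for swap/VM)
# kstat -p zfs:0:arcstats:size    (current ARC size in bytes)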
You can also try using the "::memstat" mdb dcmd to break out kernel
memory further.
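For example (run as root; it can take a while on a 16GB box since it
walks every page):

# echo "::memstat" | mdb -k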
Jim
Simon wrote:
Hi Experts,
Here's a performance-related question; please help review what I can do
to get the issue fixed.
IHAC who has an M5000 with Solaris 10 10/08 (KJP: 138888-01) installed
and 16GB RAM configured, running Sybase ASE 12.5 and a JBoss
application. Recently they felt the OS got very slow after running for
some time. The collected vmstat data points to a memory shortage:
# vmstat 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m4 m5 in sy cs us sy id
0 0 153 6953672 254552 228 228 1843 1218 1687 0 685 3 2 0 0 2334 32431 3143 1 1 97
0 0 153 6953672 259888 115 115 928 917 917 0 264 0 35 0 2 2208 62355 3332 7 3 90
0 0 153 6953672 255688 145 145 1168 1625 1625 0 1482 0 6 1 0 2088 40113 3070 2 1 96
0 0 153 6953640 256144 111 111 894 1371 1624 0 1124 0 6 0 0 2080 55278 3106 3 3 94
0 0 153 6953640 256048 241 241 1935 2585 3035 0 1009 0 18 0 0 2392 40643 3164 2 2 96
0 0 153 6953648 257112 236 235 1916 1710 1710 0 1223 0 7 0 0 2672 62582 3628 3 4 93
As shown above, the "w" column is very high all the time, and the "sr"
column also stays very high, which indicates the page scanner is active
and busy paging out, yet the CPU is mostly idle. I checked /etc/system
and found one improper entry:
set shmsys:shminfo_shmmax = 0xffffffffffff
So I think this improper shared memory setting caused too much physical
RAM to be reserved by the application, and I suggested adjusting the
shared memory limit to 8GB (0x200000000). But according to the
customer's feedback, it seems to have made things worse, based on the
new vmstat output:
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m4 m5 in sy cs us sy id
0 6 762 3941344 515848 18 29 4544 0 0 0 0 4 562 0 1 2448 25687 3623 1 2 97
0 6 762 4235016 749616 66 21 4251 2 2 0 0 0 528 0 0 2508 50540 3733 2 5 93
0 6 762 4428080 889864 106 299 4694 0 0 0 0 1 573 0 7 2741 182274 3907 10 4 86
0 5 762 4136400 664888 19 174 4126 0 0 0 0 6 511 0 0 2968 241186 4417 18 9 73
0 7 762 3454280 193776 103 651 2526 3949 4860 0 121549 11 543 0 5 2808 149820 4164 10 12 78
0 9 762 3160424 186016 61 440 1803 7362 15047 0 189720 12 567 0 5 3101 119895 4125 6 13 81
0 6 762 3647456 403056 44 279 4260 331 331 0 243 10 540 0 3 2552 38374 3847 5 3 92
the "w" & "sr" value increased instead,why ?
I've also attached the prstat output; it's a prstat snapshot taken
after the shared memory adjustment. Please help to have a look. What
can I do next to get this issue solved? What are the possible factors
causing the memory shortage again and again? Even though they have
16GB RAM + 16GB swap, is physical RAM really in shortage?
Or is there any useful DTrace script to trace the problem?
Thanks very much!
Best Regards,
Simon
------------------------------------------------------------------------
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org