This has the same signature as CR:
6694625 Performance falls off the cliff with large IO sizes
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6694625
which was raised in the network forum thread "expensive pullupmsg in
kstrgetmsg()"
http://www.opensolaris.org/jive/thread.jspa?messageID=229124
This was fixed in nv_112, so it just missed OpenSolaris 2009.06.
The discussion says that the issue gets worse when the RX socket
buffer size is increased, so you could try reducing it as a workaround.
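For an application-level version of that workaround, something like the
following sketch caps SO_RCVBUF on a socket before use. This is only an
illustration of the idea from the thread, not a tested fix; the 64 KB
value is arbitrary, and the kernel may round or clamp whatever you request.

```python
import socket

# Illustrative cap on the TCP receive buffer, per the workaround idea in
# CR 6694625 (larger receive buffers reportedly make the problem worse).
# 64 KB is an arbitrary example value, not a recommendation.
RCVBUF_BYTES = 64 * 1024

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCVBUF_BYTES)
# The kernel may adjust the requested size; read back the effective value.
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(effective)
s.close()
```

Note that setsockopt must be called before the buffer is sized for the
connection (i.e. before connect/listen) for it to take effect reliably.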
- Steve
On 07/10/09 08:13, zhihui Chen wrote:
We are running a web workload on a system with 8 cores, each core with
two hyperthreads. The workload is network-intensive. During the test we
find that the system is very busy in the kernel and has a very high
cross-call rate. DTrace shows that most xcalls are caused by memory frees
in the kernel. Any suggestions to lower sys utilization and reduce xcalls?
"mpstat -a 2" output:
SET  minf mjf    xcal    intr  ithr    csw  icsw migr   smtx srw  syscl usr sys wt idl sze
  0   805  14  322274  348634 17344  50496  7537 3396  14842 185  59338   2  18  0  80  16
  0 11027   1 2748955 2864793 89324 148049 30088 5111 123434  22 281361   7  92  0   0  16
  0 10304   0 2772163 2890354 90325 155787 31692 5456 108526  12 262906   6  94  0   0  16
  0 18048   0 2776165 2891182 88630 152517 29913 5077 100318  19 247422   6  94  0   0  16
  0 11260   0 2782578 2902024 91238 160978 31941 5604 125821  24 279300   6  94  0   0  16
  0  7861   0 2733062 2864302 96346 193671 38932 6320 168122  12 313714   7  93  0   0  16
  0  7633   0 2783807 2902558 90803 166278 32752 6122 126578  10 273982   6  94  0   0  16
"nicstat 2" output:
    Time     Int    rKB/s    wKB/s   rPk/s   wPk/s    rAvs    wAvs  %Util   Sat
05:01:55     lo0     0.00     0.00  2179.7  2179.7    0.00    0.00   0.00  0.00
05:01:55    igb0     0.06     0.33    1.00    1.00   64.00   338.0   0.00  0.00
05:01:55    igb1   3825.5    686.5  4424.8  4554.2   885.3   154.4   3.70  0.00
05:01:55  ixgbe2  12399.1   446438  145339  340749   87.36  1341.6   37.6  0.00
05:01:55  ixgbe4  10792.8   434197  128687  332963   85.88  1335.3   36.5  0.00
05:01:57     lo0     0.00     0.00  2258.3  2258.3    0.00    0.00   0.00  0.00
05:01:57    igb0     0.06     0.29    1.00    1.00   64.00   298.0   0.00  0.00
05:01:57    igb1   3830.8    690.9  4497.1  4588.1   872.3   154.2   3.70  0.00
05:01:57  ixgbe2  12596.9   438299  148012  334978   87.15  1339.8   36.9  0.00
05:01:57  ixgbe4  11330.4   459088  136182  349905   85.20  1343.5   38.5 46.00
05:01:59     lo0     0.00     0.00  2168.9  2168.9    0.00    0.00   0.00  0.00
05:01:59    igb0     0.06     0.29    1.00    1.00   64.00   298.0   0.00  0.00
05:01:59    igb1   3805.1    690.9  4472.3  4590.2   871.2   154.1   3.68  0.00
05:01:59  ixgbe2  12303.5   422444  144187  324561   87.38  1332.8   35.6  0.00
05:01:59  ixgbe4  10944.4   454153  129900  349436   86.27  1330.9   38.1  4.50
"vmstat 2" output
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr s2 s3 s4 s5 in sy cs us
sy id
14 0 0 36683132 29407588 1101 5602 559 1 1 0 3487 35 -0 1171 -0 919649
98705 90237 2 30 67
79 0 0 31365632 23477712 1 9242 0 0 0 0 0 0 0 2169 0 3027764 159718
68502 3 97 0
80 0 0 31293276 23373552 0 5452 0 0 0 0 0 0 0 2291 0 3020344 172791
73594 3 97 0
55 0 0 31067468 23217816 0 5946 0 0 0 0 0 0 0 2164 0 3029687 201127
87720 3 97 0
86 0 0 30961960 23115928 0 5676 0 0 0 0 0 0 0 2253 0 3025300 154396
71515 3 97 0
56 0 0 30798796 22959692 0 6597 0 0 0 0 0 0 0 2340 0 3026345 161389
56838 3 96 0
81 0 0 30796128 22963332 0 6638 0 0 0 0 0 0 0 2396 0 3025128 172106
68770 3 97 0
dtrace -n 'xc_common:xcalls{@[stack()]=count();}' -c "sleep 10" output:
.....
unix`xc_do_call+0x135
unix`xc_call+0x4b
unix`hat_tlb_inval+0x2af
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xfd
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_free+0x44
genunix`dblk_lastfree_oversize+0x69
genunix`dblk_decref+0x64
genunix`freemsg+0x84
ixgbe`ixgbe_free_tcb+0x43
ixgbe`ixgbe_tx_recycle_head_wb+0x1d4
ixgbe`ixgbe_intr_rx_tx+0x101
unix`av_dispatch_autovect+0x7c
unix`dispatch_hardint+0x33
29517
unix`xc_do_call+0x135
unix`xc_call+0x4b
unix`hat_tlb_inval+0x2af
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xfd
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_free+0x44
genunix`dblk_lastfree_oversize+0x69
genunix`dblk_decref+0x64
genunix`freeb+0x80
ip`tcp_rput_data+0x24ac
ip`squeue_drain+0x179
ip`squeue_enter+0x3f4
ip`tcp_sendmsg+0xfb
sockfs`so_sendmsg+0x1c7
45907
unix`xc_do_call+0x135
unix`xc_call+0x4b
unix`hat_tlb_inval+0x2af
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xfd
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_zio_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_slab_destroy+0x87
genunix`kmem_slab_free+0x2bb
genunix`kmem_magazine_destroy+0x39a
genunix`kmem_depot_ws_reap+0x65
genunix`taskq_thread+0x193
unix`thread_start+0x8
541781
unix`xc_do_call+0x135
unix`xc_call+0x4b
unix`hat_tlb_inval+0x2af
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xfd
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_free+0x44
genunix`dblk_lastfree_oversize+0x69
genunix`dblk_decref+0x64
genunix`freeb+0x80
ip`tcp_rput_data+0x24ac
ip`squeue_drain+0x179
ip`squeue_enter+0x3f4
ip`ip_input+0xa8e
mac`mac_rx_soft_ring_drain+0xdf
1118445
unix`xc_do_call+0x135
unix`xc_call+0x4b
unix`hat_tlb_inval+0x2af
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xfd
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_free+0x44
genunix`dblk_lastfree_oversize+0x69
genunix`dblk_decref+0x64
genunix`freeb+0x80
ip`tcp_rput_data+0x24ac
ip`squeue_drain+0x179
ip`squeue_polling_thread+0x1dd
unix`thread_start+0x8
1919538
unix`xc_do_call+0x135
unix`xc_call+0x4b
unix`hat_tlb_inval+0x2af
unix`x86pte_inval+0xaa
unix`hat_pte_unmap+0xfd
unix`hat_unload_callback+0x193
unix`hat_unload+0x41
unix`segkmem_free_vn+0x6f
unix`segkmem_free+0x27
genunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_free+0x44
genunix`dblk_lastfree_oversize+0x69
genunix`dblk_decref+0x64
genunix`freeb+0x80
ip`tcp_rput_data+0x24ac
ip`squeue_drain+0x179
ip`squeue_enter+0x3f4
ip`ip_input+0xc17
mac`mac_rx_soft_ring_drain+0xdf
3678526
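The per-stack counts above can be totaled to see what fraction of the
cross calls go through dblk_lastfree_oversize. A rough post-processing
sketch (the parsing is naive and assumes the layout shown above: a run of
stack frames followed by a bare count line):

```python
import re

def tally(dtrace_output):
    """Sum the counts from '@[stack()] = count()' style output and the
    subset whose stack mentions dblk_lastfree_oversize."""
    total = 0
    oversize = 0
    frames = []
    for line in dtrace_output.splitlines():
        line = line.strip()
        if not line:
            continue
        if re.fullmatch(r"\d+", line):  # a bare count line ends a stack
            n = int(line)
            total += n
            if any("dblk_lastfree_oversize" in f for f in frames):
                oversize += n
            frames = []
        else:
            frames.append(line)
    return total, oversize

# Abridged sample in the same shape as the listing above.
sample = """
unix`xc_do_call+0x135
genunix`dblk_lastfree_oversize+0x69
ip`tcp_rput_data+0x24ac
45907
unix`xc_do_call+0x135
genunix`kmem_slab_destroy+0x87
541781
"""
total, oversize = tally(sample)
print(total, oversize)  # -> 587688 45907
```

On the full listing this would show that nearly all of the sampled xcalls
come from freeing oversize (non-cached) dblks, which matches the CR above.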
Thanks
Zhihui
------------------------------------------------------------------------
_______________________________________________
perf-discuss mailing list
perf-discuss@opensolaris.org