Thanks, Steve and Andrew. I have tried the following two methods:
(1) Used mdb to set mblk_pull_len to 0. The xcall rate is still very high,
the same as before making that change.
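For reference, a rough sketch of the mdb write, assuming mblk_pull_len is a
64-bit variable on a 64-bit kernel (hence the /Z format; a 32-bit variable
would take /W instead):

   # echo 'mblk_pull_len/Z 0' | mdb -kw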
(2) Set tcp_max_buf to 65536 to limit the size of the send and receive
buffers (the command is sketched after the output below). The xcall rate and
kernel utilization drop considerably, and "mpstat -a 2" shows the following
output:

SET minf mjf  xcal   intr   ithr    csw  icsw  migr  smtx srw  syscl usr sys wt idl sze
  0   76   0 17386  19029    876   1329    99   130   298   2   1719   0   1  0  99  16
  0 8152   0   725 161524 119596 650416 43958 40670 65765   3 356206   7  52  0  41  16
  0 6460   0   804 163550 121536 663085 44498 42610 66349  15 352296   6  53  0  40  16
  0 7560   0   711 168416 123996 667632 46738 46100 67780  13 374021   7  53  0  40  16


2009/7/10 Andrew Gallatin <galla...@cs.duke.edu>

> Steve Sistare wrote:
>
>> This has the same signature as CR:
>>
>>  6694625 Performance falls off the cliff with large IO sizes
>>  http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6694625
>>
>> which was raised in the network forum thread "expensive pullupmsg in
>> kstrgetmsg()"
>>
>>  http://www.opensolaris.org/jive/thread.jspa?messageID=229124&#229124
>>
>> This was fixed in nv_112, so it just missed OpenSolaris 2009.06.
>>
>> The discussion says that the issue gets worse when the RX socket
>> buffer size is increased, so you could try reducing it as a workaround.
>>
>
> I'm the one who filed that bug.
>
> You can tell if it's this particular bug by using mdb to
> set mblk_pull_len to 0 and seeing if that reduces the
> xcalls due to dblk_lastfree_oversize.
>
> Cheers,
>
> Drew
>
