Re: [perf-discuss] [shell-discuss] Changing the default buffer sizes for pipes ?

2009-07-10 Thread Bob Friesenhahn
I don't think this is a practical solution. It requires cooperation from all applications, i.e. shells. At least the bash authors are adamant 'not to add more specific API crap for a proprietary and dying Solaris'. This is an interestingly inflammatory statement to make on a Solaris list. Google

Re: [perf-discuss] Changing the default buffer sizes for pipes ?

2009-07-10 Thread rickey c weisner
You can push a bufmod on the FIFO and use messages up to the maximum STREAMS message size. rick On Fri, Jul 10, 2009 at 10:48:29AM -0500, Bob Friesenhahn wrote:
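
For readers who have not met bufmod: Solaris pipes and FIFOs are STREAMS-based, so a module can be pushed onto an endpoint with the I_PUSH ioctl. The sketch below only illustrates the suggestion above and is not code from the thread; in particular, using the SBIOCSCHUNK ioctl to pick a chunk size, and the 64 KB value, are my assumptions about how one might drive bufmod on a pipe rather than on its usual DLPI stream.

/*
 * Sketch: push the bufmod STREAMS module onto the read side of a pipe.
 * Assumes a STREAMS-based pipe (Solaris) and that bufmod accepts its
 * usual sizing ioctl here.  Compile: cc -o pushbuf pushbuf.c
 */
#include <sys/types.h>
#include <sys/bufmod.h>         /* SBIOCSCHUNK */
#include <stropts.h>            /* I_PUSH, ioctl() */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int fds[2];
    uint_t chunk = 65536;       /* desired buffering chunk in bytes (assumption) */

    if (pipe(fds) < 0) {
        perror("pipe");
        return (1);
    }
    /* Push bufmod above the stream head of the read side. */
    if (ioctl(fds[0], I_PUSH, "bufmod") < 0) {
        perror("I_PUSH bufmod");
        return (1);
    }
    /* Ask bufmod to accumulate data into larger messages. */
    if (ioctl(fds[0], SBIOCSCHUNK, &chunk) < 0)
        perror("SBIOCSCHUNK");

    /* fds[1] is the write side; reads from fds[0] now see chunked messages. */
    return (0);
}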

Re: [perf-discuss] Changing the default buffer sizes for pipes ?

2009-07-10 Thread Bob Friesenhahn
On Fri, 10 Jul 2009, Roland Mainz wrote: Some applications may misbehave or lock up if the size of the pipe buffer is changed. Erm... why? You have already noticed that the size has been hard-coded in Solaris applications (via PIPE_BUF) since the dawn of time. Pipes offer properties such as ato
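
The property the truncated snippet is most likely alluding to is write atomicity: POSIX guarantees that a write() of at most PIPE_BUF bytes to a pipe or FIFO is delivered as a unit, never interleaved with data from other writers. A minimal sketch of that guarantee, mine rather than the thread's:

/*
 * Writes of at most PIPE_BUF bytes to a pipe are atomic, so small
 * records from concurrent writers are never interleaved.
 */
#include <limits.h>     /* PIPE_BUF */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    int fds[2];
    char record[128];   /* well under PIPE_BUF */

    if (pipe(fds) < 0) {
        perror("pipe");
        return (1);
    }
    (void) snprintf(record, sizeof (record),
        "pid %ld wrote this record atomically\n", (long)getpid());

    /* strlen(record) <= PIPE_BUF, so this write cannot be split. */
    (void) write(fds[1], record, strlen(record));

    /* Writes larger than PIPE_BUF may be broken up and interleaved. */
    return (0);
}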

Re: [perf-discuss] How to lower sys utilization and reduce xcalls?

2009-07-10 Thread Andrew Gallatin
Steve Sistare wrote: This has the same signature as CR: 6694625 Performance falls off the cliff with large IO sizes http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6694625 which was raised in the network forum thread "expensive pullupmsg in kstrgetmsg()" http://www.opensolaris

Re: [perf-discuss] How to lower sys utilization and reduce xcalls?

2009-07-10 Thread Steve Sistare
This has the same signature as CR: 6694625 Performance falls off the cliff with large IO sizes http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6694625 which was raised in the network forum thread "expensive pullupmsg in kstrgetmsg()" http://www.opensolaris.org/jive/thread.jspa?

[perf-discuss] How to lower sys utilization and reduce xcalls?

2009-07-10 Thread zhihui Chen
We are running a web workload on a system with 8 cores, each core having two hyperthreads. The workload is network intensive. During testing we find that the system is very busy in the kernel and has very high cross-call rates. DTrace shows that most xcalls are caused by memory frees in the kernel. Any suggestions
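
A cheap first step alongside DTrace is to watch the per-CPU cross-call counters that mpstat reports. The sketch below reads them through libkstat; the assumption that each cpu:<instance>:sys kstat carries an "xcalls" counter is mine and should be checked on the release in question.

/*
 * Sketch: print the per-CPU cross-call counters via libkstat.
 * Compile: cc -o xcalls xcalls.c -lkstat
 * Assumes each cpu:<instance>:sys kstat exposes an "xcalls" field.
 */
#include <kstat.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    kstat_ctl_t *kc;
    kstat_t *ksp;
    kstat_named_t *kn;

    if ((kc = kstat_open()) == NULL) {
        perror("kstat_open");
        return (1);
    }
    for (ksp = kc->kc_chain; ksp != NULL; ksp = ksp->ks_next) {
        /* Walk only the per-CPU "sys" named kstats. */
        if (strcmp(ksp->ks_module, "cpu") != 0 ||
            strcmp(ksp->ks_name, "sys") != 0)
            continue;
        if (kstat_read(kc, ksp, NULL) == -1)
            continue;
        kn = kstat_data_lookup(ksp, "xcalls");
        if (kn != NULL)
            (void) printf("cpu %d xcalls %llu\n", ksp->ks_instance,
                (unsigned long long)kn->value.ui64);
    }
    (void) kstat_close(kc);
    return (0);
}

Sampling the counters twice and taking the difference gives a per-CPU rate comparable to the xcal column in mpstat, which helps confirm where the DTrace investigation should focus.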