Some supplements:
On Wed, Feb 24, 2010 at 2:43 PM, Li, Aubrey wrote:
> Jonathan Chew wrote:
>>
>>
>>Can you please explain what you mean by CPU, memory, and I/O sensitive?
>
> - A CPU-sensitive application can be identified by CPU utilization: high CPU
> utilization means the application is CPU sensitive.
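As a rough illustration of that criterion, here is a minimal sketch (assuming
Solaris /proc; the helper name is made up) that reads a process's recent CPU
utilization from psinfo, where pr_pctcpu is a 16-bit binary fraction and
0x8000 means 100%:

#include <sys/types.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <procfs.h>

/*
 * Return the recent CPU utilization of a process as a percentage,
 * or -1.0 on error.
 */
double
proc_cpu_pct(pid_t pid)
{
    char path[64];
    psinfo_t ps;
    int fd;

    (void) snprintf(path, sizeof (path), "/proc/%d/psinfo", (int)pid);
    if ((fd = open(path, O_RDONLY)) < 0)
        return (-1.0);
    if (read(fd, &ps, sizeof (ps)) != sizeof (ps)) {
        (void) close(fd);
        return (-1.0);
    }
    (void) close(fd);
    /* pr_pctcpu: 0x8000 == 100% */
    return (ps.pr_pctcpu * 100.0 / 0x8000);
}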
On Wed, Jan 20, 2010 at 2:46 AM, Krishnendu Sadhukhan wrote:
>>
>> > PEBS is important in our NUMAtop design to measure memory access
>> > latency for applications. We will enhance kcpc to support PEBS and
>> > uncore events. If we can talk with CPC engineers and get their help,
>> > it wi
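(For context: the PEBS/uncore support discussed above is a proposed kcpc
enhancement. Reading an ordinary counter through the existing libcpc
interfaces already looks roughly like the sketch below; the event name is
only a placeholder, real platform-specific names can be listed with
cpc_walk_events_all(3CPC), and the program is compiled with -lcpc.)

#include <libcpc.h>
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
    cpc_t *cpc;
    cpc_set_t *set;
    cpc_buf_t *buf;
    uint64_t count;
    int idx;

    if ((cpc = cpc_open(CPC_VER_CURRENT)) == NULL)
        return (1);
    set = cpc_set_create(cpc);
    /* "PAPI_l3_tcm" is a placeholder; pick a memory event the CPU supports */
    if ((idx = cpc_set_add_request(cpc, set, "PAPI_l3_tcm", 0,
        CPC_COUNT_USER, 0, NULL)) == -1)
        return (1);
    buf = cpc_buf_create(cpc, set);
    if (cpc_bind_curlwp(cpc, set, 0) != 0)
        return (1);

    /* ... run the code of interest here ... */

    (void) cpc_set_sample(cpc, set, buf);
    (void) cpc_buf_get(cpc, buf, idx, &count);
    (void) printf("events counted: %" PRIu64 "\n", count);
    (void) cpc_unbind(cpc, set);
    (void) cpc_close(cpc);
    return (0);
}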
On Mon, Jan 18, 2010 at 5:31 PM, Jon Haslam wrote:
> Hi Aubrey,
>
>> NUMAtop will focus on NUMA-related characteristics. Yes, the information is
>> collected from memory-related hardware counters. Some of these counters are
>> already supported in kcpc and libcpc, while some of them are not. We
On Wed, Jan 6, 2010 at 5:53 AM, wrote:
> On Tue, Jan 05, 2010 at 04:27:03PM +0800, Li, Aubrey wrote:
>> >I'm concerned that unless we're able to demonstrate some causal
>> >relationship between RMA and reduced performance, it will be hard for
>> >customers to use the tools to diagnose problems.
ds. They should be reevaluated periodically, especially if
allocations larger than DBLK_MAX_CACHE become common. We use 64-byte
alignment so that dblks don't straddle cache lines unnecessarily.
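(A user-level analogy of the 64-byte alignment point above, just to
illustrate the idea; the in-kernel dblk caches are of course kmem caches,
not malloc'd buffers.)

#include <stdlib.h>

#define CACHE_LINE_SIZE 64

/*
 * Allocate a buffer that starts on a cache-line boundary, so a small
 * header placed at its front never straddles two cache lines.
 */
void *
alloc_cache_aligned(size_t size)
{
    void *p = NULL;

    if (posix_memalign(&p, CACHE_LINE_SIZE, size) != 0)
        return (NULL);
    return (p);
}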
Thanks
Zhihui
2009/7/13 zhihui Chen
> Tried more different settings for the TCP parameters and found the following
First modify the tcp parameter through ndd. From the application, it will try to set
the socket send buffer size to 1048576 through setsockopt. When tcp_max_buf is
set to less than 1048576, the setsockopt call fails and reports an error, but the
application continues to run.
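(Roughly what that application-side call looks like, as a sketch; the helper
name is made up. On Solaris the request is rejected when it exceeds
tcp_max_buf, so the return value has to be checked rather than ignored.)

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

/* Request a 1 MB send buffer; report failure instead of ignoring it. */
int
set_sndbuf_1mb(int fd)
{
    int size = 1048576;

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof (size)) != 0) {
        (void) fprintf(stderr, "SO_SNDBUF %d rejected: %s\n",
            size, strerror(errno));
        return (-1);
    }
    return (0);
}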
Thanks
Zhihui
2009/7/13 zhihui Chen
> Thanks stev
Thanks Steve and Andrew. I have tried the following two methods: (1) use mdb to
set mblk_pull_len to 0. The xcall rate is still very high, the same as before
doing that.
(2) Set tcp_max_buf to 65536 to control the size of the send and receive buffers;
the xcall rate and kernel utilization are reduced very much, and "mpstat
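(One way to confirm from the application side how much buffer the stack
actually granted after lowering tcp_max_buf; a sketch, helper name made up.)

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

/* Print the send buffer size the stack actually granted for this socket. */
int
print_sndbuf(int fd)
{
    int size = 0;
    socklen_t len = sizeof (size);

    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len) != 0) {
        perror("getsockopt");
        return (-1);
    }
    (void) printf("effective SO_SNDBUF: %d bytes\n", size);
    return (0);
}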
We are running a web workload on a system with 8 cores; each core has two
hyperthreads. The workload is network intensive. During the test we find that
the system is very busy in the kernel and has very high cross-call rates. DTrace
shows that most xcalls are caused by memory frees in the kernel. Any suggestions?
I set filesize to 2048m for singlestreamwrite and multistreamwrite, which I
think means filebench will generate a 2048m file. But while filebench is
running, I find that the size of the target file increases continuously until
filebench ends, which leads to a 30GB+ target file. The running time for these two
workloads