Hi Bart,
Can you go into a little detail on the issues with
respect to this?
thanks,
max
> What does intrstat report in terms of interrupts? We've seen some
> issues with some BIOSes confusing edge-triggered vs. level-triggered
> interrupts.
>
> - Bart
>
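As a rough companion to intrstat, here is a DTrace sketch for watching raw
interrupt activity per CPU. It assumes the sdt interrupt-start probe that
intrstat itself is built on is available on your build:

    /* count interrupt handler invocations per CPU (sketch only;
       intrstat gives the per-device and %time breakdown) */
    sdt:::interrupt-start
    {
            @interrupts[cpu] = count();
    }

Run it with dtrace -s for a few seconds and compare the per-CPU totals
against what intrstat prints.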
Hi,
I can believe that Solaris 10 might be slow for some processes
doing forks. Who is "everybody", and how are you measuring time?
Also, are you using the GA of Solaris 10, or a Nevada build?
max
On Mon, 2006-01-23 at 04:32, Roman wrote:
> I'm not running any benchmarks, I
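Since part of the question above is how fork time is being measured, here is
a small DTrace sketch at the syscall layer that gives a latency distribution
instead of wall-clock timings. It is only a sketch; the exact fork syscall
name varies between builds, hence the wildcard:

    /* time whatever fork variant the libc wrapper ends up calling */
    syscall::*fork*:entry
    {
            self->ts = timestamp;
    }

    syscall::*fork*:return
    /self->ts/
    {
            @forktime[probefunc] = quantize(timestamp - self->ts);
            self->ts = 0;
    }

The quantize() output shows the per-call fork latency in nanoseconds, which
is a more useful comparison point than timing a whole workload.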
Hi Brendan,
I get the same behaviour you are seeing, at least initially.
Over time, the number of segvn_faults returns to ~129.
I don't use /tmp, as that may also skew your output (it is tmpfs, so
it lives in memory). Maybe the writes of pages in the file system
cache are skewing the results...
max
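If it helps, a minimal fbt sketch for seeing who is actually driving
segvn_fault() (assuming fbt can probe it on your kernel):

    /* count segvn_fault() calls per process name */
    fbt::segvn_fault:entry
    {
            @vfaults[execname] = count();
    }

Running that alongside the cp should show whether the ~129 faults are all
coming from cp itself or from other activity on the box.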
(PS.
the cp.
So, either your disk is very slow or there are other
busy processes on your old system.
max
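To separate "slow disk" from "other busy processes", the io provider is
handy. A sketch, keying outstanding I/Os on device and block number as in
the standard io provider examples:

    /* who issues disk I/O, and how long does each I/O take? */
    io:::start
    {
            start[args[0]->b_edev, args[0]->b_blkno] = timestamp;
            @who[execname] = count();
    }

    io:::done
    /start[args[0]->b_edev, args[0]->b_blkno]/
    {
            @lat["I/O latency (ns)"] =
                quantize(timestamp - start[args[0]->b_edev, args[0]->b_blkno]);
            start[args[0]->b_edev, args[0]->b_blkno] = 0;
    }

If the latency distribution is wide, the disk is the problem; if cp barely
shows up in the per-process counts, something else is keeping the disk busy.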
Quoting Brendan Gregg <[EMAIL PROTECTED]>:
G'Day Max,
On Wed, 11 Jan 2006 [EMAIL PROTECTED] wrote:
Hi Greg,
Upon further reflection (and running your script), I am very puzzled
that there is no trapping into the kernel at all until the next 56k
needs to be read in. (I guess I am assuming the hat layer is setting
up PTEs as the pages are brought in, not as cp is accessing them.)
What file system is your file in, and what hardware are you running on?
max
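One way to check that guess is to watch how often cp actually traps for
faults while the copy runs. A sketch, assuming the vminfo probes and fbt on
segvn_fault are available on your build:

    /* per-second fault activity for cp while the copy is running */
    vminfo:::as_fault
    /execname == "cp"/
    {
            @activity["as_fault"] = count();
    }

    vminfo:::maj_fault
    /execname == "cp"/
    {
            @activity["maj_fault (paged in from disk)"] = count();
    }

    fbt::segvn_fault:entry
    /execname == "cp"/
    {
            @activity["segvn_fault"] = count();
    }

    tick-1sec
    {
            printa(@activity);
            trunc(@activity);
    }

If the hat layer really does set up the mappings as pages are brought in,
the fault counts should stay low between the bursts where each new chunk is
read.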
Quoting Brendan
Hi Greg,
Maybe try segvn_faulta() also?
max
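For completeness, counting both entry points makes the split between
synchronous faults and the asynchronous fault-ahead path visible. A sketch,
assuming fbt can see both functions:

    /* synchronous vs. asynchronous segvn fault handling */
    fbt::segvn_fault:entry,
    fbt::segvn_faulta:entry
    {
            @calls[probefunc] = count();
    }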
Quoting Brendan Gregg <[EMAIL PROTECTED]>:
G'Day Folks,
I'm revisiting segvn activity analysis to see its hit rate from the page
cache. A while ago (before I had source code access) I tried writing this
from Kstat. Now wit