> While investigating this, we came up with a test scenario that
> consistently reproduced this behavior. The behavior is that if you
> have a system with 4gb of memory, and create a 1gb file in /tmp, and a
> 1gb file in /var/tmp, and then you start 2 processes each with an rss
> of about 1gb, your
Peter:
Would you describe your swap configuration? The output from df -hlk and
swap -l would be helpful.
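In case it's useful, here is a rough sketch of how the scenario above
could be reproduced while capturing that state (the file names are
made up; sizes per mkfile(1M)):

  # mkfile 1g /tmp/tmpfs-file      # /tmp is swap-backed tmpfs on Solaris
  # mkfile 1g /var/tmp/disk-file   # /var/tmp lives on disk
  # df -hlk                        # filesystem usage, including swap-backed /tmp
  # swap -l                        # swap devices and free 512-byte blocks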
Thanks,
-j
On Wed, Aug 15, 2007 at 08:17:05AM -0700, Peter C. Norton wrote:
> We use Solaris 10 at my company, but I noticed this behavior is the
> same/worse on SXDE, and I wanted to know i
Hi Alex:
These leaks in tar aren't enough to consume 1GB of memory. When
applications leak memory, that memory will be returned to the system
once the application exits.
What are you using to measure the amount of memory consumed by your
system? Can you explain why you think it is leaking memory?
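As a starting point, these are the sorts of commands I'd use to
measure it (the PID below is a placeholder):

  $ prstat -s rss             # processes sorted by resident set size
  $ pmap -x <pid>             # per-mapping RSS/anon breakdown for one process
  # echo ::memstat | mdb -k   # kernel-level page usage summary (needs root)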
Eric,
You're one of the leaders of the performance community, so don't forget
to vote for yourself. I think this would be great for the performance
community. I'm in favor of this project. (+1)
-j
On Tue, Jun 05, 2007 at 01:28:50PM -0700, Eric Saxe wrote:
>
> I'd like to ask the OpenSolaris performance community
> My idea was to provide a simple, "preconfigured" tool to do this.
Without writing any new code, it seems that we already have a tool for
doing this. The manpage for priocntl(1) is pretty explicit:
In addition to the system-wide limits on user priority
(displayed with priocntl -l)
I'm not sure that I understand why we need to introduce new code for
this kind of functionality. Why not use the FX class and assign your
batch processes priority 0 and a longer time quantum? priocntl(1)
explains the details about how one might accomplish this.
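As a sketch (untested; the quantum and command name are placeholders,
and the flags follow the FX class section of priocntl(1)):

  # Start a batch job in the fixed-priority class at user priority 0,
  # with a 200ms quantum (-t 200 at -r 1000, i.e. millisecond resolution):
  priocntl -e -c FX -m 0 -p 0 -t 200 -r 1000 ./batch_job

  # Or move an already-running process into FX:
  priocntl -s -c FX -m 0 -p 0 -t 200 -r 1000 -i pid 12345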
-j
On Wed, May 30, 2007 at 11:57:
Another place to start might be with Brendan Gregg's DTrace tools:
http://www.brendangregg.com/dtrace.html
His prustat, hotuser, hotkernel, and shortlived.d scripts might be
helpful in your situation.
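For example (invocations per the DTraceToolkit documentation; the PID
is a placeholder):

  # ./hotkernel           # sample which kernel functions are on-CPU
  # ./hotuser -p 12345    # sample user-land functions for one process
  # ./shortlived.d        # measure CPU time eaten by short-lived processes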
-j
On Mon, May 21, 2007 at 01:50:27PM -0700, Eric Saxe wrote:
> Jeffrey Collyer wrote:
> Be careful with %w... it's not that accurate. If you upgrade your
> e20k to Solaris 10, you'll lose that as iowait is no longer
> calculated (although the %w column is still there for output
> compatibility reasons). %b (% busy) is what you should be looking at
> instead.
That's not entirely
You might consider posting this question to install-discuss. Other
people have also complained about the RAM requirement. I don't believe
the installer group monitors the performance list.
-j
On Fri, Apr 20, 2007 at 06:25:16AM -0700, Joseph villa wrote:
> I just got my cd for the express edition
Ganesh:
> Next, when the client processes come up, they attach to all these shms
> created by the server process. When a client process calls "shmat" on
> the first 3 memory segments, shmat returns the same address which it
> returned to the server process (these addresses are stored in the 1st shm).
> But
I second Bart's endorsement, if it matters.
-j
On Thu, Mar 22, 2007 at 12:59:14PM -0700, Bart Smaalders wrote:
> Eric Saxe wrote:
> >I'd like to request endorsement from the performance community for the
> >"Enable/enhance Solaris support for Intel Project".
> >http://www.opensolaris.org/os/proj
> Thanks! As long as I can trust (more or less) the numbers for the
> USR/SYS and avoid the dampening effect of the regular prstat - I am
> happy.
I'm not sure I understand your comments about regular prstat. Would you
be kind enough to describe the dampening effect you've mentioned? I'd
be interested
> Perfect - this answers my question! No, I don't want to imply that the
> mechanism is broken - as long as I understand how to read the data and
> how to interpret the discrepancies.
With this problem, the data is difficult to interpret on Solaris 9
machines. If you were to request a bug to be filed
> OK, I think I got it, please correct if I am wrong:
>
> Suppose I do prstat -m 10, suppose a given lwp is in state X at
> the beginning and transitions to state Y 4 seconds after the beginning
> of the monitoring sample and back to X 1 sec later.
So, in this scenario the time that your
> This makes sense, however, I am still a bit confused. You are stating
> that in Solaris 9 microstate accounting only gets updated when the lwp
> transitions from one state to another.
No, microstate data only gets updated when the lwp transitions from one
state to another. This hasn't changed
Eugene:
I think I understand why your microstate values aren't adding up to
100%. The fact that you're running Solaris 9 has a lot to do with the
problem.
Microstate accounting only updates its timestamps when an lwp transitions
from one state to another. So, in your case, your lwp has been idle
What version of Solaris are you running? Any additional details you
could provide about your configuration and software would be helpful.
-j
On Fri, Feb 09, 2007 at 01:05:39PM -0800, Eugene Margulis wrote:
> I understand that prstat -m shows microstate wallclock utilization that
> should add up
> So let's say my CPUs are 1200MHz, does it make each increment
> 1/1,200,000 second?
1 MHz is 1,000,000 Hz. So if your CPU is 1200 MHz, that's 1,200,000,000
Hz. With this in mind, the increment would actually be 1/1,200,000,000
of a second.
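A quick worked example, with illustrative numbers:

  1,200,000,000 increments = 1 second of CPU time
      1,200,000 increments = 1 millisecond
        600,000 increments = 0.5 milliseconds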
> And if I have 600,000 interrupt 6 increments in one
> What exactly does "count of CPU cycles" mean here? CPU time? in nano
> seconds? number of interrupts?
Wikipedia has a relatively concise explanation of CPU clock rate here:
http://en.wikipedia.org/wiki/CPU_clock
In intrstat's case, the CPU cycles are read from an on-chip register
that advances with every clock cycle.
These numbers for level-XX are a count of CPU cycles spent in each
interrupt level.
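Assuming the kstat in question is the one named intrstat that you
mention, dumping it raw should show those counters:

  # kstat -n intrstat     # per-CPU level-XX cycle counts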
-j
On Thu, Jan 04, 2007 at 12:05:19PM -0800, Sean Liu wrote:
> This question is for Solaris 9 - I understand only Solaris 10 has intrstat,
> but there is also intrstat provider for kstat in solaris 9:
> #kstat -n
Konstantin:
> This is a single static RW lock which protects an array of pointers to
> data structures. This array slowly grows to a size that depends on the
> specific installation, growing by a pretty big increment at once.
> Say, on W this RW lock is locked once an hour. A lot of threads,
> which consume s
Konstantin:
> Write locks can be disregarded here; it's only read locks which cannot
> be executed in parallel. Nobody blames the OS and/or hardware.
> I'm trying to understand whether this is expected behavior or not.
I'm not sure I understand why you're saying that it's okay to disregard
write locks.
Are you
Konstantin:
Roch Bourbonnais has a blog entry about some of the less obvious
performance aspects of rwlocks:
http://blogs.sun.com/roch/entry/beware_of_the_performance_of
It may have information that is useful to your particular situation.
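If you'd like to see the contention directly, lockstat(1M) can record
it system-wide while a command runs (the intervals below are arbitrary):

  # lockstat -D 20 sleep 10      # top 20 lock contention events over 10s
  # lockstat -H -D 20 sleep 10   # same, but lock hold events instead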
> most of the time the rw lock is read-locked by several threads
> Yes I think this is something I'm looking for, but I'm not sure if I can
> use your Cap-Eye kernel image because my code changes are in kernel
> modules such as ip, sockfs and genunix. Nevertheless, I'll be happy to
> try it.
Cap-eye Install can handle changes to kernel modules. However, when y
There isn't any one answer to this question. Do you intend to use this
T2000 specific compilation option when you compile the applications that
you actually intend to run on this platform? What does the option do?
Usually it is desirable to use architecture-specific optimizations in
benchmarks
Adrian,
I agree. This is a good opportunity to discuss performance measurement
issues.
However, you asked if I would fix your particular issues while fixing
the bug that Mike found. I don't want to mislead anybody, so my
response was simply that I don't presently have the time to address
these a
Mike,
> As "meaningless" as iowait is/was, there was significant value once we
> got beyond "the cpu is busy in iowait... add more boards!" Getting to
> that point paid a couple of semesters of tuition for my salesman's
> daughters.
Iowait is an overloaded term in the Solaris kernel, so the conf
Adrian,
I agree that these metrics would be useful and nice to have; however,
their implementation is outside of the scope of what I intend to
address with this fix.
I'm simply fixing the functional regression in the kstats. I can't, in
good faith, make improvements to microstate accounting and
Adrian,
Perhaps I wasn't as clear as I should have been. I'm not going
to make any change that accounts for the amount of time a thread spends
waiting for I/O. Rather, I want to re-expose a statistic which, on a
per-CPU basis, lists the number of threads blocked in biowait().
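For reference, the system-wide cousin of this count is what vmstat
reports in its "b" column:

  $ vmstat 5      # "b" under "kthr" = kernel threads blocked on I/O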
-j
On Wed, Oct 18
Adrian,
biowait() still updates cpu_stats.sys.iowait. This means that Solaris
is still keeping track of the number of threads that are blocked waiting
for I/O. I see no reason not to expose this value to the user.
I certainly do not intend to re-introduce a CPU percentage of time
waiting for I/O,
Mike,
> On a related note, I have yet to see vmstat's "b" (kernel threads
> blocked on I/O) column be non-zero. This includes a large RAC
> environment where the previous measure of I/O health was (don't shoot
> the messenger!) iowait. The same workload on S9 consistently showed
> non-zero values
Disabling the ZIL will cause synchronous writes to occur
asynchronously. If any applications depend upon the use of synchronous
writes for their correctness, they're likely to be adversely affected.
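For context, the knob usually being discussed is the unsupported
zil_disable tunable; a sketch, assuming it's set via /etc/system
(takes effect at next boot):

  * /etc/system: disable the ZFS intent log (unsupported; see caveats above)
  set zfs:zil_disable = 1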
There are a number of ZIL performance fixes that have been checked in
recently, or that are in the
You might consider cross-posting this question to zfs-discuss. I've
been led to believe that there are some known issues with ZFS/NFS
performance. However, I don't know to what degree they would affect
your configuration.
I apologize for the relatively unhelpful response.
-j
On Sun, Oct 01,