> It looks strange, but for the operating system it isn't - those threads
> (or LWPs, as the man page reads) are waiting for an event or signal, at
> which point they will swap back in.
IIRC, there was a bug about this a while ago. Essentially, the FX and
RT scheduling classes didn't implement CL
G'Day,
On Tue, Jul 01, 2008 at 01:50:45AM -0700, Sebastian Fontaine wrote:
> Hi,
>
> on my 1280 with Solaris 10 U4, last patched in February, I found 60 swapped-out
> processes with vmstat.
> By restarting some processes like apache, rpcbind, etc., I could eliminate 50%.
> Now I still have some left:
>
> Does anybody have an idea how I could identify the PIDs of the swapped-out processes?
I'm not sure why there isn't a good way of doing this. Perhaps I've
missed a more obvious approach. I would do this with mdb.
As a privileged user, do the following:
# mdb -k
> ::walk proc pp | ::print proc_t p_swapcnt
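(Not part of the original mail, just an illustrative extension: ::print can follow
pointers with ->, so a line along these lines should print the PID next to the swap
count; the p_pidp->pid_id member path is from memory and worth verifying with
::print -t proc_t on your build.)

> ::walk proc | ::print proc_t p_pidp->pid_id p_swapcnt

Processes with a non-zero p_swapcnt are the ones with swapped-out LWPs.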
> > I am looking to improve my boot time, are there any
> > services that I can disable that could help? What
> > other things can I do?
> >
> > Thanks,
> > Nick
>
> This has been asked in the past, but either no one is
> skilled on the topic, or the people who are skilled
> have not replied!
If a process is swapped out, it will have nearly zero pages in memory.
Run prstat or ps and check the RSS; the PIDs you are interested in
will have near-zero RSS.
arwen:alias my_ps
my_ps='ps -e -o user,pid,rss,class,projid,comm'
my_ps | sort -k 3 | less
root 30 SYS 0 fsflush
roo
senthil ramanujam wrote:
> Good tips. I have been testing the performance difference of
> system calls, library routines, and IPC implementations between Solaris
> and Linux. This is one of the test programs I quickly put together to
> show the difference. I'll keep your options on the table and suggest
>
Good tips. I have been testing the performance difference of
system calls, library routines, and IPC implementations between Solaris
and Linux. This is one of the test programs I quickly put together to
show the difference. I'll keep your options on the table and suggest
them as soon as we get to this point.
man
Hi Dave,
hmmm, I was kind of expecting this answer. Thanks for your analysis.
senthil
On 7/1/08, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Ok, this pretty much verifies what I thought. In short, by running
> truss you have slowed things down enough in the application to significantl
Ok, this pretty much verifies what I thought. In short, by running truss
you have slowed things down enough in the application to significantly
remove the lock contention on the program's mutex. This is one of those
performance items that appears to be contradictory on the surface.
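(A minimal sketch, not from the thread, of the mechanism Dave describes: several
threads hammer one mutex with a tiny critical section, and an optional busy-wait
outside the lock stands in for the per-syscall overhead truss adds. With the delay
enabled, the threads collide on the mutex far less often, which plockstat can
confirm. The thread count, loop counts, and build line are arbitrary assumptions.)

/* contention_sketch.c - hypothetical demo, not the original test program.
 * Build (assumed): cc -o contention_sketch contention_sketch.c -lpthread
 * Run with no argument for the contended case; pass any argument to add a
 * small delay outside the lock, mimicking tracing overhead.
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS   4
#define ITERATIONS 1000000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long            counter;
static int             add_delay;      /* nonzero: burn cycles outside the lock */

static void *
worker(void *arg)
{
        long i;
        volatile long spin;

        (void) arg;
        for (i = 0; i < ITERATIONS; i++) {
                pthread_mutex_lock(&lock);
                counter++;                      /* short critical section */
                pthread_mutex_unlock(&lock);

                if (add_delay) {
                        /* Work done outside the lock; stands in for the
                         * per-syscall overhead truss would add. */
                        for (spin = 0; spin < 200; spin++)
                                ;
                }
        }
        return (NULL);
}

int
main(int argc, char **argv)
{
        pthread_t tids[NTHREADS];
        int i;

        (void) argv;
        add_delay = (argc > 1);

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tids[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tids[i], NULL);

        printf("counter = %ld (delay %s)\n", counter,
            add_delay ? "on" : "off");
        return (0);
}

Timing both runs with ptime and sampling each with plockstat should show far fewer
mutex spin/block events in the delayed run, which is the seemingly contradictory
effect described above.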
A bit
Could you please provide lock statistics from plockstat for each of these
cases? I think I know what you are seeing, but I want data to back up
the theory.
Thanks
Dave Valin
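(An illustrative plockstat invocation, not from the thread; <pid> is a placeholder
for the test program's process ID. -A collects both contention and hold events, -e
limits the sampling window to the given number of seconds, and -p attaches to a
running process.)

# plockstat -A -e 30 -p <pid>

Collecting this once for the run under truss and once for the plain run should make
the difference in mutex contention visible.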
senthil ramanujam wrote:
> Hi,
>
> I am seeing a strange performance behavior with pthread mutex
> operations on
Hi,
I am seeing a strange performance behavior with pthread mutex
operations on my Solaris (4-core) system running S10U5.
The issue is that if the program is run under truss, I am getting about
three times the performance of the run without truss. Strange...right?
The C program d
On Tue, Jul 01, 2008 at 01:50:45AM -0700, Sebastian Fontaine wrote:
> Hi,
>
> on my 1280 with Solaris 10 U4, last patched in February, I found 60 swapped-out
> processes with vmstat.
> By restarting some processes like apache, rpcbind, etc., I could eliminate 50%.
> Now I still have some left:
>
> >vmst
Hi,
on my 1280 with Solaris 10 U4, last patched in February, I found 60 swapped-out
processes with vmstat.
By restarting some processes like apache, rpcbind, etc., I could eliminate 50%.
Now I still have some left:
>vmstat 1 5
 kthr      memory            page            disk          faults      cpu
r