David Lutz wrote:
lwp_park() was called with a timeout period. An exit
with ETIME just means that the timeout expired. It isn't
necessarily an error.
Yeah, it seems these are just expiring connections.
Looking at the problem from a different view, it looks almost like the
e1000g bug 671
Mika Borner wrote:
Darren Reed wrote:
Is any of this driven out of inetd?
No, it's a separate process that spawns LWPs.
I have now trussed the unresponsive process and can see:
/3:  lwp_park(0xFEFD7DA8, 0)  Err#62 ETIME
A couple of minutes later all network interfaces
--Mika
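
To illustrate the point about ETIME: a plain timed condition wait is enough to
produce exactly those truss lines, because libc implements timed waits on top
of lwp_park() with a timeout. A minimal sketch (this is not the proxy's code,
just the generic pattern):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

int
main(void)
{
        pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
        pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
        struct timespec ts;
        int rc;

        /* Wait at most two seconds; nobody ever signals the condition. */
        (void) clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += 2;

        (void) pthread_mutex_lock(&m);
        rc = pthread_cond_timedwait(&cv, &m, &ts);
        (void) pthread_mutex_unlock(&m);

        /*
         * ETIMEDOUT here is the normal "timeout expired" result; under
         * truss the same event shows up as lwp_park() returning ETIME.
         */
        (void) printf("pthread_cond_timedwait: %s\n", strerror(rc));
        return (0);
}

Trussing a program like that should show the same lwp_park(..., timeout) /
Err#62 ETIME pattern for every expired wait, so by itself it only tells you
that threads are sitting in timed waits, not why the box stops answering.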
Darren Reed wrote:
What launches the loginproxy command?
The loginproxy is an IMAP4 proxy.
There is a setting for MaxThreads (currently set to 16384) and
MaxSocketsPerThread (currently set to 128).
The documentation says about MaxThreads: "The number of available
POP3/IMAP4 server threads. Each
Hi,
on a T2 (8-core) we see unresponsiveness when there is a high number of
network connections, even on interfaces that do not carry a high payload.
Logins can take ages.
intrstat shows that almost all interrupts are on one strand for the
payload interface (e1000g1)
device | cpu5
> It's exactly the problem that Bart already mentioned
> in a previous
> reply. The sparc ide/ata driver is spinning in the
> kernel in ata_wait:
>
> 6421427 netra x1 slagged by NFS over ZFS leading to
> long spins in the ATA driver code
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bu
>
I am having the same problem on a Sun Blade 1500 with Sol10U2. I'm creating an
Oracle Instance and the system performance is horrible.
Any ideas? Here is the output (Sorry, attaching files did not work!):
-
Uptime says load average (5min): 13.09,
Lockstat output (truncated):
ba