Yes, the issue is known. However, we have not been able to create a test
case for it, since I've never been able to reproduce it.
One of the workarounds would be to close the selector, but that is a
royal pain, since you'd then have to re-register all keys and you'd end
up in a synchronization nightmare.
Filip
On 01/13/2010 07:57 AM, Tobias Lind wrote:
Hi!
We've been using Tomcat on Linux for a very long time (and the good old
JServ before it), and we recently started testing the NIO connector instead
of the old blocking one. We are currently running the latest Tomcat v6.0.20.
We have a pretty large website with quite a lot of traffic, and switching to
the NIO connector gives us a VERY good performance boost! We also got rid of
problems with hanging connections, etc., so it was very promising.
But it also gave us new headaches :/
We were using IBM's JDK 6.0_7 (the latest), and on the first test on our
production server the CPU hit 100% (everything started and the site worked,
though).
We installed Sun's JDK 1.6.0_17 instead, and the CPU was constantly running
at ~20-30% even when the traffic to the site was quite low. In about 24
hours of runtime, we also saw one occasion where the CPU went up to 100% and
never came down again (while no clients were actually hitting our server),
and it took a Tomcat restart to get it "down" to 30% again.
I started investigating and found quite a lot of reports of problems with
NIO and the Selector looping out of control.
Here are some links to pages about this problem:
http://bugs.sun.com/view_bug.do?bug_id=6403933
http://bugs.sun.com/view_bug.do?bug_id=6525190
http://forums.sun.com/thread.jspa?threadID=5135128
http://issues.apache.org/jira/browse/DIRMINA-678
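To illustrate what those reports describe, here is a rough sketch of the
failure mode (my own snippet, not Tomcat code; the 100 ms and 512-wakeup
thresholds are arbitrary): select() returns immediately with zero ready
keys, so the poller loop spins at full CPU instead of blocking:

    import java.io.IOException;
    import java.nio.channels.Selector;

    public class SpinWatch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            int emptyWakeups = 0;
            while (true) {
                long before = System.currentTimeMillis();
                int ready = selector.select(1000); // should block ~1s when idle
                long waited = System.currentTimeMillis() - before;
                if (ready == 0 && waited < 100) {
                    // Woke up early with nothing ready: the bug's signature.
                    if (++emptyWakeups > 512) {
                        System.err.println("selector appears to be spinning");
                        emptyWakeups = 0; // a real fix would rebuild the selector here
                    }
                } else {
                    emptyWakeups = 0;
                    // ... normal processing of selector.selectedKeys() ...
                }
            }
        }
    }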
A thread dump showed that it's very likely this problem we are seeing.
These threads are taking much more CPU than expected (although on Sun's JDK
it seems a bit better than on IBM's), and when the system load jumped to
100%, it was "http-80-ClientPoller-0" that was behaving badly:
"http-80-Acceptor-0" daemon prio=10 tid=0x0828d400 nid=0x7308 runnable
[0x4df19000]
java.lang.Thread.State: RUNNABLE
at
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:145)
- locked<0x547f84c8> (a java.lang.Object)
at
org.apache.tomcat.util.net.NioEndpoint$Acceptor.run(NioEndpoint.java:1266)
at java.lang.Thread.run(Thread.java:619)
"http-80-ClientPoller-1" daemon prio=10 tid=0x0825f400 nid=0x7307 runnable
[0x4df6a000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.PollArrayWrapper.poll0(Native
Method)
at
sun.nio.ch.PollArrayWrapper.poll(PollArrayWrapper.java:100)
at
sun.nio.ch.PollSelectorImpl.doSelect(PollSelectorImpl.java:56)
at
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked<0x54941568> (a sun.nio.ch.Util$1)
- locked<0x54941558> (a
java.util.Collections$UnmodifiableSet)
- locked<0x54941410> (a
sun.nio.ch.PollSelectorImpl)
at
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1545)
at java.lang.Thread.run(Thread.java:619)
"http-80-ClientPoller-0" daemon prio=10 tid=0x0831b400 nid=0x7306 runnable
[0x4dfbb000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.PollArrayWrapper.poll0(Native
Method)
at
sun.nio.ch.PollArrayWrapper.poll(PollArrayWrapper.java:100)
at
sun.nio.ch.PollSelectorImpl.doSelect(PollSelectorImpl.java:56)
at
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked<0x54941758> (a sun.nio.ch.Util$1)
- locked<0x54941748> (a
java.util.Collections$UnmodifiableSet)
- locked<0x54941610> (a
sun.nio.ch.PollSelectorImpl)
at
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at
org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:1545)
at java.lang.Thread.run(Thread.java:619)
I'm sure this issue is well known to the Tomcat community and that it has
been discussed before, but I'd just like to know the current status of the
issue.
The web pages I referred to above indicate that there are some workarounds
for this problem - are these workarounds implemented in Tomcat?
Is there anything we can do to get it running?
We'd REALLY like to use this connector, as it's performing a lot better.
Even though the CPU load is a lot higher, the clients on the site seem to
be served a lot better.
So running at 30-40% CPU could actually be OK for now, but when it also
jumps up to 100% and stays there, it's not usable...
We are running on quite an old Linux system with dual CPUs:
Linux www.kamrat.com 2.4.21-27.0.2.ELsmp #1 SMP Wed Jan 19 01:53:23 GMT 2005
i686 i686 i386 GNU/Linux
The issue seems to depend on the kernel. There are reports that it has been
fixed for 2.6+ kernels in later JDKs, while others say it is still there
(they also say that for 2.4 kernels the fix is more complicated and not yet
implemented). I'm also thinking that a slightly higher CPU load with NIO may
be normal on a 2.4 kernel because of the polling mechanism, but this seems a
bit TOO high, I think. And jumping to 100% is certainly not normal...
Does anyone know the status of this issue and how Tomcat is dealing with it?
Is there anything we can do/try?
Are any of you using the NIO connector on Tomcat 6.0.20 with the latest
JDK on Linux on a production site?
Are you seeing any of these issues?
If not, what kernel are you running?
We'd like to figure out whether upgrading to a newer kernel would help us...
But as it's our production machine, we'd like to mess with it as little
as possible :)
Here is our Connector config:
<Connector port="80"
protocol="org.apache.coyote.http11.Http11NioProtocol"
maxHttpHeaderSize="8192"
maxThreads="1000"
enableLookups="false"
redirectPort="8443"
acceptCount="100"
maxPostSize="4194304"
connectionTimeout="10000"
timeout="120000"
disableUploadTimeout="false"
pollerThreadCount="2"
acceptorThreadCount="1"
pollTime="4000"
processCache="600"
socket.processorCache="600"
socket.keyCache="600"
socket.eventCache="600"
socket.tcpNoDelay="true"
socket.soTimeout="10000"
/>
(I've been fiddling around a bit with the NIO-specific parameters, but I
haven't seen any significant change in CPU load.)
The startup options are quite normal:
export CATALINA_OPTS="-Dfile.encoding=ISO-8859-1"
export JAVA_OPTS="-Xms512m -Xmx1536m -server -Dfile.encoding=ISO-8859-1"
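In case it helps anyone compare selector implementations: per the
SelectorProvider javadoc, the JVM's choice of provider can be overridden
with the java.nio.channels.spi.SelectorProvider system property, and a tiny
check like the one below (my own snippet, nothing Tomcat-specific) prints
which one was picked. Our thread dumps show sun.nio.ch.PollSelectorImpl,
which matches the poll-based selector I'd expect on a 2.4 kernel; a 2.6
kernel with Sun JDK 6 should normally pick an epoll-based provider instead.

    import java.nio.channels.spi.SelectorProvider;

    public class WhichSelector {
        public static void main(String[] args) {
            // Prints the provider the JVM chose, e.g.
            // sun.nio.ch.PollSelectorProvider on a 2.4 kernel.
            System.out.println(SelectorProvider.provider().getClass().getName());
        }
    }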
There's nothing in the Catalina logs, and the site is actually running quite
well despite the high CPU load.
I would be very thankful if anyone has any hints or info that may help us!
Regards,
Tobias Lind, Sweden
P.S. We also tried the APR connector, and it also made the CPU run at
100%+. I didn't test it very thoroughly though...