Rallavagu,

On 3/10/16 8:38 PM, Rallavagu wrote:
> 
> 
> On 3/10/16 5:23 PM, Christopher Schultz wrote:
> 
> Rallavagu,
> 
> On 3/10/16 8:10 PM, Rallavagu wrote:
>>>> 
>>>> 
>>>> On 3/10/16 2:33 PM, Christopher Schultz wrote:
>>>> 
>>>> Rallavagu,
>>>> 
>>>> On 3/10/16 5:16 PM, Rallavagu wrote:
>>>>>>> On 3/10/16 2:09 PM, Christopher Schultz wrote:
>>>>>>> Rallavagu,
>>>>>>> 
>>>>>>> On 3/10/16 4:02 PM, Rallavagu wrote:
>>>>>>>>>> On 3/10/16 11:54 AM, Christopher Schultz wrote:
>>>>>>>>>>> Are you sure you have matched-up the correct
>>>>>>>>>>> thread within the JVM that is using all that
>>>>>>>>>>> CPU?
>>>>>>>>>> 
>>>>>>>>>>> How are you measuring the CPU usage?
>>>>>>>>>> 
>>>>>>>>>> It would be the thread ID output from "top -H", mapped to
>>>>>>>>>> the "Native ID" column in the thread dump.
>>>>>>> 
>>>>>>> My version of 'top' (Debian Linux) doesn't show thread
>>>>>>> ids. :(
>>>>>>> 
>>>>>>> I seem to recall having to do some backflips to
>>>>>>> convert native thread id to Java thread id. Can you
>>>>>>> explain what you've done to do that?
>>>>>>> 
>>>>>>>> A typical "top -H" shows the following:
>>>>>>> 
>>>>>>>> top - 11:40:11 up 190 days,  1:24,  1 user,  load average: 5.74, 6.09, 5.78
>>>>>>>> Tasks: 759 total,   4 running, 755 sleeping,   0 stopped,   0 zombie
>>>>>>>> Cpu(s): 18.4%us,  1.6%sy,  0.0%ni, 79.5%id,  0.1%wa,  0.0%hi,  0.5%si,  0.0%st
>>>>>>>> Mem:   8057664k total,  7895252k used,   162412k free,    63312k buffers
>>>>>>>> Swap:  2064380k total,   199452k used,  1864928k free,  2125868k cached
>>>>>>> 
>>>>>>>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>>>>>>> 15648 tomcat    20   0 9649m 4.8g 4520 R 87.3 62.6   7:24.24 java
>>>>>>>> 21710 tomcat    20   0 9649m 4.8g 4520 R 79.8 62.6   5:44.99 java
>>>>>>>> 21694 tomcat    20   0 9649m 4.8g 4520 S 74.3 62.6   5:39.40 java
>>>>>>>>  7889 tomcat    20   0 9649m 4.8g 4520 S 29.7 62.6   4:24.44 java
>>>>>>>>  7878 tomcat    20   0 9649m 4.8g 4520 S 27.8 62.6   4:36.82 java
>>>>>>>> 21701 tomcat    20   0 9649m 4.8g 4520 S 26.0 62.6   5:49.83 java
>>>>>>> 
>>>>>>>> After taking a thread dump, I used ThreadLogic, which shows a
>>>>>>>> "Native-ID" column that corresponds to the PID shown above.
>>>>>>> 
>>>>>>>> https://java.net/projects/threadlogic
>>>>>>> 
>>>>>>>> This helps determine which thread might potentially be causing
>>>>>>>> the high CPU.
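For the archives: the manual equivalent of that mapping, assuming a
HotSpot JVM on Linux, is to convert the thread PID from "top -H" to
hex and look for the matching "nid=" in a jstack thread dump. A
minimal sketch, with <tomcat-pid> standing in for your actual process
id:

    # 15648 is the hottest thread in the top -H output above
    printf 'nid=0x%x\n' 15648             # prints nid=0x3d20
    jstack <tomcat-pid> | grep -A 20 'nid=0x3d20'

The -A 20 just prints enough lines below the match to see the stack
trace for that thread.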
>>>> 
>>>> Okay. Are you serving a high rate of requests? It's possible
>>>> that the thread is just doing a lot of (legitimate) work.
>>>> 
>>>> The BIO connector is very basic: it uses blocking reads, and the
>>>> thread dump you posted before showed it waiting on IO, so
>>>> it should be completely idle, using no CPU time.
>>>> 
>>>> It's *possible* that it's in a busy-wait state where it is 
>>>> performing a very short IO-wait in a loop that it never
>>>> exits. But since you haven't specified any weird timeouts,
>>>> etc. on your connector, I'm skeptical as to that being the
>>>> cause.
>>>> 
>>>> This thread stays at high CPU usage for quite a while? And
>>>> every thread dump you do has the same (or very similar) stack
>>>> trace?
>>>> 
>>>>> The symptom is that the app is always high on CPU, hovering
>>>>> between 75 and 85 percent, so I looked at the thread dumps. Each
>>>>> dump shows a few high-CPU threads, and some of those are
>>>>> supposedly idle. When I tracked a particular thread by ID across
>>>>> dumps to see what it had been doing, some of them were reading
>>>>> on a socket. So that might be somewhat related to what you said
>>>>> about I/O wait.
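Worth noting: with the BIO connector, a thread blocked in
java.net.SocketInputStream.socketRead0() shows up as RUNNABLE in a
thread dump even though it is idle in the kernel, so "supposedly
idle" threads reading on a socket are not necessarily burning CPU. A
quick way to count them (assuming a standard HotSpot jstack dump;
<tomcat-pid> is a placeholder):

    jstack <tomcat-pid> | grep -c 'java.net.SocketInputStream.socketRead0'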
>>>> 
>>>>> Also, if BIO is basic, what are other options?
> 
> https://tomcat.apache.org/tomcat-8.0-doc/config/http.html#Standard_Implementation
> 
> The NIO connector is more scalable, but the BIO should use the
> *least* resources when handling modest loads. I wasn't suggesting
> that BIO should be avoided due to its simplicity... quite the
> contrary, I was suggesting that the BIO connector *should* be
> well-behaved.
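If you do decide to try NIO later, it is a one-attribute change on
the <Connector> element in server.xml. A sketch only -- your port and
other attributes will differ:

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               connectionTimeout="20000"
               redirectPort="8443" />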
> 
>> Sounds good. So the thread might simply be "winding down" from a
>> socket read? If many of those threads behave the same way, that
>> would push the CPU high. Is there anything I should look into or
>> configure to address this, or should I update to 7.0.68?

I would first upgrade to 7.0.68 and see if the problem goes away. If
not, report back. It would be good to know what kind of load the
server is under -- mostly the number of requests per second you are
actually processing, and how many request-processing threads exist for
the Connector (to get an average throughput per thread).
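If it helps, here is a rough way to get both numbers. This assumes
the AccessLogValve is enabled with its default timestamp format
(e.g. [10/Mar/2016:11:40:11 -0800]) and that your request threads use
the default http-bio-<port>-exec- naming; the log file name will
vary:

    # requests per second, busiest seconds first
    awk -F'[][]' '{print $2}' localhost_access_log.2016-03-10.txt \
      | cut -d' ' -f1 | uniq -c | sort -rn | head

    # how many request-processing threads currently exist
    jstack <tomcat-pid> | grep -c 'http-bio-8080-exec'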

-chris