On 10/31/16 12:54 PM, Mark Thomas wrote:
On 31/10/2016 19:52, Rallavagu wrote:
As per https://tomcat.apache.org/tomcat-7.0-doc/config/executor.html it
appears "maxIdleTime" could be what I am looking for. Is there a
corresponding config parameter for the internal thread pool?
On 10/31/16 12:44 PM, Mark Thomas wrote:
On 31/10/2016 19:38, Rallavagu wrote:
Here is the configuration snippet.
As you can see, _not_ using executor. Thanks.
When using the internal thread pool, that delay is hard-coded to 60s. If
you want to configure it, use an Executor.
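For reference, a sketch of what that might look like in server.xml
(names and values here are illustrative, not taken from the original
configuration):

  <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
            maxThreads="150" minSpareThreads="4"
            maxIdleTime="30000"/>
  <!-- maxIdleTime is in milliseconds; its default of 60000 matches
       the hard-coded 60s of the internal pool -->
  <Connector port="28080" protocol="HTTP/1.1"
             executor="tomcatThreadPool"
             connectionTimeout="20000"/>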
All,
Tomcat 7.0.70 with bio connector
I have been monitoring JMX for "currentThreadCount" and
"currentThreadsBusy". If I understand correctly, currentThreadCount
would not exceed the "maxThreads" configuration, and "currentThreadsBusy"
apparently shows the threads that are "busy" at that moment.
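For anyone who wants to poll those two attributes programmatically
rather than from a JMX console, here is a minimal sketch. It assumes
remote JMX is enabled on port 9010 and that the connector is the
"http-bio-28080" one seen elsewhere in this thread; both are
assumptions to adjust.

  import javax.management.MBeanServerConnection;
  import javax.management.ObjectName;
  import javax.management.remote.JMXConnector;
  import javax.management.remote.JMXConnectorFactory;
  import javax.management.remote.JMXServiceURL;

  public class ThreadPoolProbe {
      public static void main(String[] args) throws Exception {
          JMXServiceURL url = new JMXServiceURL(
                  "service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
          JMXConnector jmxc = JMXConnectorFactory.connect(url);
          try {
              MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
              // Tomcat 7 quotes the connector name in the ObjectName
              ObjectName pool = new ObjectName(
                      "Catalina:type=ThreadPool,name=\"http-bio-28080\"");
              System.out.println("currentThreadCount = "
                      + mbs.getAttribute(pool, "currentThreadCount"));
              System.out.println("currentThreadsBusy = "
                      + mbs.getAttribute(pool, "currentThreadsBusy"));
          } finally {
              jmxc.close();
          }
      }
  }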
Tomcat 7.0.70, JDK 1.7.0_80
$java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
There are many threads BLOCKED as below, and because of this those
threads are very slow. Has anybody else experienced this?
On 10/14/16 1:01 AM, Mark Thomas wrote:
On 12/10/2016 16:22, Rallavagu wrote:
Tomcat 7.0.70 - Sun JDK 1.7.0_80
I have the following long-running thread (almost 5 sec).
No you don't. That thread isn't doing any work. It is blocking,
waiting for I/O.
Tomcat 7.0.70 - Sun JDK 1.7.0_80
I have the following long-running thread (almost 5 sec). It appears to
be reading data from a socket (an external resource, potentially). I
wonder how I could go about debugging these kinds of threads to
understand which external resource it is spending more time reading
from. Thanks.
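If the slow party is an external resource, one way to surface and
bound it is to put explicit timeouts on the outbound call. A minimal
sketch, assuming the app uses plain java.net.URLConnection (the URL
and limits below are made up for illustration):

  import java.io.InputStream;
  import java.net.URL;
  import java.net.URLConnection;

  public class TimedFetch {
      public static void main(String[] args) throws Exception {
          // hypothetical backend endpoint
          URL url = new URL("http://backend.example.com/resource");
          URLConnection conn = url.openConnection();
          conn.setConnectTimeout(3000); // fail fast on unreachable hosts
          conn.setReadTimeout(5000);    // SocketTimeoutException instead
                                        // of an open-ended socketRead0
          long start = System.nanoTime();
          InputStream in = conn.getInputStream();
          try {
              byte[] buf = new byte[8192];
              while (in.read(buf) != -1) { /* drain the response */ }
          } finally {
              in.close();
              System.out.println("read took "
                      + (System.nanoTime() - start) / 1000000 + " ms");
          }
      }
  }

That way a slow dependency shows up as a logged timeout rather than a
thread parked in socketRead0 for the duration.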
One thing I would check is whether it is Tomcat that is sending it or
an intermediary load balancer.
On 8/31/16 8:50 AM, john.e.gr...@wellsfargo.com wrote:
All,
I'm using Tomcat 7.0.70 and am having trouble understanding why Tomcat
is sending "Connection: close" in the response header as often as it
does.
Tomcat 7.0.47 running on Linux
I have started investigating after noticing following messages from
"dmesg" output on a production server.
"possible SYN flooding on port 28080. Sending cookies."
Started looking into this as the connections to this server are timing
out (Connect Timeout errors).
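That dmesg line means the listen backlog on port 28080 overflowed and
the kernel fell back to SYN cookies. Two places to look, sketched
below with illustrative values: acceptCount, which becomes the backlog
of the connector's server socket, and the kernel limits that cap it.

  <Connector port="28080" protocol="HTTP/1.1"
             maxThreads="150" acceptCount="200"/>

  # kernel side (Linux): the effective backlog is capped by these
  sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog

If all request threads are busy, though, a bigger backlog only delays
the symptom; the queue fills because connections are not being
accepted and processed fast enough.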
Tomcat 7.0.47, JDK 7
When an app is hot deployed in place by simply copying the .war file
into the webapps directory, the old WebappClassLoader is not cleared
completely because of an inefficient context shutdown in the app. In
this case, two instances of WebAppClassLoader end up running for the
same application.
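The usual cure is to make the context shut down cleanly: stop any
threads the app itself started and deregister anything that pins the
old classloader. A minimal sketch of such a listener; the executor
field is a stand-in for whatever the app actually leaks:

  import java.sql.Driver;
  import java.sql.DriverManager;
  import java.util.Enumeration;
  import java.util.concurrent.ExecutorService;
  import javax.servlet.ServletContextEvent;
  import javax.servlet.ServletContextListener;

  public class CleanShutdownListener implements ServletContextListener {
      private ExecutorService appExecutor; // hypothetical app-owned pool

      public void contextInitialized(ServletContextEvent sce) {
          // ... create appExecutor, register drivers, etc.
      }

      public void contextDestroyed(ServletContextEvent sce) {
          if (appExecutor != null) {
              appExecutor.shutdownNow(); // app threads pin the old loader
          }
          // deregister JDBC drivers loaded by this webapp's classloader
          Enumeration<Driver> drivers = DriverManager.getDrivers();
          while (drivers.hasMoreElements()) {
              Driver d = drivers.nextElement();
              if (d.getClass().getClassLoader()
                      == getClass().getClassLoader()) {
                  try {
                      DriverManager.deregisterDriver(d);
                  } catch (Exception e) {
                      // best-effort cleanup; log and continue
                  }
              }
          }
      }
  }

The "Find leaks" button in the Tomcat manager app will tell you
whether the old WebAppClassLoader is actually gone after a redeploy.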
Tomcat 7.0.47, JDK 7
I have the following long-running socket-write thread (more than 10
sec). Wondering what could cause this so I can investigate further.
"http-bio-28080-exec-1497" daemon prio=10 tid=0x7f812c230800
nid=0x72fa runnable [0x7f80010f9000]
   java.lang.Thread.State: RUNNABLE
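A write that sits in socketWrite0 for 10+ seconds usually means the
peer is reading slowly and the TCP send buffer is full. The Send-Q
column shows how backed up each peer is (pid is a placeholder):

  netstat -tnp 2>/dev/null | grep '<pid>/java'
  # or, with newer tooling:
  ss -tnp | grep 'pid=<pid>'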
On 3/10/16 2:09 PM, Christopher Schultz wrote:
Rallavagu,
On 3/10/16 4:02 PM, Rallavagu wrote:
On 3/10/16 11:54 AM, Christopher Schultz wrote:
Are you sure you have matched up the correct thread within the
JVM that is using all that CPU?
All,
From a thread dump and the corresponding "top" output, it appears
that the following thread is consuming significant CPU (around 80%):
"http-bio-28080-exec-437" daemon prio=10 tid=0x7f4acc0de000
nid=0x54ce waiting on condition [0x7f4b038f7000]
java.lang.Thread.State: TIMED_WAITING
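To verify the match Christopher is asking about: take the per-thread
view from top, convert the hot TID from decimal to hex, and look for
that nid in the dump. For the thread above (pid is a placeholder):

  top -H -b -n 1 -p <pid> | head -20   # the PID column here is the TID
  printf '%x\n' 21710                  # 21710 decimal -> 54ce = nid=0x54ce

Note that a thread reported as TIMED_WAITING at dump time is normally
parked, not burning CPU, which is exactly why the match is worth
double-checking.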
On 1/19/16 2:43 PM, Mark Thomas wrote:
On 19/01/2016 22:36, Rallavagu wrote:
Also, it could be keep-alive for the client connection as well. In any
case, how long will a keep-alive connection be in this state by
default? Thanks.
This behaviour is entirely normal. Why are you concerned about it?
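For reference on the default: with the BIO connector, keepAliveTimeout
falls back to connectionTimeout when not set explicitly (20 seconds in
the stock server.xml). Making both explicit looks like this (values
are illustrative):

  <Connector port="28080" protocol="HTTP/1.1"
             connectionTimeout="20000"
             keepAliveTimeout="15000"
             maxKeepAliveRequests="100"/>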
Thanks Mark. It seems to be running for almost 10 seconds and there is
a Load Balancer in between. Is it a suspect?
On 1/19/16 2:09 PM, Mark Thomas wrote:
On 19/01/2016 21:42, Rallavagu wrote:
I have this long-running thread. It appears to be reading, but the
stack trace does not give much of a clue. Could anyone help with where
to start? Thanks.
Tomcat 7.0.42 with JDK 7
"tomcat-exec-2655" daemon prio=10 tid=0x7fc459061000 nid=0x6a58
runnable [0x7fc4a67e6000]
   java.lang.Thread.State: RUNNABLE
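Given the thread is RUNNABLE inside a socket read, one place to start
is the OS side: find out which peers the JVM is talking to, or attach
to the thread itself (pid is a placeholder; 27224 is nid=0x6a58 in
decimal):

  # list the JVM's open TCP connections and their peers
  lsof -nP -iTCP -a -p <pid>
  # watch what that one thread is doing in the kernel
  strace -p 27224 -e trace=read,recvfrom -T

The -T flag prints the time spent in each syscall, which shows
directly whether a single read is stalling for seconds.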
This usually means that the client has disconnected before the request
could be completed. Generally, this might happen when a user navigates
away from a web page before it is completely rendered. You might want
to gather more information to understand this better.
On 10/26/15 7:15 AM, Yogesh Pa
Please take a look at the Memory Analyzer tool
(http://www.eclipse.org/mat/). Run the app, take a heap dump while the
app is running, and use the tool to analyze it. You could use VisualVM
with plugins to get instrumentation, or you could use hprof
(http://docs.oracle.com/javase/7/docs/technotes/sam
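For the heap dump itself, jmap from the JDK is the usual route (file
path and pid are placeholders):

  jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>
  # then open /tmp/heap.hprof in MAT or VisualVM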
Tomcat 7.0.14
Looking into a thread dump (after tomcat stopped accepting connections)
http-bio-8080-acceptor is missing while http-bio-8080-AsyncTimeout
thread is present.
The application is deployed to a non root context while ROOT is served
by tomcat's default ROOT app.
What could have caused this?
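A quick check, since the acceptor thread is named after the connector,
is whether it still shows up in a fresh dump (pid is a placeholder):

  jstack <pid> | grep -i acceptor
  # expected for this connector: "http-bio-8080-Acceptor-0"

If it is gone, look for an earlier OutOfMemoryError or an uncaught
exception in the logs around the time it died.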
Logs are written to local disk. Don't have data on disk stats. Thanks.
> On May 5, 2014, at 7:19 AM, Daniel Mikusa wrote:
>
>> On May 3, 2014, at 9:08 PM, Rallavagu wrote:
>>
>> Here is the thread BLOCKED waiting on another lock.
>>
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Locked ownable synchronizers:
- <0x0007e1b0f8d0> (a
java.util.concurrent.ThreadPoolExecutor$Worker)
at org.hibernate.internal.QueryImpl.list(QueryImpl.java:101)
On 5/2/14, 9:19 PM, Christopher Schultz wrote:
Rallavagu,
On 5/2/14, 6:22 PM, Rallavagu wrote:
All,
Tomcat Version: 7.0.47
JVM Version: 1.7.0_51-b13
I see many blocked threads (90) in the thread dump. There are mainly two
monitors that block 69 threads.
One of them is below. It appears that it is simply trying to log.
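To see which monitors are responsible from a dump like this one, count
the "waiting to lock" lines per monitor address (pid is a placeholder;
the patterns match the standard HotSpot dump format):

  jstack <pid> > dump.txt
  grep -c 'java.lang.Thread.State: BLOCKED' dump.txt
  grep 'waiting to lock' dump.txt | sort | uniq -c | sort -rn | head

If the hottest monitor turns out to be a logging appender, the usual
remedies are cutting log volume or moving to an asynchronous appender.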