Do you have any scheduled services that could be eating memory? I would suspect that the garbage collector is using most of that CPU. Monitor the GC and how memory is used over time. Maybe there is a mismatch between the memory limit configured in Docker and the JVM heap settings.
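
As a rough sketch of what that could look like (assuming a reasonably recent JDK, 8u191+ or 11, and that your WildFly image picks up JAVA_OPTS; the log path and percentage are placeholders, not known values from your setup):

    # Let the JVM size its heap from the container's cgroup memory limit
    # instead of the host's total RAM, and keep a GC log to inspect later.
    JAVA_OPTS="$JAVA_OPTS -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
    # JDK 9+ unified GC logging; on JDK 8 use -XX:+PrintGCDetails -Xloggc:<file> instead.
    JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:file=/var/log/wildfly/gc.log:time,uptime,level,tags"

Comparing that GC log with the container's usage reported by `docker stats` after one of the quiet periods should show fairly quickly whether the heap and the Docker limit disagree.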

It seems like it is repeatable; that is good for troubleshooting, at least.

Mats


On 2019-05-16 03:33, JumpStart wrote:
Hi all,

My app is working brilliantly under load, but after a quiet time it can be very 
slow to respond, leading our first user of the day to tap the same thing 
multiple times, and the next thing you know, the CPU hits 100% and is stuck 
there, and none of those requests returns a response. Nor do any new requests 
return a response. Apache logs show that all the requests time out after 60 
secs, unanswered, and the health checkers start messaging the support staff.

Has anyone else experienced this kind of thing?

Perhaps it’s something to do with our infrastructure? We’re running Tapestry 
from an EAR in Wildfly in Docker in an AWS EC2 instance. Also in that EC2 
instance is Apache HTTPD in Docker.

Any thoughts, please! It’s a crazy problem.

Cheers,

Geoff
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tapestry.apache.org
For additional commands, e-mail: users-h...@tapestry.apache.org

--
---------------------- Mats Andersson | Ronsoft AB | +46(0)73 368 79 82
