> stack traces for regular/large messages attached

Email attachments are scrubbed so none came through.

> The service is running within a Docker container and the folder containing
> the journal is mapped to the host machine.

Could you be more specific here? How exactly is the folder mapped? Is a
networked file-system involved?

> After some debugging we came to the conclusion that either the threads
> writing to the journal were blocked for an extended period of time, or the
> journal compact operation lasted a long time/was blocked for some reason
> and held the write lock on the journal during that time.

What specifically led you to this conclusion? It's hard to offer insight
without additional details, especially thread dumps from the time of the
incident. Have you seen this issue just once in over a year of service?


Justin

On Mon, Nov 18, 2019 at 11:34 AM Mario Mahovlić <mariomahov...@gmail.com>
wrote:

> We run Artemis embedded in our Spring service. It ran fine for over a year,
> but at some point we started getting timeout exceptions when producing
> messages to the queue (stack traces for regular/large messages attached).
>
> We produce both regular and large messages to the queue, and we got
> timeouts for both types (large messages are ~130 KB on average). The
> message production rate to the queue at the time of the incident was
> ~100k messages per hour.
>
> Artemis is running in persistent mode using a file journal on disk. As
> mentioned in the title, no error- or warn-level logs were logged on the
> Artemis server side, and the timeouts stopped after a service restart.
>
> The service is running within a Docker container and the folder containing
> the journal is mapped to the host machine.
>
> Metrics for the node on which the service was running show no disk I/O
> issues at that time.
>
> Artemis version: 2.6.4, Spring Boot version: 2.1.5.RELEASE
>
> Relevant Artemis settings (the rest of the settings are defaults):
>
> durable: true
> max-size-bytes: 1GB
> address-full-policy: FAIL
> journal-sync-non-transactional: false
> journal-sync-transactional: false
>
> If more info is needed we will try to provide it on request.
>
> After some debugging we came to the conclusion that either the threads
> writing to the journal were blocked for an extended period of time, or the
> journal compact operation lasted a long time/was blocked for some reason
> and held the write lock on the journal during that time.
>
> Unfortunately we took no thread dumps during the incident to see where
> exactly the threads were stuck. We didn't manage to find any similar
> incidents reported on these boards, so we would like to check whether
> anyone has any idea what might cause this behavior.
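For reference, the broker-side settings listed in the quoted message
(max-size-bytes, address-full-policy, and the journal sync flags) are one
way commonly applied to an embedded broker from a Spring Boot service via an
ArtemisConfigurationCustomizer. The sketch below is illustrative only: the
class, bean, and variable names are not taken from the original setup, and
the 1GB limit is spelled out in bytes.

    // Sketch only: applying the quoted settings to the embedded broker.
    // Names are illustrative and not taken from the original post.
    import org.apache.activemq.artemis.core.settings.impl.AddressFullMessagePolicy;
    import org.apache.activemq.artemis.core.settings.impl.AddressSettings;
    import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class EmbeddedArtemisTuning {

        @Bean
        public ArtemisConfigurationCustomizer artemisSettingsCustomizer() {
            return config -> {
                // journal-sync-non-transactional: false, journal-sync-transactional: false
                config.setJournalSyncNonTransactional(false);
                config.setJournalSyncTransactional(false);

                // max-size-bytes: 1GB and address-full-policy: FAIL,
                // applied to all addresses via the "#" wildcard match
                config.addAddressesSetting("#", new AddressSettings()
                        .setMaxSizeBytes(1024L * 1024L * 1024L)
                        .setAddressFullMessagePolicy(AddressFullMessagePolicy.FAIL));
            };
        }
    }

The "#" key is the Artemis wildcard address match, so the size limit and
full-address policy apply to every address unless a more specific match
overrides them.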