The logic is not complex; the whole thing is controlled here. The source code is the best documentation in this case:
https://github.com/apache/activemq-artemis/blob/1ba0b65babf298e4dc47951aa6998b2d0ac02be6/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/impl/QueueImpl.java#L3469-L3493

On 2025/01/22 16:25:19 Clebert Suconic wrote:
> Sure...
>
> I originally did not want to implement prefetch, but a user of mine had a
> case where they needed both a soft limit and a hard limit.
>
> At max-read, the system stops fetching data, period; no more messages will
> come.
>
> With prefetch, when you receive messages on the consumer it will still
> request more messages, but if you don't ack them, things stop at max-read.
> There will be a log.warn in the system, as you would otherwise starve.
>
> Example: say you configured max-read at 100, and you have a consumer doing
> the following:
>
> // create a transacted session (however you normally do that)
> Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
> MessageConsumer consumer = session.createConsumer(queue);
>
> for (int i = 0; i < 1000; i++) {
>     consumer.receive(); // notice this blocks forever if no messages are coming
> }
> // The system never reaches here, but max-read=100 and there are
> // 100 messages pending.
> session.commit();
>
> As you take messages out of the queue, the system tries to keep 10 messages
> in memory. But since you don't ack these messages, the system blocks at
> max-read.
>
> Makes sense?
>
> There are also byte-based versions of these metrics, where the payload size
> is used to manage the flow. The system always applies whichever rule
> triggers first.
>
> On 2025/01/15 06:32:52 s.go...@inform-technology.de wrote:
> > Hey Clebert,
> >
> > thank you for the tip about ARTEMIS-4447.
> > If I understand correctly, these settings influence how many messages of
> > a persisted page are loaded into memory.
> > This sounds quite likely.
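[Editor's note: the max-read and prefetch behavior described above is configured through address-settings. A minimal broker.xml sketch follows; the element names are the ones discussed in this thread, but the values and the "#" match are purely illustrative assumptions, not recommendations.]

```xml
<address-settings>
   <address-setting match="#">
      <!-- Hard limit: stop reading paged messages into memory beyond this count. -->
      <max-read-page-messages>100</max-read-page-messages>
      <!-- Soft limit: how many paged messages the broker tries to keep ahead in memory. -->
      <prefetch-page-messages>10</prefetch-page-messages>
      <!-- Byte-based counterparts (payload size); whichever limit trips first applies. -->
      <max-read-page-bytes>20971520</max-read-page-bytes>
      <prefetch-page-bytes>1048576</prefetch-page-bytes>
   </address-setting>
</address-settings>
```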
> > We have had similar issues in the past with many consumers that were
> > unable to dispatch messages to their clients, leading to massive paging
> > and OOM.
> > As we are still on 2.28, I will recommend that the team migrate to a more
> > recent version and look into these address settings.
> > But after a first look into those settings, I cannot quite figure out the
> > difference between max-read-page-messages and prefetch-page-messages.
> > Both claim to somehow limit the number of messages read into memory.
> >
> > Can you explain the difference?
> >
> > Many thanks
> >
> > Sebastian
> >
> > -----Original Message-----
> > From: Clebert Suconic <clebertsuco...@apache.org>
> > Sent: Tuesday, January 14, 2025, 18:11
> > To: users@activemq.apache.org
> > Subject: Re: Artemis - Help understanding heap dump
> >
> > As part of ARTEMIS-4447, I added parameters to address-settings for
> > prefetching.
> >
> > There were also some flow changes based on an issue I had with a user:
> > they had consumers prefetching for a long time, pulling more messages out
> > of paging into memory.
> >
> > You can now configure prefetch, and such conditions would just keep the
> > server from running out of memory.
> >
> > This is the most likely cause of your OOM issues, as it's similar to what
> > I dealt with for my user.
> >
> > On 2025/01/14 08:48:39 s.go...@inform-technology.de wrote:
> > > Hello group,
> > >
> > > Currently I am analyzing an OOM crash of a production broker (ActiveMQ
> > > Artemis 2.28). The broker ran under 64-bit OpenJDK 17.0.9+9 (Temurin)
> > > with 4 GB of heap configured via the -Xmx VM option and the G1 garbage
> > > collector.
> > >
> > > In the analyzer histogram I can see that there are 5 big consumers of
> > > the VM's heap. Four of those are very much the same:
> > >
> > > 1. One instance of
> > >    org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl
> > >    occupies 777.531.896 (18,18 %) bytes. The memory is accumulated in
> > >    one instance of java.util.TreeMap$Entry which occupies 327.069.808
> > >    (7,65 %) bytes.
> > > 2. One instance of
> > >    org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl
> > >    occupies 777.531.896 (18,18 %) bytes. The memory is accumulated in
> > >    one instance of java.util.TreeMap$Entry which occupies 327.069.808
> > >    (7,65 %) bytes.
> > > 3. One instance of
> > >    org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl
> > >    occupies 777.531.896 (18,18 %) bytes. The memory is accumulated in
> > >    one instance of java.util.TreeMap$Entry which occupies 327.069.808
> > >    (7,65 %) bytes.
> > > 4. One instance of
> > >    org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl
> > >    occupies 777.531.896 (18,18 %) bytes. The memory is accumulated in
> > >    one instance of java.util.TreeMap$Entry which occupies 327.069.808
> > >    (7,65 %) bytes.
> > >
> > > The fifth seems to be closely related to these four:
> > >
> > > 19 instances of java.util.TreeMap$Entry occupy 777.531.504 (18,18 %)
> > > bytes.
> > >
> > > Biggest instances:
> > >
> > > * java.util.TreeMap$Entry @ 0x7584c1ce8 - 231.178.240 (5,41 %) bytes
> > > * java.util.TreeMap$Entry @ 0x7094185d8 - 220.148.272 (5,15 %) bytes
> > > * java.util.TreeMap$Entry @ 0x77c22a440 - 116.097.024 (2,71 %) bytes
> > > * java.util.TreeMap$Entry @ 0x796469d78 - 103.186.432 (2,41 %) bytes
> > > * java.util.TreeMap$Entry @ 0x7c8c1e078 - 53.215.232 (1,24 %) bytes
> > >
> > > Most of these instances are referenced from one instance of
> > > org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1 which
> > > occupies 1.282.832 (0,03 %) bytes.
> > > The analyzer states:
> > >
> > > Common Path To the Accumulation Point:
> > >
> > > org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1 @
> > > 0x79e40f078 Thread-1
> > > (ActiveMQ-PageExecutor-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$9@4aeaadc1)
> > > <mat://object/0x79e40f078> Thread
> > >
> > > After a restart of the broker, memory consumption is still below
> > > 300 MB after 6 hours.
> > >
> > > Can someone tell me what the cause of this OOM exception might be
> > > (excessive paging, fast reconnects, zombie sessions, ...)?
> > >
> > > Kind regards
> > > Sebastian Götz

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@activemq.apache.org
For additional commands, e-mail: users-h...@activemq.apache.org
For further information, visit: https://activemq.apache.org/contact