I haven't had a chance yet. I hope to have a chance relatively soon.

Justin

On Tue, Apr 22, 2025 at 4:42 AM Luchi Mirko <mirko.lu...@siav.it> wrote:

> Hi Justin,
>
> have you had a chance to take a look at why message redistribution seems
> not to work when clustered message grouping is used?
>
> Thanks,
> Mirko
>
> ________________________________
> From: Luchi Mirko <mirko.lu...@siav.it>
> Sent: Tuesday, April 15, 2025 18:24
> To: users@activemq.apache.org <users@activemq.apache.org>
> Subject: Re: Clustered Grouping: how to make sure that each queue on the
> cluster has at least one consumer registered
>
> Hi Justin.
>
> Here you can find a sample project I created to demonstrate message
> redistribution in a 2-node cluster, with clustered grouping configured:
> https://github.com/mirkoluchi/activemq-artemis-message-grouping-redistribution
>
> The broker cluster is configured using the ActiveMQ Artemis operator
> (https://github.com/arkmq-org/activemq-artemis-operator).
> In detail:
>
>   * the broker is configured to have 2 nodes
>   * persistence is enabled
>   * the nodes are configured to have respectively a local grouping handler
>     and a remote grouping handler (configuration is done using a custom
>     init container)
>   * custom logging is enabled (via an extra mount) to turn on TRACE logging
>     on LocalGroupingHandler
>   * all the addresses are configured with:
>       * default group buckets set to 5
>       * default group rebalancing enabled
>       * default group rebalancing pause dispatch enabled
>       * redistribution delay set to 0
>
> apiVersion: broker.amq.io/v1beta1
> kind: ActiveMQArtemis
> metadata:
>   name: artemis-broker
> spec:
>   ingressDomain: localhost
>   deploymentPlan:
>     size: 2
>     persistenceEnabled: true
>     initImage: localhost:32000/siav-artemis-init-grouping-handler:1.0
>     extraMounts:
>      configMaps:
>       - "artemis-logging-config"
>   console:
>      expose: true
>   addressSettings:
>     applyRule: merge_all
>     addressSetting:
>     - match: '#'
>       defaultGroupRebalance: true
>       defaultGroupBuckets: 5
>       defaultGroupRebalancePauseDispatch: true
>       redistributionDelay: 0
>
> The custom init container makes sure that the first node includes the
> definition of a local grouping handler and the second node a remote
> grouping handler for the address used by the test.
> Node 1:
> <xi:include href="amq-broker/etc/local-grouping-handler.xml"/>
> Node 2:
> <xi:include href="amq-broker/etc/remote-grouping-handler.xml"/>
>
> The local-grouping-handler.xml is this:
> <grouping-handler xmlns="urn:activemq:core" name="my-grouping-handler">
>     <type>LOCAL</type>
>     <address>poc-index-request</address>
>     <timeout>5000</timeout>
> </grouping-handler>
>
> The remote-grouping-handler.xml is this:
> <grouping-handler xmlns="urn:activemq:core" name="my-grouping-handler">
>     <type>REMOTE</type>
>     <address>poc-index-request</address>
>     <timeout>10000</timeout>
> </grouping-handler>
>
> You can find the resulting broker configurations in the files broker1.xml
> and broker2.xml in the GitHub repo (along with the local-grouping-handler.xml
> and remote-grouping-handler.xml files that they include).
>
>
> The sample code is contained in the Maven project
> poc-artemis-message-grouping-with-redistribution.
> The project has a Main class that:
>
>   * starts 2 shared durable consumers (one per broker node) that consume,
>     in the background, messages published to the address poc-index-request
>     from their own queue poc-index-request-queue (a simplified consumer is
>     sketched right after this list)
>   * invokes a producer that produces a configurable number of messages
>   * waits a bit so that both consumers consume a few messages
>   * kills one consumer (at this point message redistribution is expected to
>     happen on the server side)
>   * waits until all the messages should have been consumed
>   * verifies whether or not all the messages have been consumed.
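>
> In essence, each of the two consumers does something like the following (a
> simplified sketch, not the exact code from the repo; the broker URL and the
> jakarta-based JMS client are assumptions here):
>
> import jakarta.jms.ConnectionFactory;
> import jakarta.jms.JMSConsumer;
> import jakarta.jms.JMSContext;
> import jakarta.jms.Topic;
>
> import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
>
> public class NodeConsumer {
>
>     public static void main(String[] args) {
>         // Connect directly to one node so that this consumer is bound to
>         // that node's queue (the URL below is illustrative).
>         String brokerUrl = args.length > 0 ? args[0] : "tcp://artemis-broker-ss-0:61616";
>
>         ConnectionFactory cf = new ActiveMQConnectionFactory(brokerUrl);
>         try (JMSContext context = cf.createContext()) {
>             Topic address = context.createTopic("poc-index-request");
>
>             // Shared durable subscription backed by the queue
>             // poc-index-request-queue on this node.
>             JMSConsumer consumer =
>                 context.createSharedDurableConsumer(address, "poc-index-request-queue");
>
>             // Consume until the process is killed; killing it is what should
>             // trigger redistribution (redistribution-delay is 0).
>             while (true) {
>                 String body = consumer.receiveBody(String.class, 1000);
>                 if (body != null) {
>                     System.out.println("Consumed: " + body);
>                 }
>             }
>         }
>     }
> }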
>
> If the test is run without message grouping (which can be done by setting
> Constants.MESSAGE_GROUPING_ENABLED=false) it succeeds: after the first
> consumer is killed, all messages still present in its queue are moved to
> the other queue and are therefore consumed by the second consumer.
> If the test is run with message grouping (by setting
> Constants.MESSAGE_GROUPING_ENABLED=true) it fails: after the first consumer
> is killed, the messages in its queue are not redistributed to the other
> queue.
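>
> The only difference between the two runs is whether the producer sets a
> group ID on each message, roughly like this (again a simplified sketch; the
> number of groups and the group-key derivation are just illustrative):
>
> import jakarta.jms.JMSContext;
> import jakarta.jms.JMSProducer;
> import jakarta.jms.Topic;
>
> public class ProducerSketch {
>
>     static void send(JMSContext context, int messageCount, boolean groupingEnabled) {
>         Topic address = context.createTopic("poc-index-request");
>         JMSProducer producer = context.createProducer();
>
>         for (int i = 0; i < messageCount; i++) {
>             if (groupingEnabled) {
>                 // JMSXGroupID is what Artemis uses to pin a whole group of
>                 // messages to a single consumer/queue.
>                 producer.setProperty("JMSXGroupID", "group-" + (i % 10));
>             }
>             producer.send(address, "message-" + i);
>         }
>     }
> }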
>
>
> Could you please take a look and tell me if there's something I'm missing
> in the configuration?
> Could you also explain why - at least from what I understand by looking at
> the source code and the logs - group buckets seem to be "used" inside the
> queue but are not taken into account by the grouping handlers?
> In other words, why do I end up with n group buckets on each broker rather
> than on the entire cluster when I configure n group buckets? If the reason
> to use group buckets is to avoid unbounded growth of the groupIds map, but
> the grouping handlers don't use them, we will still end up with unbounded
> growth of the groups map in the grouping handler object.
>
> Thanks.
> Mirko
>
>
> ________________________________
> From: Justin Bertram <jbert...@apache.org>
> Sent: Monday, April 14, 2025 20:09
> To: users@activemq.apache.org <users@activemq.apache.org>
> Subject: Re: Clustered Grouping: how to make sure that each queue on the
> cluster has at least one consumer registered
>
> Off the top of my head I can't think of any reason that message grouping
> necessarily wouldn't work with redistribution. Can you outline your exact
> use-case so I can test it myself? If you have a minimal, reproducible
> example (e.g. on GitHub or something) that would be ideal.
>
>
> Justin
>
> On Mon, Apr 14, 2025 at 2:20 AM Luchi Mirko <mirko.lu...@siav.it> wrote:
>
> >
> > Hi Justin,
> >
> > As far as the connection router is concerned, I'll take a closer look at
> > it (I quickly read the chapter some time ago but haven't given it any
> > serious thought yet).
> >
> > As far as message redistribution is concerned, I had already experimented
> > with that, but it seems to me that when message grouping is enabled,
> > redistribution does not work. I thought it was an issue with my
> > configuration, but if I repeat the test with the same configuration and
> > without message grouping (i.e. as soon as I stop sending messages with a
> > group ID), redistribution works like a charm.
> > Is message redistribution supposed to work even when message grouping is
> > used?
> >
> >
> > Mirko
> > ________________________________
> > From: Justin Bertram <jbert...@apache.org>
> > Sent: Friday, April 11, 2025 21:30
> > To: users@activemq.apache.org <users@activemq.apache.org>
> > Subject: Re: Clustered Grouping: how to make sure that each queue on the
> > cluster has at least one consumer registered
> >
> > > Is there a way to specify some kind of policy by which client should
> > > preferably first connect to queues without consumers...
> >
> > There is no policy to change where _consumers_ get connected, but you can
> > use a connection router [1] in combination with your cluster to
> > distribute _connections_ in a particular way.
> >
> > In any event, if you have redistribution [2] configured then messages
> > should not build up on any specific node as long as there is a consumer
> > on the subscription on at least one node in the cluster, so this problem
> > is likely moot.
> >
> > I realize you're just exploring "what can be done and what can't," but
> > the devil is in the details with this kind of thing, and as you add
> > layers of complexity it becomes increasingly difficult to say what's
> > possible and what isn't. Furthermore, nothing scales indefinitely, so the
> > more concrete you can be about your requirements the easier it will be to
> > provide clear answers. Unfortunately vague questions tend to get vague
> > answers.
> >
> >
> > Justin
> >
> > [1]
> > https://activemq.apache.org/components/artemis/documentation/latest/connection-routers.html#connection-routers
> > [2]
> > https://activemq.apache.org/components/artemis/documentation/latest/clusters.html#message-redistribution
> >
> >
> > On Fri, Apr 11, 2025 at 12:55 PM Luchi Mirko <mirko.lu...@siav.it> wrote:
> >
> > > Hi Justin.
> > >
> > > 1) I read that a single broker is often more than enough to handle a
> > > massive number of messages, but I haven't executed any performance tests
> > > yet.
> > > Since what we're doing is evaluating Artemis, we want to know what can
> > > be done and what can't.
> > > Besides, we don't know right now what size our future customers may be,
> > > so we'd like to understand whether clustering is feasible in case it is
> > > needed.
> > > I have already experimented with an HA configuration, but that's high
> > > availability, not load balancing.
> > >
> > > 2) Yes, I've read and understood them. But as I said, we are performing
> > > an evaluation and comparison with alternatives, and clustering is
> > > usually one of the parameters we evaluate.
> > >
> > > Could you please suggest an approach to address my requirement?
> > > Thanks.
> > >
> > >
> > > ________________________________
> > > From: Justin Bertram <jbert...@apache.org>
> > > Sent: Friday, April 11, 2025 7:29:09 PM
> > > To: users@activemq.apache.org <users@activemq.apache.org>
> > > Subject: Re: Clustered Grouping: how to make sure that each queue on the
> > > cluster has at least one consumer registered
> > >
> > > Before we get into the details of your use-case I have a few
> > > questions...
> > >
> > >  1) Have you conducted benchmark tests and demonstrated conclusively
> > > that you cannot meet your performance goals with a single broker (or HA
> > > pair of brokers)? If so, can you share any details about your testing
> > > and the results?
> > >  2) Have you read about and understood the performance considerations
> > > for clustering [1]?
> > >
> > >
> > > Justin
> > >
> > > [1]
> > > https://activemq.apache.org/components/artemis/documentation/latest/clusters.html#performance-considerations
> > >
> > > On Fri, Apr 11, 2025 at 9:39 AM Luchi Mirko <mirko.lu...@siav.it> wrote:
> > >
> > > > Hi,
> > > >
> > > > we are planning to adopt Artemis in our software solution as a
> > > > substitute for our current message broker (RabbitMQ).
> > > > We are especially interested in 2 features it offers:
> > > >
> > > >   * message grouping
> > > >   * automatic rebalancing of message groups.
> > > >
> > > > We need message grouping because our consumers must ensure that
> > > > certain groups of messages are processed serially.
> > > > We need rebalancing because we deploy our solution in an elastic
> > > > environment and we want to be able to scale the application
> > > > horizontally.
> > > >
> > > > Ideally, we would like to use a cluster of Artemis brokers for load
> > > > balancing, so we read the clustered grouping chapter of the
> > > > documentation
> > > > (https://activemq.apache.org/components/artemis/documentation/latest/message-grouping.html#clustered-grouping)
> > > > and we are aware of the potential pitfalls in this choice.
> > > >
> > > > To simplify, let's say we have a single address to which we publish
> > > > messages (each message with a groupId set), and multiple shared
> > > > durable subscriptions that load balance consumption of those messages.
> > > > Given that we will scale the application horizontally, we can assume
> > > > that each node of the Artemis cluster will eventually have a queue
> > > > bound to that address with some groupIds stuck to it.
> > > > We know that we must make sure that each of these queues has at least
> > > > one consumer attached, or we might find ourselves in a situation where
> > > > a message is routed (because of message grouping) to a queue that has
> > > > no consumers attached, and hence will never get a chance to be
> > > > consumed.
> > > >
> > > > We can make sure (by properly tuning the autoscaling of our consumer
> > > > pods) that the minimum number of consumers is always equal to or
> > > > greater than the number of broker nodes (i.e. the number of queues),
> > > > but even so it might happen that multiple consumers register on the
> > > > same queue on the same broker node, therefore leaving some queues
> > > > without consumers.
> > > > Is there a way to specify some kind of policy by which clients should
> > > > preferably connect first to queues without consumers, so that
> > > > consumers are distributed evenly and we can guarantee that no queue
> > > > will remain without a consumer (given of course that we make sure to
> > > > have at least as many consumers as broker nodes)?
> > > >
> > > > Thanks
> > > >
> > > >
> > > >
> > >
> >
>
