Hi Justin,

 Yes, I can clarify the idea.

 The product is configured and sold based on the load it can handle. For
example, one client may ask for the product to handle 20K requests while
another asks for 200K requests; the load is pre-determined by the
clients. The product is allowed to scale up and down (if required), but
while it scales itself it must prove that it is safe to do so and ask
for verification. This rarely happens in production.

 My issue while trying to test the *ON_DEMAND* policy is that it is hard
to show scalability working: I am not sure at what message count/load a
broker decides to send messages to another Artemis instance. We want
cluster resources to be used fairly, and with the ON_DEMAND policy I am
afraid some instances will see higher usage than others. For example, if
we are using this connection URL

 (tcp://node1:61611,tcp://node2:61616,tcp://node3:61616)

then node1 might end up more heavily loaded than node3.
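
On the client side, I am also experimenting with spreading connections
across the nodes explicitly. A minimal sketch (the URL parameter and
policy class name are from my reading of the Artemis client docs, so
please treat them as assumptions rather than something I have verified):

    // Sketch: JMS client that may connect to any node in the list; the
    // connectionLoadBalancingPolicyClassName URL parameter controls which
    // node each new connection is made to (round-robin here).
    import javax.jms.Connection;

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class AnyNodeClient {
        public static void main(String[] args) throws Exception {
            String url = "(tcp://node1:61611,tcp://node2:61616,tcp://node3:61616)"
                + "?connectionLoadBalancingPolicyClassName="
                + "org.apache.activemq.artemis.api.core.client.loadbalance"
                + ".RoundRobinConnectionLoadBalancingPolicy";
            ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(url);
            try (Connection connection = cf.createConnection()) {
                connection.start();
                // ... create a session and a producer/consumer as usual ...
            }
        }
    }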

Let me know if I am missing something here.
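
(For reference, the way I have been trying to observe where messages
actually land is the CLI queue stat tool, assuming I am reading its
output correctly:

    ./artemis queue stat --url tcp://node1:61611 --user admin --password admin
    ./artemis queue stat --url tcp://node2:61616 --user admin --password admin

Running it against each node in turn and comparing the message counts
per queue shows which instance is doing the work.)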

Regards,
Prateek Jain

--------------------------------------------------------------
EXPECTATION : Causes all troubles......
--------------------------------------------------------------


On Mon, Apr 10, 2023 at 9:01 PM Justin Bertram <jbert...@apache.org> wrote:

> No one is suggesting that you make your application code dependent on
> cluster size. I'm not sure where you're getting that idea.
>
> As noted, using ON_DEMAND with a redistribution-delay > 0 will allow
> clients to connect to any node of the cluster and consume messages sent to
> any other node in the cluster - at any time. This is the most flexible
> configuration for your clients as it makes them completely agnostic about
> the size of the cluster.
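>
> For example, the relevant pieces of broker.xml would look roughly like
> this (the connector and cluster names here are illustrative, not taken
> from your setup):
>
>     <cluster-connection name="my-cluster">
>        <connector-ref>netty-connector</connector-ref>
>        <message-load-balancing>ON_DEMAND</message-load-balancing>
>        <max-hops>1</max-hops>
>     </cluster-connection>
>
>     <address-settings>
>        <address-setting match="#">
>           <!-- > 0: give local consumers a chance before redistributing -->
>           <redistribution-delay>5000</redistribution-delay>
>        </address-setting>
>     </address-settings>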
>
> I'm not sure what you mean that your clients don't scale the cluster up or
> down "for scalability." Can you clarify this?
>
>
> Justin
>
> On Mon, Apr 10, 2023 at 2:31 PM prateekjai...@gmail.com <
> prateekjai...@gmail.com> wrote:
>
> > Hi Justin,
> >
> > Our product is sold to clients based on load. They usually don't scale
> > clusters up/down (for scalability). And we don't want to make application
> > code dependent on cluster size, because for some clients there could be a
> > 2-node cluster but for others it could be a 6-node cluster.
> >
> > Regards,
> > Prateek Jain
> > --------------------------------------------------------------
> > EXPECTATION : Causes all troubles......
> > --------------------------------------------------------------
> >
> >
> > On Mon, Apr 10, 2023 at 6:45 PM Justin Bertram <jbert...@apache.org>
> > wrote:
> >
> > > I think perhaps you misunderstood what I was recommending. You shouldn't
> > > need to adjust any client code. You certainly don't *need* to create a
> > > consumer on every node of the cluster as you imply. Using ON_DEMAND with
> > > a redistribution-delay > 0 will allow clients to connect to any node of
> > > the cluster and consume messages sent to any other node in the cluster -
> > > at any time.
> > >
> > > My point was that in order to optimize performance (i.e. the whole point
> > > of clustering in the first place) you should size your cluster based on
> > > the actual client load. To reiterate, if you don't have enough clients
> > > to avoid moving messages between cluster nodes then your cluster is
> > > likely too large. Another way to think about it is that if consumers are
> > > starving then your cluster is likely too large. Again, this goes back to
> > > using your resources effectively and efficiently. This is especially
> > > important in cloud use-cases where you may be paying by the hour to use
> > > a machine that isn't really necessary, but it also matters for
> > > bare-metal use-cases to avoid the expenditure of acquiring the nodes in
> > > the first place. The simplest architecture possible is always preferable
> > > as it reduces costs for development, deployment, and maintenance.
> > >
> > > If you're using a cluster of 3 pairs simply to establish a quorum to
> > > avoid split-brain with replication, then switch to the pluggable quorum
> > > voting and use ZooKeeper instead.
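> > >
> > > The broker.xml side of that looks something like the following (I'm
> > > writing this from memory, so double-check the element names against
> > > the docs and supply your own ZooKeeper connect string):
> > >
> > >     <ha-policy>
> > >        <replication>
> > >           <primary>
> > >              <manager>
> > >                 <properties>
> > >                    <property key="connect-string" value="zk1:2181,zk2:2181,zk3:2181"/>
> > >                 </properties>
> > >              </manager>
> > >           </primary>
> > >        </replication>
> > >     </ha-policy>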
> > >
> > >
> > > Justin
> > >
> > > On Mon, Apr 10, 2023 at 11:57 AM prateekjai...@gmail.com <
> > > prateekjai...@gmail.com> wrote:
> > >
> > > > Hi Justin,
> > > >
> > > >  Thanks for replying. There is a reason why I don't want to create a
> > > > consumer per broker/instance of Artemis. I am trying to come up with
> > > > an architecture for a product where an Artemis cluster can expand or
> > > > shrink without any impact on client code.
> > > >
> > > > With the suggested approach, client code has to be updated according
> > > > to the size of the cluster. So I was thinking: could it be possible
> > > > for a client to connect to any of the brokers and have messages
> > > > routed to it, because consumers might not always be online? Consumers
> > > > may connect only once in a while. This case becomes especially
> > > > important while upgrading clusters.
> > > >
> > > > Regards,
> > > > Prateek Jain
> > > >
> > > > --------------------------------------------------------------
> > > > EXPECTATION : Causes all troubles......
> > > > --------------------------------------------------------------
> > > >
> > > >
> > > > On Mon, Apr 10, 2023 at 4:39 PM Justin Bertram <jbert...@apache.org>
> > > > wrote:
> > > >
> > > > > While it's true that with ON_DEMAND you don't get the same behavior
> > > > > as STRICT (as one would expect), you still get the benefit of
> > > > > load-balancing because messages will be initially distributed to
> > > > > nodes that have consumers. This is why it's called "on demand" -
> > > > > messages are distributed to where the consumers are rather than
> > > > > strictly round-robined across all cluster nodes.
> > > > >
> > > > > You can achieve 1, 2, & 3 with ON_DEMAND and a redistribution-delay > 0
> > > > > [1]. This is the most common configuration.
> > > > >
> > > > > That said, clustering is all about increasing overall message
> > > > > throughput via horizontal scaling. In order to optimize performance
> > > > > you really don't ever want to move messages between nodes as that
> > > > > adds latency. You want every node in the cluster to have enough
> > > > > consumers to process all the messages sent to that node. If that's
> > > > > not the case, that's an indication that the cluster is, in fact,
> > > > > too large and you're wasting resources. I recently added a new
> > > > > section to the cluster documentation [2] discussing this very thing.
> > > > >
> > > > >
> > > > > Justin
> > > > >
> > > > > [1]
> > > > > https://activemq.apache.org/components/artemis/documentation/latest/clusters.html#message-redistribution
> > > > > [2]
> > > > > https://github.com/apache/activemq-artemis/blob/main/docs/user-manual/en/clusters.md#performance-considerations
> > > > >
> > > > > On Fri, Apr 7, 2023 at 7:51 AM prateekjai...@gmail.com <
> > > > > prateekjai...@gmail.com> wrote:
> > > > >
> > > > > > Hi Roskvist,
> > > > > >
> > > > > >  I tried the ON_DEMAND value but it still doesn't work. In fact,
> > > > > > with the ON_DEMAND value the load balancing stops and the whole
> > > > > > scalability feature of the cluster becomes irrelevant.
> > > > > >
> > > > > >  IMO, if the client for a queue/topic is connected to any of the
> > > > > > broker instances, then messages should get routed to it. So, in a
> > > > > > nutshell, what I am trying to achieve here is:
> > > > > >
> > > > > > 1. Deploy the cluster in such a way that it is easy to scale.
> > > > > > 2. Scalability should be transparent to the client code. Clients
> > > > > > should only know about broker IPs and ports.
> > > > > > 3. A client should be able to send and receive messages without
> > > > > > taking into consideration which broker it is connected to.
> > > > > >
> > > > > > I am able to achieve most of them. It is only the receiving part
> > > > > > which is not working as desired. The clustered-queue example
> > > > > > achieves something very similar, but it requires the client to be
> > > > > > connected to both instances. In my case, I am trying to achieve
> > > > > > it by connecting to any (live) instance in the cluster.
> > > > > >
> > > > > > Regards,
> > > > > > Prateek
> > > > > >
> > > > > >
> > > > > > On Fri 7 Apr 2023, 12:56 Roskvist Anton, <anton.roskv...@volvo.com> wrote:
> > > > > >
> > > > > > > Right, so as far as I am aware the STRICT load balancing policy
> > > > > > > does not allow for message redistribution; its purpose is to
> > > > > > > divide incoming messages evenly across the cluster regardless
> > > > > > > of client/consumer state. Perhaps ON_DEMAND might be better
> > > > > > > suited for your needs, or possibly OFF_WITH_REDISTRIBUTION and
> > > > > > > handling initial distribution of messages via client-side load
> > > > > > > balancing?
> > > > > > >
