I have a producer which sends persistent messages in /batches/ to a queue
using JMS transactions.
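For concreteness, the producer looks roughly like the sketch below (names, URL, and batch logic are my own illustration, not a definitive implementation; it assumes a running broker and the ActiveMQ client library on the classpath):

```java
// Sketch of a transacted batch producer (assumed names; illustrative only).
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BatchProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Transacted session: sends reach the broker when commit() is called.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(session.createQueue("foo"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        int batchSize = 2; // the batch size discussed below
        for (int i = 0; i < 100; i++) {
            producer.send(session.createTextMessage("message " + i));
            if ((i + 1) % batchSize == 0) {
                session.commit(); // one JMS transaction per batch
            }
        }
        session.commit(); // flush any trailing partial batch
        connection.close();
    }
}
```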

I have tested and found that /Producer Flow Control/ is applied when using a
batch size of 1. I could see my producer being throttled as per the memory
limit I have configured for the queue. Here's my Producer Flow Control
configuration:

    <policyEntry queue="foo" optimizedDispatch="true"
         producerFlowControl="true" memoryLimit="1mb">
    </policyEntry>

The number of pending messages in the queue stays under control, which I take
as evidence of /Producer Flow Control/ in action.
 
However, when the batch size is increased to 2, I found that this memory
limit is not respected and the producer is NOT THROTTLED at all: the number
of pending messages in the queue keeps growing until it hits the configured
/storeUsage/ limit.
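For context, the store limit I am referring to is the one set under /systemUsage/ in activemq.xml; the fragment below is the standard shape of that configuration (the limit value here is illustrative, not my actual setting):

    <systemUsage>
        <systemUsage>
            <storeUsage>
                <storeUsage limit="1 gb"/>
            </storeUsage>
        </systemUsage>
    </systemUsage>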

I understand this might be because messages are sent asynchronously when the
batch size is greater than 1, even though I haven't explicitly set
/useAsyncSend/ to /true/.

ActiveMQ's Producer Flow Control documentation
<http://activemq.apache.org/producer-flow-control.html> mentions that to
throttle asynchronous publishers, we need to configure a /Producer Window
Size/ on the producer, which forces the producer to wait for an
acknowledgement once the window limit is reached.
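For reference, I set the window on the ActiveMQ connection factory; per the ActiveMQ connection-configuration options this can be done via the broker URL or programmatically (the 1 MB value here is illustrative):

    // Assumes org.apache.activemq.ActiveMQConnectionFactory on the classpath.
    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
        "tcp://localhost:61616?jms.producerWindowSize=1048576");
    // equivalently: factory.setProducerWindowSize(1048576);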

However, when I configured /Producer Window Size/ in my producer and
attempted to send messages in batches, an exception was thrown and no
messages were sent.

This leads me to ask: is it possible to configure /Producer Window Size/
while sending persistent messages in /batches/?

If not, then what is the correct way to throttle producers that send
persistent messages in /batches/?



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/What-is-the-correct-way-to-throttle-ActiveMQ-producers-who-send-persistent-messages-in-batches-to-a--tp4701204.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.
