Hi William-

A lot goes into performance tuning a broker for various use cases. 5 minutes 
is not a long time to run a test, either; I suggest running for a longer 
period so the caches are fully saturated.

First tip: I recommend measuring throughput in MB/s. Your two tests:

1,024 bytes x 1,700 msgs/s = 1,740,800 bytes/s (~1.7 MB/s)
1,024,000 bytes x 19 msgs/s = 19,456,000 bytes/s (~19.5 MB/s)

As you can see, the 1MB message test is pushing ~11x more data through the 
broker, even though the message rate is lower.

If you are using a transaction per message, that is a pattern with very high 
network latency (each commit is a synchronous round trip to the broker), 
which is what I suspect you are seeing.
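
Committing in batches amortizes that round trip. Here is a rough sketch in 
Java (broker URL, queue name, and batch size are placeholders, not taken from 
your setup; the jakarta.jms imports match the 6.x client, use javax.jms with 
5.x):

    import jakarta.jms.BytesMessage;
    import jakarta.jms.Connection;
    import jakarta.jms.MessageProducer;
    import jakarta.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class BatchedSend {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://broker-host:61616");
            try (Connection connection = factory.createConnection()) {
                connection.start();
                // Transacted session: sends are buffered and only become
                // visible on commit().
                Session session =
                        connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer =
                        session.createProducer(session.createQueue("TEST.QUEUE"));
                byte[] payload = new byte[1024];
                for (int i = 1; i <= 10_000; i++) {
                    BytesMessage message = session.createBytesMessage();
                    message.writeBytes(payload);
                    producer.send(message);
                    if (i % 100 == 0) {
                        // One synchronous round trip per 100 messages
                        // instead of one per message.
                        session.commit();
                    }
                }
                session.commit(); // commit any remainder
            }
        }
    }

If the producer fails mid-batch, the uncommitted messages are rolled back, so 
the batch size bounds how much a sender may need to resend after a failure.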

- Matt Pavlovich

> On Feb 27, 2025, at 9:39 AM, William Crowell <wcrow...@perforce.com.INVALID> 
> wrote:
> 
> Disclaimer before reading this: This is a tuning issue and not a performance 
> problem with ActiveMQ.
> 
> 
> 
> Description of the issue:
> 
> 
> 
> I believe I have a tuning problem with ActiveMQ 6.1.5 on Rocky Linux 9.5 
> running on a virtual machine with 8 vCPUs and 32GB of RAM.  The underlying 
> disk is an attached SSD.
> 
> 
> 
> I am using the ActiveMQ Classic Performance Module: 
> https://activemq.apache.org/components/classic/documentation/activemq-classic-performance-module-users-manual
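> 
> For reference, I am invoking the producer goal along these lines (goal and 
> property names as I read them from that manual page, so double-check them 
> there; the values are just illustrative):
> 
>     mvn activemq-perf:producer \
>       -Dfactory.brokerURL=tcp://broker-host:61616 \
>       -Dproducer.messageSize=1024 \
>       -DsysTest.numClients=25 \
>       -DsysTest.totalDests=50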
> 
> 
> 
> I am running the broker and the producer on separate virtual machine 
> instances.  The producer is running on a 12 vCPU host.  The broker and 
> producer VM instances are on the same VLAN.
> 
> 
> 
> 2 scenarios:
> 
> 
> 
> 1) Running a 5-minute test with 25 producer threads and 1K payloads on a 
> set of 50 queues, I observed an average of about 1,700 
> transactions/messages per second using the producer provided with the 
> plugin.
> 
> 
> 
> 2) Running a 5-minute test with 25 producer threads and 1MB payloads on a 
> set of 50 queues, I observed an average of about 19 transactions/messages 
> per second using the producer provided with the plugin.
> 
> 
> 
> I really expected the throughput to be a lot higher than what I am seeing.  
> The CPU utilization stays around 10-11% and there is plenty of free memory.  
> I did notice that disk saturation can reach 80-90% during a test with write 
> activity.
> 
> 
> 
> I am using the out-of-the-box settings for the systemUsage segment in 
> activemq.xml, which is 70% of the heap for memoryUsage, 100GB for 
> storeUsage, and 50GB for tempUsage.
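> 
> That corresponds to this stock block in activemq.xml (as shipped in the 
> default config):
> 
>     <systemUsage>
>         <systemUsage>
>             <memoryUsage>
>                 <memoryUsage percentOfJvmHeap="70"/>
>             </memoryUsage>
>             <storeUsage>
>                 <storeUsage limit="100 gb"/>
>             </storeUsage>
>             <tempUsage>
>                 <tempUsage limit="50 gb"/>
>             </tempUsage>
>         </systemUsage>
>     </systemUsage>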
> 
> 
> 
> I am going to try the same tests with the following settings:
> 
> 
> 
> On the broker:
> 
> 
> 
> Turning off journal disk syncing via journalDiskSyncStrategy (which I am 
> not a big fan of doing):
> 
> 
> 
> journalDiskSyncStrategy="never"
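> 
> That attribute goes on the KahaDB persistence adapter in activemq.xml:
> 
>     <persistenceAdapter>
>         <kahaDB directory="${activemq.data}/kahadb"
>                 journalDiskSyncStrategy="never"/>
>     </persistenceAdapter>
> 
> ("periodic" is the middle ground between "always" and "never": the journal 
> is synced on a timer, so a crash can lose at most one sync interval of 
> messages.)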
> 
> 
> 
> On the producer client running the Maven plugin:
> 
> 
> 
> factory.optimizeAcknowledge=true
> 
> factory.alwaysSessionAsync=false
> 
> factory.prefetchPolicy.queuePrefetch=2000
> 
> factory.prefetchPolicy.queueBrowserPrefetch=1000
> 
> factory.useAsyncSend=true
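> 
> For reference, the same options set programmatically on the connection 
> factory look like this (a sketch; the broker URL is a placeholder):
> 
>     import org.apache.activemq.ActiveMQConnectionFactory;
>     import org.apache.activemq.ActiveMQPrefetchPolicy;
> 
>     public class TunedFactory {
>         public static ActiveMQConnectionFactory create() {
>             ActiveMQConnectionFactory factory =
>                     new ActiveMQConnectionFactory("tcp://broker-host:61616");
>             factory.setOptimizeAcknowledge(true);  // batch consumer acks
>             factory.setAlwaysSessionAsync(false);  // allow dispatch on the connection thread
>             factory.setUseAsyncSend(true);         // do not wait for a broker receipt per send
>             ActiveMQPrefetchPolicy prefetch = factory.getPrefetchPolicy();
>             prefetch.setQueuePrefetch(2000);
>             prefetch.setQueueBrowserPrefetch(1000);
>             return factory;
>         }
>     }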
> 
> 
> 
> Is there anything I am forgetting that would increase throughput?  I feel 
> like I am leaving out something silly.
> 
> 
> 
> Regards,
> 
> 
> 
> William Crowell
> 
> 
> 
