The case where it is necessary is when a client connection drops between the consumer sending an ack and the broker receiving it. The broker will then redispatch the message when the consumer reconnects (i.e. when failover kicks in). If there is no failover, the consumer and connection simply die. If the ack is part of a transaction, the duplicate can be tracked when the consumer id matches, and the transaction can be recovered/recreated by failover; if the id does not match, the transaction is rolled back.
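Both of the knobs mentioned below are client-side and live on the connection factory. A rough Java sketch, assuming the stock ActiveMQConnectionFactory setters (the broker URL and the values shown are just placeholders for illustration):

    import org.apache.activemq.ActiveMQConnectionFactory;

    // Configure the connection factory used by the consuming clients.
    ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("failover:(tcp://localhost:61616)");

    // Option 1: turn off the client-side duplicate audit entirely.
    factory.setCheckForDuplicates(false);

    // Option 2: keep the audit but track fewer producers, so a given
    // producer's bit-array bin is dropped (and reset) sooner.
    factory.setAuditMaximumProducerNumber(32);

If I remember correctly, the same properties can also be set on the connection URI (e.g. jms.checkForDuplicates=false, jms.auditMaximumProducerNumber=32) if you prefer URI-based configuration.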
The question for your app is: can the client accept a duplicate delivery if an ack is lost? If so, then disable checkForDuplicates. Also note that the audit bit array is per producer (message ids are assigned by producers), so you may be able to reduce the number of tracked producers so that a bit-array bin gets dropped sooner. There are options to control that; see setAuditMaximumProducerNumber.

On 13 January 2015 at 14:42, Hendley, Sam <sam.hend...@sensus.com> wrote:
> Due to https://issues.apache.org/jira/browse/AMQ-5016 our large and stable
> systems will hit the 2^31 + 1 message limit after a few days. I gather that
> the "checkForDuplicates" flag will prevent the "Duplicate dispatch on
> connection" issue. I can set that flag on just our highest volume queues but
> that just delays the problem a bit (when a lower volume queue hits the same
> limit), I would like to set that flag system wide but would like to know more
> about what I am disabling.
>
> I haven't been able to find a lot of documentation on this flag. What are the
> risks of disabling duplicate checking system wide? Are there any cases where
> this flag must stay enabled? Our usage of activemq is pretty basic, all
> AUTO_ACK and no transactions so I am guessing we should be safe but I can't
> be sure. We do use failover:tcp: connector but only for the reconnecting, we
> only have a single broker.
>
> Sam