We've had a similar issue in the past, which went away with the following
changes:

* adding closeAsync=false to transportConnectors
* using nio instead of tcp in transportConnectors
* setting ulimits to unlimited for the activemq user
* fine-tuning kahaDB settings (enableIndexWriteAsync="true"
enableJournalDiskSyncs="false" journalMaxFileLength="256mb")
* fine-tuning topic and queue settings (destinationPolicy)
* enabling producer flow control
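For reference, the settings above would land in activemq.xml roughly like the
sketch below. This is only an illustration of where each option goes, not our
exact config: the host/port, connector name, and the ">" wildcard policy
entries are placeholders, and closeAsync is passed with the transport. prefix
the same way soWriteTimeout is in your config.

```xml
<!-- Illustrative activemq.xml fragment; names and URIs are placeholders. -->

<!-- nio scales to more connections than tcp; transport.closeAsync=false
     makes socket closes synchronous so FDs are released promptly. -->
<transportConnectors>
  <transportConnector name="openwire"
      uri="nio://0.0.0.0:61616?transport.closeAsync=false"/>
</transportConnectors>

<!-- kahaDB tuning: async index writes, no per-message disk sync,
     larger journal files. -->
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
      enableIndexWriteAsync="true"
      enableJournalDiskSyncs="false"
      journalMaxFileLength="256mb"/>
</persistenceAdapter>

<!-- destinationPolicy tuning, including producer flow control. -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" producerFlowControl="true"/>
      <policyEntry topic=">" producerFlowControl="true"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```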

However, all this fine-tuning can only do so much, so ultimately we had to:

* reduce broker usage by splitting datasets onto multiple brokers
* optimize consumers to reduce the length of time a message spends on the
broker

The fewer messages the broker has to hold on to, the less likely you are to
run into some sort of limit.
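To see how close the broker is to an FD limit on Linux, something like the
following helps; $$ (the current shell) stands in here for the broker's pid,
which you'd look up yourself (e.g. with pgrep):

```shell
# Show the soft open-file limit in effect for this process.
ulimit -n

# Count file descriptors currently open by a process via /proc.
# Replace $$ with the broker's actual pid.
ls /proc/$$/fd | wc -l
```

If the second number creeps toward the first over time, you're leaking
descriptors and will eventually hit "Too many open files".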


On Tue, Aug 20, 2013 at 9:04 AM, Jerry Cwiklik <cwik...@us.ibm.com> wrote:

> Thanks, Paul. We are running on Linux (SLES). All clients use openwire. The
> broker is configured
> with producerFlowControl="false", optimizedDispatch="true" for both queues
> and topics.
>
> The openwire connector is configured with transport.soWriteTimeout=45000. We
> don't use persistence for messaging. The broker's JVM is given 8 GB.
>
> JC
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Broker-leaks-FDs-Too-many-open-files-tp4670496p4670525.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>



-- 
Best regards, Dmitriy V.
