Are you sure you upped the maximum number of file descriptors allowed per process?
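If you raised it in /etc/security/limits.conf, it's worth verifying that the broker's JVM actually inherited the new limit; daemons started from init scripts often don't pick it up. A quick sketch that works on Sun/Oracle JVMs on Unix (UnixOperatingSystemMXBean is a com.sun class, so this isn't portable):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // Only works where the platform bean is the com.sun Unix variant.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount()
                + ", max fds: " + os.getMaxFileDescriptorCount());
    }
}

Run that inside the same JVM (or the same shell/user) as the broker and compare against what you set.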
On 18 May 2009, at 01:23, DataMover wrote:
After a week of fighting with it, no matter what was done, we could not get it to work.
We created a simple test: 1000 producers, each in its own thread, sending one message each to 1000 queues. This would fail to get connections intermittently.
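The test was essentially the following sketch (the broker URL, queue names and payload are placeholders here, not our exact code):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FloodTest {
    public static void main(String[] args) {
        // Placeholder broker URL.
        final ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        for (int i = 0; i < 1000; i++) {
            final int n = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        // This is the call that fails intermittently.
                        Connection conn = factory.createConnection();
                        conn.start();
                        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                        MessageProducer producer =
                                session.createProducer(session.createQueue("TEST." + n));
                        producer.send(session.createTextMessage("payload"));
                        conn.close();
                    } catch (Exception e) {
                        System.err.println("producer " + n + ": " + e);
                    }
                }
            }).start();
        }
    }
}

Note that every live connection is a socket, i.e. one descriptor on the client and one on the broker, so 1000 near-simultaneous connections alone get close to the usual 1024 default limit.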
Eventually we decided it was an operating system issue. I tested it on Windows with no modifications and there is NO problem.
Just wondering what Linux version you were using? We use CentOS 5.3.
What is the best Linux distro to use with ActiveMQ? Also, are there ports we should not use for transport?
Arjen van der Meijden wrote:
Well, that is really mostly the default config with some small differences.
- I completely commented out the "destinationPolicy" tag. This also disables the per-queue/topic size limits.
- I upped the memoryUsage to 200 MB, the storeUsage to 1 GB and the tempUsage to 1 GB.
- I changed the connector URIs (for Stomp and OpenWire) to contain "?transport.closeAsync=false".
These settings aren't really well thought through and are only aimed at our very high connect/send/disconnect rate; they're just changes that should disable or enlarge some of the limitations I was running into.
And as you can see from the issue report, I used different JAVA_OPTS to allow for a larger heap and such.
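For what it's worth, if you were to run an embedded broker from Java rather than the XML config, those same changes would look roughly like this (port number assumed; a sketch, not our actual setup):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBroker {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Not setting a destination policy is the analogue of the
        // commented-out destinationPolicy tag: no per-queue/topic limits.
        broker.getSystemUsage().getMemoryUsage().setLimit(200L * 1024 * 1024); // 200 MB
        broker.getSystemUsage().getStoreUsage().setLimit(1024L * 1024 * 1024); // 1 GB
        broker.getSystemUsage().getTempUsage().setLimit(1024L * 1024 * 1024);  // 1 GB
        // Same URI parameter as in the XML transportConnector.
        broker.addConnector("tcp://0.0.0.0:61616?transport.closeAsync=false");
        broker.start();
    }
}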
Best regards,
Arjen
On 11-5-2009 9:29, DataMover wrote:
I looked at the issue URL you gave and wow, it had a lot of great info.
Any chance I could get a copy of the configuration XML file you created that solved the issue for you? Just to get some ideas.
I had upped the limits via /etc/security/limits.conf, and that at least seemed to increase the load and slow the system down. I have not tried it again since.
As far as upping the queue sizes goes, is there a limit? Are there best practices anywhere?
Arjen van der Meijden wrote:
You may be hitting one or more of these three issues that I ran into:
- You actually have too low a setting for open files. Try increasing it (see man ulimit; be careful that normally only root can increase it beyond 1024, but other programs, including su, do inherit it).
- You're opening and closing connections too fast; this is what we had: http://issues.apache.org/activemq/browse/AMQ-1739
Adding the "?transport.closeAsync=false" parameter to the URL helped us here.
- Your queues may be getting larger than the limits. Especially the 5 MB per-queue limit in the default configuration is easy to hit (a sketch of raising it programmatically follows this list). Once I raised the global limits and removed the per-queue/topic limits it has worked stably for several months in a row (since Feb 19 our single broker has queued and dequeued over 300M tiny messages).
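If you'd rather raise the per-destination limit than remove it entirely: in the XML it's the memoryLimit attribute on a policyEntry; programmatically, a default policy entry does the same. A rough sketch, with 64 MB as an arbitrary example value:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class RaiseQueueLimit {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        PolicyEntry defaults = new PolicyEntry();
        // Example value: 64 MB per destination instead of the 5 MB default.
        defaults.setMemoryLimit(64L * 1024 * 1024);
        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(defaults);
        broker.setDestinationPolicy(policyMap);
        broker.addConnector("tcp://0.0.0.0:61616"); // placeholder connector
        broker.start();
    }
}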
30 and 250 producers isn't that many, so unless they're maxing out your broker system on some other resource than file descriptors, my guess is that a single machine should be able to handle them.
Best regards,
Arjen
On 10-5-2009 22:03, DataMover wrote:
I have seen several posts on this, but I have not been able to solve our situation.
We have 30 clients (producers) working with one ActiveMQ server. All worked amazingly well. Then we tried a test with around 250 clients. That would get many transport errors.
Increasing the file limits on the OS caused the system to come to a crawl, with no benefit.
I am assuming the problem can be solved by running multiple brokers. One question: do they have to be on different machines, or can we have multiple ActiveMQs running on the same server, each listening on a different IP?