Hello Arnab,
thanks for your help. I've checked and the limit is set correctly.
Nevertheless, during my debugging I found out that I had interpreted
the error wrongly. I use a Java client to query the service, and I always
got an AxisFault exception. I assumed the exception came from
the service, but it is actually the client that produces it. So I had to
increase the limit on the client side, and now everything works fine.
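For reference, the fix amounts to raising the open-files limit in the
shell that launches the client (a sketch, assuming the limit in question
is the ulimit -n limit on the client machine; the value, jar name, and
class name are only placeholders):

    # raise the soft open-files limit for the client process (sketch)
    ulimit -n 8192
    # then start the Java client as usual (placeholder names)
    java -cp axis2-client.jar MyServiceClient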
Again, thank you for your help.
Michael
Arnab Ganguly wrote:
Hi Michael,
Did you try the following: ulimit -S -n `ulimit -H -n` ?
This assigns the hard limit for open files to your soft limit.
Can you check the output of ulimit -H -n and ulimit -S -n? The former
gives your hard limit and the latter your soft limit.
If the above step still gives you problems, increase the hard limit
to the maximum value your system allows (something like 999999) and
assign that value to your soft limit with the procedure above in the
apachectl script, then try again. Also print the value of ulimit -S -n
in your apachectl script, just to make sure the change is taking place.
Make sure the shell that runs apachectl actually gets the increased
file descriptor limit.
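For example, the top of the apachectl script could look like this (a
sketch; the exact place in the script may vary with your installation):

    # raise the soft open-files limit to the hard limit before httpd starts
    ulimit -S -n `ulimit -H -n`
    # print the effective soft limit to verify the change took place
    echo "soft nofile limit: `ulimit -S -n`"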
Thanks
Arnab
On Wed, Aug 27, 2008 at 12:40 AM, Michael Sutter
<[EMAIL PROTECTED]> wrote:
Hello Arnab,
sorry for answering so late, but it took some time to test it.
At the beginning of the apachectl script I put
ulimit -n 50000
and restarted the daemon with the apachectl script.
Nevertheless, it didn't change the behavior. The service
was running for about 7-8 hours and after that I got the error
message again.
Maybe you have another hint for me?
Kind regards
Michael
Arnab Ganguly wrote:
Can you try assigning the hard limit value to the soft limit
and restarting the server? You can put this in the apachectl script so
that it takes effect for the shell used by Apache.
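For example (a sketch; place it before the line that starts httpd):

    # in apachectl: set the soft open-files limit to the hard limit
    ulimit -S -n `ulimit -H -n`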
Thanks
Arnab
On Mon, Aug 25, 2008 at 6:47 PM, Michael Sutter
<[EMAIL PROTECTED]> wrote:
Hello list,
I have a strange problem with my httpd daemon and hopefully
somebody can help me.
I'm running Apache 2.0.49 on SuSE 9.1 with mod_axis2 deployed.
Inside Axis2 I'm running a service which is queried every ten seconds.
After running for some hours (sometimes 2, sometimes 4,
sometimes more) I always get an exception: Too many open files. The
exception is not written to the error log; it is the return value of
my service. There is also no entry at the corresponding time in my
access.log, so I think it is thrown before the service is accessed.
I searched through the list and found that the usual
solution is to increase the limit of open files. So I added to
/etc/security/limits.conf:
* soft nofile 8192
* hard nofile 50000
and logged out and in again.
As I understand it, this should increase the open files
limit for every user.
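A quick check after logging back in should confirm the new values are
active (a sketch):

    ulimit -S -n   # should now print 8192
    ulimit -H -n   # should now print 50000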
Nevertheless, this didn't change the behaviour; I still got
the exception. So I also added ulimit -n 8192 to my init script, with
the same result.
I also monitored the number of open files on the system. It
is always about 2000, much less than I have declared in the
configuration. The httpd daemon normally has 10 processes, and every
process has about 90-95 files open. So I'm not hitting the configured
limit either.
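For reference, the per-process counts can be gathered like this (a
sketch; assumes a Linux /proc filesystem, and the process name may be
httpd2 on SuSE):

    for pid in `pidof httpd`; do
        echo "$pid: `ls /proc/$pid/fd | wc -l` open files"
    done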
Has anybody some idea what I'm doing wrong or how I can solve
the problem?
Kind regards
Michael
---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]