Re: [ceph-users] osd down/autoout problem
On Thu, 15 May 2014, Cao, Buddy wrote:
> "Too many open files not handled on operation 24 (541468.0.1, or op 1,
> counting from 0)"
You need to increase the 'ulimit -n' max open files limit. You can do
this in ceph.conf with 'max open files' if it's sysvinit, or manually in
/etc/init/ceph-osd.conf if it's upstart.
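
For reference, a minimal sketch of both variants (the [global] placement and
the 16384 value are only illustrative choices, adjust them to your setup):

    # ceph.conf -- read by the sysvinit script, which runs 'ulimit -n'
    # with this value before starting the daemons
    [global]
        max open files = 16384

    # /etc/init/ceph-osd.conf -- for upstart, raise the fd limit stanza
    # (soft and hard limits) granted to the ceph-osd job
    limit nofile 16384 16384

Either way, the OSD has to be restarted for the new limit to take effect.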
"Too many open files not handled on operation 24"
This is the reason. You need to increase the fd size limit.
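
To confirm the daemon is actually hitting its limit, you can compare the fd
limit it is running with against the number of fds it has open, e.g. (this
assumes a single ceph-osd on the host; otherwise substitute the pid of the
affected daemon):

    # soft/hard fd limits of the running ceph-osd process
    grep 'Max open files' /proc/$(pidof -s ceph-osd)/limits

    # how many fds it currently has open
    ls /proc/$(pidof -s ceph-osd)/fd | wc -l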
On Thu, May 15, 2014 at 6:06 PM, Cao, Buddy wrote:
> Hi,
>
> One of the OSDs in my cluster goes down for no reason. I saw the error
> message in the log below; I restarted the OSD, but after several hours the
> problem came back. Could you help?
Hi,
One of the OSDs in my cluster goes down for no reason. I saw the error message
in the log below; I restarted the OSD, but after several hours the problem came
back. Could you help?
"Too many open files not handled on operation 24 (541468.0.1, or op 1, counting
from 0)"
-96> 2014-05-14 22:12: