Thanks a lot. That solved it.
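
For anyone who hits the same assert later: as Sam suggested below, the fix
is to raise the open-file limit for the OSD processes. A rough sketch of
the usual adjustment on CentOS 7 (the limit value is just an example; size
it to your OSD count and workload):

    # check what a running OSD is currently allowed
    grep 'open files' /proc/$(pgrep -o ceph-osd)/limits

    # raise it persistently, e.g. in /etc/security/limits.conf:
    #   *  soft  nofile  131072
    #   *  hard  nofile  131072

    # or, if your init script honors it, via ceph.conf [global]:
    #   max open files = 131072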

On Wed, Jun 15, 2016 at 2:12 PM, Samuel Just <sj...@redhat.com> wrote:

> I think you hit the OS per-process fd limit.  You need to raise it.
> -Sam
>
> On Wed, Jun 15, 2016 at 2:07 PM, Mansour Shafaei Moghaddam
> <mansoor.shaf...@gmail.com> wrote:
> > It fails at "FileStore.cc: 2761". Here is a more complete log:
> >
> >     -9> 2016-06-15 10:55:13.205014 7fa2dcd85700 -1 dump_open_fds unable to open /proc/self/fd
> >     -8> 2016-06-15 10:55:13.205085 7fa2cb402700  2 filestore(/var/lib/ceph/osd/ceph-0) waiting 51 > 50 ops || 328390 > 104857600
> >     -7> 2016-06-15 10:55:13.205094 7fa2cd406700  2 filestore(/var/lib/ceph/osd/ceph-0) waiting 51 > 50 ops || 328389 > 104857600
> >     -6> 2016-06-15 10:55:13.205111 7fa2cac01700  2 filestore(/var/lib/ceph/osd/ceph-0) waiting 51 > 50 ops || 328317 > 104857600
> >     -5> 2016-06-15 10:55:13.205118 7fa2ca400700  2 filestore(/var/lib/ceph/osd/ceph-0) waiting 51 > 50 ops || 328390 > 104857600
> >     -4> 2016-06-15 10:55:13.205121 7fa2cdc07700  2 filestore(/var/lib/ceph/osd/ceph-0) waiting 51 > 50 ops || 328390 > 104857600
> >     -3> 2016-06-15 10:55:13.205153 7fa2de588700  5 -- op tracker -- seq: 1476, time: 2016-06-15 10:55:13.205153, event: journaled_completion_queued, op: osd_op(client.4109.0:1457 rb.0.100a.6b8b4567.000000006b6c [set-alloc-hint object_size 4194304 write_size 4194304,write 1884160~4096] 0.cbe1d8a4 ack+ondisk+write e9)
> >     -2> 2016-06-15 10:55:13.205183 7fa2de588700  5 -- op tracker -- seq: 1483, time: 2016-06-15 10:55:13.205183, event: write_thread_in_journal_buffer, op: osd_op(client.4109.0:1464 rb.0.100a.6b8b4567.00000000524d [set-alloc-hint object_size 4194304 write_size 4194304,write 3051520~4096] 0.6778c255 ack+ondisk+write e9)
> >     -1> 2016-06-15 10:55:13.205400 7fa2de588700  5 -- op tracker -- seq: 1483, time: 2016-06-15 10:55:13.205400, event: journaled_completion_queued, op: osd_op(client.4109.0:1464 rb.0.100a.6b8b4567.00000000524d [set-alloc-hint object_size 4194304 write_size 4194304,write 3051520~4096] 0.6778c255 ack+ondisk+write e9)
> >      0> 2016-06-15 10:55:13.206559 7fa2dcd85700 -1 os/FileStore.cc: In function 'unsigned int FileStore::_do_transaction(ObjectStore::Transaction&, uint64_t, int, ThreadPool::TPHandle*)' thread 7fa2dcd85700 time 2016-06-15 10:55:13.205018
> > os/FileStore.cc: 2761: FAILED assert(0 == "unexpected error")
> >
> >  ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
> >  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x78) [0xacd718]
> >  2: (FileStore::_do_transaction(ObjectStore::Transaction&, unsigned long, int, ThreadPool::TPHandle*)+0xa24) [0x8b8114]
> >  3: (FileStore::_do_transactions(std::list<ObjectStore::Transaction*, std::allocator<ObjectStore::Transaction*> >&, unsigned long, ThreadPool::TPHandle*)+0x64) [0x8bcf34]
> >  4: (FileStore::_do_op(FileStore::OpSequencer*, ThreadPool::TPHandle&)+0x17e) [0x8bd0ce]
> >  5: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa56) [0xabe326]
> >  6: (ThreadPool::WorkThread::entry()+0x10) [0xabf3d0]
> >  7: (()+0x7dc5) [0x7fa2e88f3dc5]
> >  8: (clone()+0x6d) [0x7fa2e73d528d]
> >
> >
> > On Wed, Jun 15, 2016 at 2:05 PM, Somnath Roy <somnath....@sandisk.com>
> > wrote:
> >>
> >> There should be a line in the log specifying which assert is failing;
> >> post that, along with, say, 10 lines above it.
> >>
> >> Thanks & Regards
> >> Somnath
> >>
> >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> >> Mansour Shafaei Moghaddam
> >> Sent: Wednesday, June 15, 2016 1:57 PM
> >> To: ceph-users@lists.ceph.com
> >> Subject: [ceph-users] Fio randwrite does not work on Centos 7.2 VM
> >>
> >> Hi All,
> >>
> >> Has anyone faced a similar issue? I do not have a problem with random
> >> reads, sequential reads, or sequential writes, though. Every time I run
> >> fio for random writes, one OSD in the cluster crashes. Here is what I
> >> see at the tail of the log:
> >>
> >>  ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403)
> >>  1: ceph-osd() [0x9d6334]
> >>  2: (()+0xf100) [0x7fa2e88fb100]
> >>  3: (gsignal()+0x37) [0x7fa2e73145f7]
> >>  4: (abort()+0x148) [0x7fa2e7315ce8]
> >>  5: (__gnu_cxx::__verbose_terminate_handler()+0x165) [0x7fa2e7c189d5]
> >>  6: (()+0x5e946) [0x7fa2e7c16946]
> >>  7: (()+0x5e973) [0x7fa2e7c16973]
> >>  8: (()+0x5eb93) [0x7fa2e7c16b93]
> >>  9: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x24a) [0xacd8ea]
> >>  10: (FileStore::_do_transaction(ObjectStore::Transaction&, unsigned long, int, ThreadPool::TPHandle*)+0xa24) [0x8b8114]
> >>  11: (FileStore::_do_transactions(std::list<ObjectStore::Transaction*, std::allocator<ObjectStore::Transaction*> >&, unsigned long, ThreadPool::TPHandle*)+0x64) [0x8bcf34]
> >>  12: (FileStore::_do_op(FileStore::OpSequencer*, ThreadPool::TPHandle&)+0x17e) [0x8bd0ce]
> >>  13: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa56) [0xabe326]
> >>  14: (ThreadPool::WorkThread::entry()+0x10) [0xabf3d0]
> >>  15: (()+0x7dc5) [0x7fa2e88f3dc5]
> >>  16: (clone()+0x6d) [0x7fa2e73d528d]
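> >>
> >> For reference, the workload is plain 4k random writes; a minimal fio
> >> job along these lines triggers it (the device path, queue depth, and
> >> runtime below are illustrative, not my exact values):
> >>
> >>  fio --name=randwrite --rw=randwrite --bs=4k --direct=1 \
> >>      --ioengine=libaio --iodepth=32 --runtime=60 \
> >>      --filename=/dev/vdb
> >>
> >> (Note this writes raw to the device, so point it at a scratch disk.)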
>