Hi Brad,

This occurred on a system under moderate load. It has not happened since, and
I do not know how to reproduce it.

Thank you,
Alex

On Tue, Sep 22, 2015 at 7:29 PM, Brad Hubbard <bhubb...@redhat.com> wrote:

> ----- Original Message -----
>
> > From: "Alex Gorbachev" <a...@iss-integration.com>
> > To: "ceph-users" <ceph-users@lists.ceph.com>
> > Sent: Wednesday, 9 September, 2015 6:38:50 AM
> > Subject: [ceph-users] OSD crash
>
> > Hello,
>
> > We have run into an OSD crash this weekend with the following dump. Please
> > advise what this could be.
>
> Hello Alex,
>
> As you know I created http://tracker.ceph.com/issues/13074 for this issue, but
> the developers working on it would like any additional information you can
> provide about the nature of the issue. Could you take a look?
>
> Cheers,
> Brad
>
> > Best regards,
> > Alex
>
> > 2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251 l=0 c=0x10d34580).fault with nothing to send, going to standby
> > 2015-09-07 14:56:16.948998 7fae643e8700 -1 *** Caught signal (Segmentation fault) **
> > in thread 7fae643e8700
>
> > ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)
> > 1: /usr/bin/ceph-osd() [0xacb3ba]
> > 2: (()+0x10340) [0x7faea044e340]
> > 3: (tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, int)+0x103) [0x7faea067fac3]
> > 4: (tcmalloc::ThreadCache::ListTooLong(tcmalloc::ThreadCache::FreeList*, unsigned long)+0x1b) [0x7faea067fb7b]
> > 5: (operator delete(void*)+0x1f8) [0x7faea068ef68]
> > 6: (std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::_M_erase(std::_Rb_tree_node<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >*)+0x58) [0xca2438]
> > 7: (std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::erase(int const&)+0xdf) [0xca252f]
> > 8: (Pipe::writer()+0x93c) [0xca097c]
> > 9: (Pipe::Writer::entry()+0xd) [0xca40dd]
> > 10: (()+0x8182) [0x7faea0446182]
> > 11: (clone()+0x6d) [0x7fae9e9b100d]
> > NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>
> > --- begin dump of recent events ---
> > -10000> 2015-08-20 05:32:32.454940 7fae8e897700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6806/4003754 pipe(0x1992d000 sd=142 :6830 s=0 pgs=0 cs=0 l=0 c=0x12bf5700).accept connect_seq 816 vs existing 815 state standby
>
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
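[Archive note: for anyone hitting a similar segfault, the bracketed addresses in the backtrace above can be symbolized against the binary that produced them. A rough sketch, assuming the dump is saved as crash.log and /usr/bin/ceph-osd is the exact same 0.94.2 build with debug symbols; both paths are illustrative:]

```shell
# Pull the bracketed frame addresses out of the crash dump and feed
# them to addr2line for function/file:line info.
# Assumptions: the dump is in crash.log, and /usr/bin/ceph-osd matches
# the crashing build (with debug symbols installed).
grep -oE '\[0x[0-9a-f]+\]' crash.log \
  | tr -d '[]' \
  | addr2line -f -C -e /usr/bin/ceph-osd
```

Note that only the in-binary frames (0xacb3ba, 0xca2438, etc.) resolve this way; the tcmalloc frames live in the shared library and would need their offsets resolved against libtcmalloc itself.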
