Hello,

I have 6 OSDs on two hosts stuck at version 10.2.2 because of XFS
corruption (the ceph-osd services froze while restarting after the upgrade
and the ceph-osd processes ended up in D state).
Because I had to run xfs_repair with the -L argument, all of those OSDs are
now crashing and I cannot update the OSD data to 10.2.3. I also have some
PGs down.
I tried to run ceph-objectstore-tool for the PGs that are down, but the
tool keeps crashing with the following output:

osd/PG.cc: In function 'static int PG::peek_map_epoch(ObjectStore*, spg_t,
epoch_t*, ceph::bufferlist*)' thread 7f4709c5d800 time 2016-11-07
11:15:45.696078
osd/PG.cc: 2915: FAILED assert(values.size() == 2)
 ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x8b) [0x5633e545b12b]
 2: (PG::peek_map_epoch(ObjectStore*, spg_t, unsigned int*,
ceph::buffer::list*)+0x6c4) [0x5633e4e03454]
 3: (main()+0x3f55) [0x5633e4d75405]
 4: (__libc_start_main()+0xf5) [0x7f4706913ec5]
 5: (()+0x361957) [0x5633e4dbf957]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
*** Caught signal (Aborted) **
 in thread 7f4709c5d800 thread_name:ceph-objectstor
 ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
 1: (()+0x91a302) [0x5633e5378302]
 2: (()+0x10340) [0x7f47084f5340]
 3: (gsignal()+0x39) [0x7f4706928cc9]
 4: (abort()+0x148) [0x7f470692c0d8]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char
const*)+0x265) [0x5633e545b305]
 6: (PG::peek_map_epoch(ObjectStore*, spg_t, unsigned int*,
ceph::buffer::list*)+0x6c4) [0x5633e4e03454]
 7: (main()+0x3f55) [0x5633e4d75405]
 8: (__libc_start_main()+0xf5) [0x7f4706913ec5]
 9: (()+0x361957) [0x5633e4dbf957]
Aborted (core dumped)
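For reference, this is the general form of the export command I am attempting (the OSD data path and PG id below are placeholders, not my actual values; the snippet only prints the command it would run rather than executing it against a live OSD):

```shell
#!/bin/sh
# Sketch of the ceph-objectstore-tool export attempt, assuming the default
# filestore layout (data dir plus a "journal" file inside it).
OSD_PATH=/var/lib/ceph/osd/ceph-0   # placeholder: one of the crashing OSDs
PGID=1.0                            # placeholder: one of the down PGs

# Build the command as a string and echo it (dry run) instead of running it,
# since the real invocation aborts in PG::peek_map_epoch as shown above.
CMD="ceph-objectstore-tool --data-path $OSD_PATH --journal-path $OSD_PATH/journal --pgid $PGID --op export --file /root/$PGID.export"
echo "$CMD"
```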

Do I need to copy a ceph-objectstore-tool binary from version 10.2.2 to run
the export? Are there any other options?

Thanks,
Simion Rad
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com