Hi,

I ran the ceph osd force-create-pg command on Luminous 12.2.2 to recover a failed PG, and it instantly caused all of the monitors to crash. Is there any way to revert the cluster to an earlier state? Right now the monitors refuse to come up; the error they abort with is included below.

I've filed a Ceph tracker ticket for the crash, but I'm wondering whether there is a way to get the cluster back up in the meantime:

https://tracker.ceph.com/issues/22847
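
For reference, this is the general shape of what I ran; the pgid below is only a placeholder, not the actual one from my cluster:

    # tell the monitors to force creation of a PG that is stuck
    ceph osd force-create-pg <pgid>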


--- begin dump of recent events ---
     0> 2018-01-31 22:47:22.959665 7fc64350e700 -1 *** Caught signal (Aborted) **
in thread 7fc64350e700 thread_name:cpu_tp

ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
1: (()+0x8eae11) [0x55f1113fae11]
2: (()+0xf5e0) [0x7fc64aafa5e0]
3: (gsignal()+0x37) [0x7fc647fca1f7]
4: (abort()+0x148) [0x7fc647fcb8e8]
5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x284) [0x55f1110fa4a4]
6: (()+0x2ccc4e) [0x55f110ddcc4e]
7: (OSDMonitor::update_creating_pgs()+0x98b) [0x55f11102232b]
8: (C_UpdateCreatingPGs::finish(int)+0x79) [0x55f1110777b9]
9: (Context::complete(int)+0x9) [0x55f110ed30c9]
10: (ParallelPGMapper::WQ::_process(ParallelPGMapper::Item*, ThreadPool::TPHandle&)+0x7f) [0x55f111204e1f]
11: (ThreadPool::worker(ThreadPool::WorkThread*)+0xa8e) [0x55f111100f1e]
12: (ThreadPool::WorkThread::entry()+0x10) [0x55f111101e00]
13: (()+0x7e25) [0x7fc64aaf2e25]
14: (clone()+0x6d) [0x7fc64808d34d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
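
In the meantime, before touching anything, my plan is to take a cold copy of each monitor's data directory so that whatever we try next is reversible. A rough sketch of what I have in mind, assuming the default data path and a monitor id of mon01 (both placeholders for my actual layout):

    # the mon is crashing anyway, but make sure the service is stopped first
    systemctl stop ceph-mon@mon01

    # take a full copy of the mon data dir, including store.db
    cp -a /var/lib/ceph/mon/ceph-mon01 /root/ceph-mon01.backup

    # extract the monmap from the stopped mon for inspection
    ceph-mon -i mon01 --extract-monmap /root/monmap.bin
    monmaptool --print /root/monmap.bin

If there is a safer or known way to roll the mon stores back to the state before the force-create-pg command, pointers would be very welcome.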

--
Efficiency is Intelligent Laziness