Barring a newly introduced bug (doubtful), that assert basically means that your computer lied to the Ceph monitor about the durability or ordering of data going to disk, and the store is now inconsistent. If you don't have data you care about on the cluster, by far your best option is:

1) Figure out which part of the system is lying about data durability (most likely your filesystem or disk controller is ignoring write barriers).
2) Start the Ceph install over.

It's possible that ceph-monstore-tool would let you edit the store back into a consistent state, but it looks like the system can't find the *initial* commit, which means you'd need to manufacture a new one wholesale with the right keys from the other system components.
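For what it's worth, one quick (and by no means exhaustive) check for the barrier problem is to look for filesystems mounted with barriers disabled; mount options and device names vary per system, so treat this as a sketch, not a diagnosis:

```shell
# Look for any mounted filesystem with write barriers turned off
# ("nobarrier" or "barrier=0" mount options on ext4/xfs). A match
# here is one common way the disk ends up "lying" about durability.
grep -E 'nobarrier|barrier=0' /proc/mounts \
  && echo "barriers disabled on the filesystems above" \
  || echo "no nobarrier mounts found"
```

A battery-backed or write-through controller cache can cause the same symptom even when the mount options look fine, so check the controller settings too.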
(I am assuming that the system didn't crash right while you were turning on the monitor for the first time; if it did, that makes it slightly more likely to be a bug on our end, but again, it'll be easiest to just start over since you don't have any data in it yet.)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

On Sun, Jun 8, 2014 at 10:26 PM, Mohammad Salehe <sal...@gmail.com> wrote:
> Hi,
>
> I'm receiving a failed assertion in AuthMonitor::update_from_paxos(bool*)
> after a system crash. I've saved a complete monitor log with 10/20 for 'mon'
> and 'paxos' here.
> There is only one monitor and two OSDs in the cluster, as I was just at the
> beginning of deployment.
>
> I will be thankful if someone could help.
>
> --
> Mohammad Salehe
> sal...@gmail.com
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com