Re: [ceph-users] another assertion failure in monitor

2014-03-14 Thread Pawel Veselov
> This whole thing started with migrating from 0.56.7 to 0.72.2. First, we started seeing failed assertions of (version == pg_map.version) in PGMonitor.cc:273, but on one monitor (d) only. I attempted to resync the failing monitor (d) with --force-sync from (c). (d) started to work, but
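For context, a sketch of how a monitor store force-sync is typically run in this era of Ceph (the --force-sync and --yes-i-really-mean-it flags are from the ceph-mon man page; the monitor id "d" and init script usage are assumptions, verify against your version):

    # stop the broken monitor first
    service ceph stop mon.d
    # ask ceph-mon to discard its local store and sync fresh
    # from a quorum peer on startup
    ceph-mon -i d --force-sync --yes-i-really-mean-it
    # start it normally and watch it rejoin quorum
    service ceph start mon.d
    ceph -s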

Re: [ceph-users] another assertion failure in monitor

2014-03-11 Thread Pawel Veselov
On Tue, Mar 11, 2014 at 9:15 AM, Joao Eduardo Luis wrote: > On 03/10/2014 10:30 PM, Pawel Veselov wrote: >> Now, I'm getting this. Any idea what can be done to straighten this up? > This is weird. Can you please share the steps taken u
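For reference, the usual sequence for rebuilding a single monitor's store and re-adding it to the quorum looks roughly like this (standard ceph/ceph-mon CLI; the monitor id "d" and the temp paths are assumptions):

    # grab the current monmap and the mon. keyring from the healthy quorum
    ceph mon getmap -o /tmp/monmap
    ceph auth get mon. -o /tmp/mon.keyring
    # recreate the store for mon.d and start it
    ceph-mon -i d --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    service ceph start mon.d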

[ceph-users] another assertion failure in monitor

2014-03-10 Thread Pawel Veselov
Now, I'm getting this. Any idea what can be done to straighten this up?

 -12> 2014-03-10 22:26:23.748783 7fc0397e5700  0 log [INF] : mdsmap e1: 0/0/1 up
 -11> 2014-03-10 22:26:23.748793 7fc0397e5700 10 send_log to self
 -10> 2014-03-10 22:26:23.748795 7fc0397e5700 10 log_queue is 4 l
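To get more detail out of a monitor before it asserts, debug levels can be raised at runtime; a sketch (injectargs is the standard mechanism, the level 20 choice is an assumption):

    # bump monitor and messenger debugging on mon.d at runtime
    ceph tell mon.d injectargs '--debug-mon 20 --debug-ms 1'
    # or set it persistently in ceph.conf under [mon] and restart:
    #   debug mon = 20
    #   debug ms = 1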

[ceph-users] How does authentication really work?

2014-03-10 Thread Pawel Veselov
Hi. Well, I've screwed up my cluster to the point that nothing works anymore. The monitors won't start after the version update (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002468.html). I've re-created the monitor fs, and the monitors are running again, but nothing authenticates t
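On the question of how authentication fits together: with cephx, clients prove their identity to the monitors using a shared secret from a keyring, and the monitors hand back session keys. If the monitor fs was recreated, the old client keys are gone and must be re-imported. A sketch of checking each piece (stock ceph CLI; the keyring paths are the common defaults but still assumptions):

    # what keys does the cluster itself currently know about?
    ceph auth list
    # does the local admin keyring match what the monitors expect?
    ceph -n client.admin --keyring /etc/ceph/ceph.client.admin.keyring -s
    # if client.admin is gone, authenticate as the mon. entity using
    # the monitor's own keyring and re-import the admin key
    ceph -n mon. --keyring /var/lib/ceph/mon/ceph-a/keyring \
        auth add client.admin -i /etc/ceph/ceph.client.admin.keyring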

Re: [ceph-users] mds crashes constantly

2014-03-10 Thread Pawel Veselov
table features at the same time. :( -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Mon, Mar 10, 2014 at 12:24 PM, Pawel Veselov wrote: > Hi. > All of a sudden, MDS started crashing, causing havoc on our depl

[ceph-users] mds crashes constantly

2014-03-10 Thread Pawel Veselov
Hi. All of a sudden, MDS started crashing, causing havoc on our deployment. Any help would be greatly appreciated. ceph.x86_64 0.56.7-0.el6 @ceph

  -1> 2014-03-10 19:16:35.956323 7f9681cb3700  1 mds.0.12 rejoin_joint_start
   0> 2014-03-10 19:16:35.9
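When an MDS crashes during the rejoin phase like this, the usual first step is to capture a verbose log of the crash and check the MDS map state; a sketch (the debug options are the standard ones, the level 20 choice is an assumption):

    # in ceph.conf under [mds], then restart the daemon:
    #   debug mds = 20
    #   debug ms = 1
    # check which MDS ranks are up, laggy, or in rejoin
    ceph mds stat
    ceph mds dump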

[ceph-users] Unclean PGs in active+degraded or active+remapped

2013-07-19 Thread Pawel Veselov
Hi. I'm trying to understand the reason behind some of my unclean PGs after moving some OSDs around. Any help would be greatly appreciated. I'm sure we are missing something, but can't quite figure out what.

[root@ip-10-16-43-12 ec2-user]# ceph health detail
HEALTH_WARN 29 pgs degraded; 68 pgs
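For digging into why specific PGs stay degraded or remapped after moving OSDs, the standard drill-down looks like this (all stock ceph CLI; the example pgid 2.1f is hypothetical):

    # list the stuck PGs and their current states
    ceph pg dump_stuck unclean
    # ask one PG why it is where it is (acting set, recovery state)
    ceph pg 2.1f query
    # sanity-check CRUSH placement after the OSD moves
    ceph osd tree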