Re: [ceph-users] stuck unclean since forever

2016-06-23 Thread
… 3 different _hosts_ by default. Regards, Burkhard
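
For a test cluster that cannot satisfy the default host-level separation, a minimal sketch of the usual workarounds (for throwaway clusters only; the pool name 'rbd' is just an example):

    ceph osd tree                 # count the host buckets CRUSH actually knows about
    ceph osd pool set rbd size 2  # match the replica count to the hosts available
    # or, in ceph.conf before creating the cluster, place replicas per OSD
    # instead of per host:
    #   [global]
    #   osd crush chooseleaf type = 0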

Re: [ceph-users] Inconsistent PGs

2016-06-22 Thread
… vms (reducing min_size from 2 may help; search ceph.com/docs for 'incomplete'). The directory for PG 3.1683 is present on OSD 166 and contains ~8GB. We didn't try setting min_size to 1 yet …
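
The min_size change under discussion, as a rough sketch (the pool name 'volumes' is only a placeholder):

    ceph pg 3.1683 query                   # inspect why the PG is incomplete/stuck
    ceph osd pool set volumes min_size 1   # let the PG go active with one surviving replica
    ceph osd pool set volumes min_size 2   # restore the safer value after recovery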

Re: [ceph-users] cluster ceph -s error

2016-06-20 Thread
…
    crush map has straw_calc_version=0
    monmap e1: 1 mons at {nodeB=155.232.195.4:6789/0}
           election epoch 7, quorum 0 nodeB
    osdmap e80: 10 osds: 5 up, 5 in; 558 remapped pgs
           flags sortbitwise …
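
The excerpt shows only 5 of 10 OSDs up and in; a hedged first-pass check (OSD id 0 is a placeholder):

    ceph osd tree                 # identify which OSDs are down and on which host
    systemctl status ceph-osd@0   # check the daemon on that host (systemd systems)
    ceph osd in 0                 # mark the OSD back in once its daemon is running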

Re: [ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread
…? Usually people set the ID to the hostname. Check it in /var/lib/ceph/mds. John. On Mon, Apr 11, 2016 at 9:44 AM, 施柏安 wrote: Hi cephers, I was testing CephFS's HA, so I shut down the active MDS server. Then one of the standby …
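
A sketch of finding and using the correct daemon ID (the hostname 'mds1' below is hypothetical):

    ls /var/lib/ceph/mds/           # a directory named ceph-mds1 means the ID is 'mds1', not 0
    start ceph-mds id=mds1          # Upstart syntax, as used in this thread
    systemctl start ceph-mds@mds1   # the systemd equivalent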

[ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread
…-mds start id=0'. It can't start and just shows 'ceph-mds stop/waiting'. Is that a bug, or am I doing the wrong operation? -- Best regards, 施柏安 Desmond Shih, Technical Development, inwinstack Co., Ltd. <http://www.inwinstack.com/> │ 886-975-857-982 │ desmond.s@inwinstack │ 886-2-7738…

Re: [ceph-users] 1 pg stuck

2016-03-24 Thread
Is the Ceph cluster stuck in a recovery state? Did you try the commands "ceph pg repair" or "ceph pg query" to trace its state? 2016-03-24 22:36 GMT+08:00 yang sheng: Hi all, I am testing Ceph right now using 4 servers with 8 OSDs (all OSDs are up and in). I have 3 pools in my cluster (image…
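
For reference, a sketch of those commands with a made-up PG id (0.4):

    ceph health detail   # lists stuck or inconsistent PGs with their ids
    ceph pg 0.4 query    # dump the peering and recovery state of one PG
    ceph pg repair 0.4   # ask the primary OSD to repair an inconsistent PG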

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread
It seems that you only have two hosts in your crush map, but the default ruleset separates replicas by host. If you set size 3 for your pools, then one replica cannot be placed, because you only have two hosts. 2016-03-23 20:17 GMT+08:00 Zhang Qiang: And here's the osd tree if it ma…
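
A sketch of the usual remedies ('data' stands in for the real pool name):

    ceph osd tree                  # confirm that only two host buckets exist
    ceph osd pool set data size 2  # a replica count that two hosts can actually satisfy
    # alternatively, add a third host with OSDs so that size 3 can be honoured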

Re: [ceph-users] Fresh install - all OSDs remain down and out

2016-03-22 Thread
…ond, this seems to be a lot to do for 90 OSDs, and possibly a few mistakes in typing. Every change of disk needs extra editing too. This weighting was done automatically in former versions. Do you know why and where this changed, or was it faulty at some point? Markus
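
Since hand-editing 90 OSDs is the pain point here, a hedged sketch of doing it in a loop (the weight 1.0 is only an example; by convention the CRUSH weight is roughly the disk size in TB):

    for id in $(ceph osd ls); do
        ceph osd crush reweight osd.$id 1.0
    done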

Re: [ceph-users] Fresh install - all OSDs remain down and out

2016-03-21 Thread
How should I change it? I never had to edit anything in this area in former versions of Ceph. Has something changed? Is any new parameter necessary in ceph.conf while installing? Thank you, Markus. On 21.03.2016 at 10:34, 施柏安 wrote: It seems that there…
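
Two ceph.conf settings that bear on the "OSDs stay down/out with weight 0" symptom, as a sketch (defaults can differ between releases, so treat the values as assumptions):

    [osd]
    # let each OSD add/update its own entry in the CRUSH map at startup
    osd crush update on start = true
    # override the weight assigned to a newly created OSD (normally derived from its size)
    osd crush initial weight = 1.0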

Re: [ceph-users] [cephfs] About feature 'snapshot'

2016-03-19 Thread
On Fri, Mar 18, 2016 at 1:33 AM, 施柏安 wrote: Hi John, how do I set this feature on? ceph mds set allow_new_snaps true --yes-i-really-mean-it (John). Thank you. 2016-03-17 21:41 GMT+08:00 Gregory Farnum: O…
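
Putting that command in context, a sketch of the whole workflow (/mnt/cephfs and the directory names are assumptions):

    ceph mds set allow_new_snaps true --yes-i-really-mean-it   # enable the experimental snapshot feature
    mkdir /mnt/cephfs/mydir/.snap/snap-2016-03-18              # take a snapshot of mydir
    rmdir /mnt/cephfs/mydir/.snap/snap-2016-03-18              # delete the snapshot again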

Re: [ceph-users] [cephfs] About feature 'snapshot'

2016-03-19 Thread
… Which makes me wonder if we ought to be hiding the .snaps directory entirely in that case. I haven't previously thought about that, but it *is* a bit weird. -Greg … John … On Thu, Mar 17, 2016 at 10:02 AM, 施柏安 wrote: H…

[ceph-users] [cephfs] About feature 'snapshot'

2016-03-19 Thread
Hi all, I've run into a problem with CephFS snapshots. It seems that the folder '.snap' exists, but 'll -a' doesn't show it. And when I enter that folder and create a folder inside it, it reports an error about using snapshots. Please check: http://imgur.com/elZhQvD
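
What the screenshot presumably shows, sketched from a shell (the mount point /mnt/cephfs is an assumption; the enabling command is the one quoted in the reply above):

    ls -a /mnt/cephfs/dir      # .snap is a virtual directory, so it never appears in listings
    cd /mnt/cephfs/dir/.snap   # ... yet it can be entered directly by name
    mkdir snap1                # rejected until allow_new_snaps has been enabled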