>> 3 different _hosts_ by default.
>>
>> Regards,
>> Burkhard
>>
vms
> >>>> (Reducing min_size from 2 may help; search ceph.com/docs for 'incomplete')
> >>>>
> >>>> Directory for PG 3.1683 is present on OSD 166 and contains ~8GB.
> >>>>
> >>>> We didn't try setting min_size to 1 yet.
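A minimal sketch of the usual sequence for an incomplete PG, for reference; PG 3.1683 is taken from the thread above, while the pool name 'yourpool' is a placeholder:

    ceph pg 3.1683 query | less            # peering state, probed and missing OSDs
    ceph osd pool get yourpool min_size    # currently 2 in this thread
    ceph osd pool set yourpool min_size 1  # temporary, lets the PG peer with fewer replicas
    ceph osd pool set yourpool min_size 2  # restore once the PG has recovered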
efly)
>> > crush map has straw_calc_version=0
>> > monmap e1: 1 mons at {nodeB=155.232.195.4:6789/0}
>> > election epoch 7, quorum 0 nodeB
>> > osdmap e80: 10 osds: 5 up, 5 in; 558 remapped pgs
>> > flags sortbitwise
> What ID did you start your MDS with? Usually people set the ID to the
> hostname. Check it in /var/lib/ceph/mds
>
> John
>
> On Mon, Apr 11, 2016 at 9:44 AM, 施柏安 wrote:
>
>> Hi cephers,
>>
>> I was testing CephFS's HA, so I shut down the active MDS server.
>> Then one of the standby MDS daemons should have taken over.
I tried 'ceph-mds start id=0'. It can't start and
just shows 'ceph-mds stop/waiting'.
Is that a bug, or am I doing the operation wrong?
--
Best regards,
施柏安 Desmond Shih
技術研發部 Technical Development
<http://www.inwinstack.com/>
迎棧科技股份有限公司
│ 886-975-857-982
│ desmond.s@inwinstack
│ 886-2-7738
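Following John's hint above, a rough sketch of checking the MDS daemon ID and starting the standby by that ID; the placeholder <your-mds-id> stands for whatever name the data directory carries, and the Upstart syntax is an assumption based on the 'stop/waiting' output:

    ls /var/lib/ceph/mds/                   # directories are named <cluster>-<id>, e.g. ceph-<your-mds-id>
    sudo start ceph-mds id=<your-mds-id>    # Upstart; or: sudo service ceph-mds start id=<your-mds-id>
    ceph mds stat                           # confirm that a standby has become active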
Is the Ceph cluster stuck in a recovery state?
Did you try the commands "ceph pg repair" or "ceph pg query" to
trace its state?
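For reference, a rough sketch of those commands; the PG id 1.2f3 is only a made-up example, take real ids from 'ceph health detail':

    ceph -s                    # overall recovery progress
    ceph health detail         # lists stuck, degraded and inconsistent PGs by id
    ceph pg 1.2f3 query        # detailed peering and recovery state of one PG
    ceph pg repair 1.2f3       # only for PGs reported inconsistent after scrubbing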
2016-03-24 22:36 GMT+08:00 yang sheng :
> Hi all,
>
> I am testing Ceph right now using 4 servers with 8 OSDs (all OSDs are
> up and in). I have 3 pools in my cluster (image
It seems that you only have two hosts in your crush map, but the default
ruleset separates replicas by host.
If you set size 3 for your pools, then one replica can't be placed,
because you only have two hosts.
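A short sketch of how to confirm and work around that, assuming a replicated pool named 'yourpool' (placeholder); the clean fix is a third host, the alternatives are reducing the replica count or relaxing the failure domain:

    ceph osd crush rule dump               # check the failure domain (type 'host') of the rule
    ceph osd pool get yourpool size        # 3 replicas cannot be mapped onto 2 hosts
    ceph osd pool set yourpool size 2      # match replicas to the number of hosts,
                                           # or add a third host / use a rule with type 'osd'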
2016-03-23 20:17 GMT+08:00 Zhang Qiang :
> And here's the osd tree if it matters:
> Desmond,
> this seems to be a lot of work for 90 OSDs, and typing it by hand
> invites mistakes.
> Every disk change needs extra editing too.
> This weighting was done automatically in former versions.
> Do you know why and where this changed, or was I doing something wrong at
> some point?
>
> Markus
How should I change it?
> I never had to edit anything in this area in former versions of Ceph. Has
> something changed?
> Is any new parameter necessary in ceph.conf while installing?
>
> Thank you,
> Markus
>
> On 21.03.2016 at 10:34, 施柏安 wrote:
>
> It seems that there
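On the weighting question: weights can be changed per OSD from the command line without editing the crush map, and new OSDs normally register themselves with a size-based weight. A minimal sketch; osd.12 and the weight 1.819 (TiB) are only examples:

    ceph osd tree                           # current crush weights per OSD
    ceph osd crush reweight osd.12 1.819    # set the weight of one OSD, no map editing needed

    # ceph.conf options that affect automatic weighting of newly created OSDs:
    # [osd]
    # osd crush update on start = true      # default: the OSD registers itself with a size-based weight
    # osd crush initial weight = 1.819      # optional fixed weight for new OSDs instead of the size-based one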
On Fri, Mar 18, 2016 at 1:33 AM, 施柏安 wrote:
> > Hi John,
> > How to set this feature on?
>
> ceph mds set allow_new_snaps true --yes-i-really-mean-it
>
> John
>
> > Thank you
> >
> > 2016-03-17 21:41 GMT+08:00 Gregory Farnum :
> >>
> >> O
>
> Which makes me wonder if we ought to be hiding the .snaps directory
> entirely in that case. I haven't previously thought about that, but it
> *is* a bit weird.
> -Greg
>
> >
> > John
> >
> > On Thu, Mar 17, 2016 at 10:02 AM, 施柏安 wrote:
> >> H
Hi all,
I've encountered a problem with CephFS snapshots. It seems that the folder
'.snap' exists,
but 'll -a' doesn't show it. And when I enter that folder and create a
folder in it, it reports an error about using snapshots.
Please check: http://imgur.com/elZhQvD
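On the snapshot question: the .snap directory is a virtual entry, so it never shows up in 'll -a'; snapshots are taken by creating a subdirectory inside it, after enabling them as John described above. A minimal sketch; the mount point /mnt/cephfs/mydir and the snapshot name are placeholders:

    ceph mds set allow_new_snaps true --yes-i-really-mean-it
    mkdir /mnt/cephfs/mydir/.snap/snap-20160411    # takes a snapshot of mydir
    ls /mnt/cephfs/mydir/.snap/                    # lists existing snapshots
    rmdir /mnt/cephfs/mydir/.snap/snap-20160411    # removes the snapshot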