On Sat, May 3, 2014 at 4:01 AM, Indra Pramana wrote:
> Sorry forgot to cc the list.
>
> On 3 May 2014 08:00, "Indra Pramana" wrote:
>>
>> Hi Andrey,
>>
>> I actually wanted to try this (instead of remove and readd OSD) to avoid
>> remapping of PGs to other OSDs and the unnecessary I/O load.
>>
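For context, a minimal sketch of the approach being referred to ("this" as opposed to remove-and-readd), assuming an example OSD id of 12; the commands are standard Ceph CLI, but the id is only illustrative:

    # Keeping the OSD's CRUSH entry and only stopping the daemon, with the
    # "noout" flag set, means the down OSD is never marked out: CRUSH keeps
    # the existing PG mappings and no backfill to other OSDs is triggered.
    ceph osd set noout
    sudo stop ceph-osd id=12       # example id only

    # Removing and re-adding the OSD instead (osd out / crush remove / osd rm)
    # changes the CRUSH map, so PGs are remapped and backfill I/O starts.

    # While the OSD is down, "ceph -s" should show degraded PGs but no
    # remapped/backfilling ones.
    ceph -s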
>>
Hi Ceph,
This month's Ceph User Committee meeting was about:
- Elections
- RedHat and Inktank
- CephFS
- Meetings
You will find an executive summary at:
https://wiki.ceph.com/Community/Meetings/Ceph_User_Committee_meeting_2014-05-02
The full log of the IRC conversation is also included.
This is all on firefly rc1 on CentOS 6
I had an OSD getting overfull and, misinterpreting the directions, I downed
it and then manually removed PG directories from the OSD mount. On restart,
and after a good deal of rebalancing (setting OSD weights as I should have
done originally), I'm now at
cluster de
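As an aside, a sketch of the usual way to relieve an overfull OSD by adjusting weights rather than removing PG directories; the OSD id (3) and the threshold/weight values below are only examples:

    # Check per-OSD weights and overall utilisation.
    ceph osd tree
    ceph df

    # Temporarily lower the override weight of the overfull OSD so some of
    # its PGs move elsewhere (reweight takes a value between 0 and 1).
    ceph osd reweight 3 0.8

    # Or let Ceph reweight any OSD above 120% of the average utilisation.
    ceph osd reweight-by-utilization 120

    # Watch the rebalancing progress.
    ceph -w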
Craig,
Thanks for your response. I have already marked osd.6 as lost, as
you suggested. The problem is that it is still querying osd.8, which is
not lost. I don't know why it is stuck there; it has been querying osd.8
for 4 days now.
I also tried deleting the broken RBD image but the op
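For reference, a sketch of the commands commonly used to inspect a PG that keeps waiting on an OSD after another OSD has been marked lost; the pg id (2.5) and osd id (6) below are placeholders, not values taken from this cluster:

    # See which PGs are stuck and why.
    ceph health detail
    ceph pg dump_stuck inactive

    # Query one stuck PG; the recovery_state section ("peering_blocked_by",
    # "down_osds_we_would_probe") shows which OSD peering is still waiting on.
    ceph pg 2.5 query

    # Marking an unrecoverable OSD lost tells peering to stop waiting for it.
    ceph osd lost 6 --yes-i-really-mean-it

    # If objects remain unfound afterwards, they can be reverted or given up.
    ceph pg 2.5 mark_unfound_lost revert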
Dear all,
I would like to share that I tried this yesterday, and it doesn't work:
> - ceph osd set noout
> - sudo stop ceph-osd id=12
> - Replace the drive, and once done:
> - sudo start ceph-osd id=12
> - ceph osd unset noout
Once the drive is replaced, we need to ceph-deploy zap and prepare the new drive,
an
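Putting the above together, a sketch of the full replacement sequence including the zap/prepare step, assuming Upstart-managed OSDs; the OSD id (12), host (node2) and device (sdb) are example values only:

    ceph osd set noout
    sudo stop ceph-osd id=12

    # After physically swapping the drive, the new disk must be
    # re-initialised before an OSD can run on it.
    ceph-deploy disk zap node2:sdb
    ceph-deploy osd prepare node2:sdb
    # On most setups udev activates the prepared OSD automatically;
    # otherwise: ceph-deploy osd activate node2:sdb1

    ceph osd unset noout
    ceph -s        # wait for PGs to return to active+clean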