It is still "querying" after 6 days now. I have not tried any
scrubbing options; I'll try them just to see. My next idea was to
clobber osd.8, the one it is supposedly "querying".
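For what it's worth, a lighter-weight alternative to clobbering osd.8 (not
something tried in this thread, just a sketch) is to kick the PG back into
peering by briefly marking that OSD down; osd.8 and pg 4.30 are the ids
mentioned elsewhere in the thread:

    ceph osd down 8      # mark osd.8 down in the osdmap; a running daemon will re-assert itself and re-peer
    ceph pg 4.30 query   # then re-check the recovery_state section to see if it is still querying osd.8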
I ran into this problem too. I don't know what I did to fix it.
I tried ceph pg scrub, ceph pg deep-scrub, and ceph osd
scrub. None of them had an immediate effect. In the end, it
finally cleared several days later in the middle of the night. I can't
even say what or when it finally cleared.
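For reference, the archive seems to have stripped the ids out of those
commands; their usual forms, using pg 4.30 and osd.8 from this thread as
stand-ins, are roughly:

    ceph pg scrub 4.30        # light scrub of a single placement group
    ceph pg deep-scrub 4.30   # deep scrub: reads and checksums the object data
    ceph osd scrub 8          # tell osd.8 to scrub all the PGs it holds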
Craig,
Thanks for your response. I have already marked osd.6 as lost, as
you suggested. The problem is that it is still querying osd.8, which is
not lost. I don't know why it is stuck there. It has been querying osd.8
for 4 days now.
I also tried deleting the broken RBD image, but the op ...
On 5/1/14 10:11, kevin horan wrote:
Here is how I got into this state. I have only 6 OSDs total, 3 on one
host (vashti) and 3 on another (zadok). I set the noout flag so I
could reboot zadok. Zadok was down for 2 minutes. When it came up, ceph
began recovering the objects that had not been replicated ...
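For reference, the maintenance sequence described above, plus the later step
of writing off the dead OSD, looks roughly like this (osd.6 and host zadok
are the names from this thread; this is a sketch, not a transcript of what
was actually run):

    ceph osd set noout          # keep CRUSH from rebalancing data while zadok reboots
    # ... reboot zadok and wait for its three OSDs to rejoin ...
    ceph osd unset noout
    # later, once osd.6 was considered unrecoverable:
    ceph osd lost 6 --yes-i-really-mean-it   # declare osd.6's data permanently gone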
I have an issue very similar to this thread:
http://article.gmane.org/gmane.comp.file-systems.ceph.user/3197. I have
19 unfound objects that are part of a VM image that I have already
recovered from backup. If I query pg 4.30 (the one with the unfound
objects), it says it is still querying ...
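For reference, the commands usually involved in inspecting and then clearing
unfound objects once the data has been recovered elsewhere look roughly like
this (pg 4.30 is the PG from this message; the revert/delete choice is an
assumption about intent, not a record of what was run):

    ceph health detail                      # lists the PGs with unfound objects
    ceph pg 4.30 query                      # recovery_state shows which OSDs are still being queried
    ceph pg 4.30 list_missing               # enumerate the unfound objects themselves
    ceph pg 4.30 mark_unfound_lost revert   # or 'delete' if the image was already restored from backup

Note that the last step may be refused while the PG is still querying a live
OSD that might hold the missing copies, which appears to be exactly the state
described in this thread.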